
After two milestones of a lightweight Java library providing one API across multiple AI providers (1.0-M1: One API, any AI and 1.0-M2: Real-time AI, Your Way), the library today graduates to its first stable release. And it comes with a new name: OmniHai.
Why "OmniHai"?
The rename from OmniAI to OmniHai was necessary because "OmniAI" was already used by several other products, making it difficult to discover, search for, and distinguish. The new name keeps "AI" clearly audible and visible: "Hai" sounds like "AI", while being more memorable, more brandable, and actually findable on search engines. Also, "Hai" is Japanese for "yes", which felt fitting: one yes to any AI provider.
The Maven coordinates are now:
<dependency>
    <groupId>org.omnifaces</groupId>
    <artifactId>omnihai</artifactId>
    <version>1.0</version>
</dependency>
What's New in 1.0
Since the second milestone last week, five major features have been added: structured outputs, file attachments, conversation memory, proofreading, and MicroProfile Config support.
Structured Outputs
This is probably the most impactful addition. Instead of parsing AI responses as free-text strings, you can now get typed Java objects directly:
record ProductReview(String sentiment, int rating, List<String> pros, List<String> cons) {}
ProductReview review = service.chat("Analyze this review: " + reviewText, ProductReview.class);
Under the hood, OmniHai generates a JSON schema from your record (or bean) class, instructs the AI to return conforming JSON, and deserializes the response back into your type. The JsonSchemaHelper supports primitive types, strings, enums, temporals, collections, arrays, maps, nested types, and Optional fields. If necessary, you can also take manual control:
JsonObject schema = JsonSchemaHelper.buildJsonSchema(ProductReview.class);
String json = service.chat("Analyze this review: " + reviewText,
        ChatOptions.newBuilder().jsonSchema(schema).build());
ProductReview review = JsonSchemaHelper.fromJson(json, ProductReview.class);
The content moderation internals were also refactored to use structured outputs, making ModerationResult parsing more robust across providers.
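To give an idea of what such a generated schema looks like, here is a sketch of the kind of JSON schema that could be produced for the ProductReview record above. This follows common JSON Schema conventions; the exact keys and layout OmniHai emits may differ:

```json
{
  "type": "object",
  "properties": {
    "sentiment": { "type": "string" },
    "rating": { "type": "integer" },
    "pros": { "type": "array", "items": { "type": "string" } },
    "cons": { "type": "array", "items": { "type": "string" } }
  },
  "required": ["sentiment", "rating", "pros", "cons"]
}
```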
File Attachments
Chat input now supports attaching any file: images, PDFs, Word documents, audio, and more:
byte[] document = Files.readAllBytes(Path.of("report.pdf"));
byte[] image = Files.readAllBytes(Path.of("chart.png"));
ChatInput input = ChatInput.newBuilder()
    .message("Compare these files")
    .attach(document, image)
    .build();
String response = service.chat(input);
The ChatInput.Builder#images(byte[]...) method introduced in M2 has been replaced by the more general attach(byte[]...) method, which handles any file type the provider supports.
Conversation Memory
Multi-turn conversations are now a first-class feature. Enable memory on ChatOptions and OmniHai tracks the full conversation history for you:
ChatOptions options = ChatOptions.newBuilder()
    .systemPrompt("You are a helpful assistant.")
    .withMemory()
    .build();
String response1 = service.chat("My name is Bob.", options);
String response2 = service.chat("What is my name?", options); // AI remembers: "Bob"
// Access conversation history programmatically
List<Message> history = options.getHistory();
The key design decision here is that history lives in ChatOptions, not in the service. There is no server-side session state, no memory leaks, no lifecycle management. The caller owns the conversation. This aligns with the library's philosophy of being a utility, not a framework.
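Because the caller owns the history, scoping a conversation is simply a matter of scoping the ChatOptions instance. A minimal sketch of per-user conversations in a web application, using only the API shown above; the map keyed by user ID is my own illustration, not part of OmniHai:

```java
// One ChatOptions per user = one isolated conversation per user.
private final Map<String, ChatOptions> conversations = new ConcurrentHashMap<>();

String chatFor(String userId, String message) {
    ChatOptions options = conversations.computeIfAbsent(userId, id ->
        ChatOptions.newBuilder()
            .systemPrompt("You are a helpful assistant.")
            .withMemory()
            .build());
    return service.chat(message, options); // history accumulates in this user's options
}

// Ending a conversation is just dropping the options instance; nothing to clean up server-side.
void endConversation(String userId) {
    conversations.remove(userId);
}
```

Because there is no state in the service itself, this pattern works equally well with a request-scoped, session-scoped, or application-scoped AIService.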
Proofreading
A small but useful addition: AI-powered grammar and spelling correction:
String corrected = service.proofread(text);
AIService#proofread(String) uses a deterministic temperature to ensure consistent, reliable corrections while preserving the original meaning, tone, and style. It is of course also available as proofreadAsync(String).
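Assuming proofreadAsync(String) returns a CompletableFuture&lt;String&gt; (the conventional shape for such methods; the actual return type is not shown in this post), non-blocking usage would look like:

```java
service.proofreadAsync(text)
    .thenAccept(corrected -> view.update(corrected))
    .exceptionally(e -> { logger.warn("Proofreading failed", e); return null; });
```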
MicroProfile Config Support
Alongside the existing Jakarta EL expressions (#{...} and ${...}), the @AI qualifier now also resolves MicroProfile Config expressions (${config:...}):
@Inject
@AI(provider = AIProvider.OPENAI, apiKey = "${config:openai.api-key}")
private AIService gpt;
This makes OmniHai a natural fit not only for Jakarta EE runtimes, but also for MicroProfile runtimes such as Quarkus. On MicroProfile, secrets can live in microprofile-config.properties, environment variables, or any custom ConfigSource.
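For example, on a MicroProfile runtime the key referenced in the snippet above could come from a microprofile-config.properties entry; the key name openai.api-key is just the one used in the example, and any ConfigSource (environment variable, system property, custom source) works equally well:

```properties
# src/main/resources/META-INF/microprofile-config.properties
openai.api-key=sk-...
```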
Under the Hood
Beyond the headline features, the 1.0 release includes:
- DefaultAITextHandler and DefaultAIImageHandler replacing the previous abstract base classes, reducing boilerplate for custom providers
- Improved Attachment model decoupled from OpenAI-specific assumptions
- Comprehensive package-info Javadoc for all packages
- Extensive unit tests (472 in total, many generated with the help of my assistant Claude Code) covering models, helpers, MIME detection, and expression resolvers
- More integration tests (165 in total), covering all text and image handling features of all 10 AI providers
- Bug fixes and hardening based on those tests
Size
The library grew from about 70 KB in M1 to about 110 KB in M2 to about 155 KB in 1.0 final. Still at least 35x smaller than LangChain4J per provider module. The dependency story remains the same: only Jakarta JSON-P is required; CDI, EL, and MP Config are optional.
The Road Here
Three releases in roughly a month. M1 established the core API with 8 providers. M2 added chat streaming and custom handlers. This final release fills the remaining gaps for a production-ready library: structured outputs for type-safe responses, file attachments for multi-modal input, conversation memory for multi-turn interactions, and MicroProfile compatibility.
OmniHai is a sharp chef's knife: it does a few things very well. If you need RAG pipelines, agent frameworks, or vector stores, look at LangChain4J or Spring AI. If you need multi-provider chat, text analysis, and content moderation in Jakarta EE or MicroProfile with minimal dependencies, OmniHai is arguably the better choice.
