Interacting with AI providers in Java typically means choosing between heavyweight frameworks or writing repetitive boilerplate for each provider's API. After playing around with the idea for months, I'm proud to finally introduce OmniAI as a third option: a lightweight utility library that provides a unified API across multiple providers with minimal dependencies (just JSON-P, and optionally CDI+EL).
The major goal is to simplify interacting with a range of AI providers by providing a single, consistent API.
@Inject
@AI(apiKey = "#{config.openaiApiKey}")
private AIService gpt;

public void chat() {
    // Synchronous
    String response = gpt.chat("Hello!");

    // Asynchronous
    CompletableFuture<String> futureResponse = gpt.chatAsync("Hello!");
    // ...
}
As you can see, you can simply inject and configure it using the @AI qualifier. The AIProvider offers several AI providers, each having its own AIService implementation. Currently: OpenAI GPT (default), Anthropic Claude, Google AI, xAI Grok, Meta Llama, Azure OpenAI, OpenRouter and Ollama (plus a Custom option). The AIService class provides a couple of handy utility methods to interact with the AI. The initial version has the following methods:
- chat - just plain chat, optionally with a prompt/context (e.g. "Helpful Jakarta EE specialist")
- summarize - summarize a long text within a maximum amount of words
- extractKeyPoints - extract a specified number of key points from a long text
- detectLanguage - detect the language of the given text as an ISO 639-1 language code
- translate - translate text from a source language to a target language while preserving any markup and placeholders
- moderateContent - moderate text to detect violations by category
- analyzeImage - analyze an image and generate a description based on a prompt
- generateAltText - generate alt text for an image, suitable for accessibility purposes
- generateImage - generate an image based on a prompt
Each method also has an async variant available, returning a CompletableFuture.
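Because the async variants return a CompletableFuture, calls can be chained with the usual composition operators instead of blocking. Here's a minimal, runnable sketch of that pattern; note that it uses a hypothetical single-method stand-in interface rather than the real AIService, so no OmniAI signatures are assumed:

```java
import java.util.concurrent.CompletableFuture;

public class AsyncChatSketch {

    // Hypothetical stand-in for AIService's async style; NOT the real OmniAI interface.
    interface ChatService {
        CompletableFuture<String> chatAsync(String message);
    }

    // Ask a question, then feed the answer into a follow-up request,
    // without blocking any thread in between.
    static CompletableFuture<String> chatThenFollowUp(ChatService ai, String question) {
        return ai.chatAsync(question)
                 .thenCompose(answer -> ai.chatAsync("Summarize: " + answer));
    }

    public static void main(String[] args) {
        // Echoing stub so the sketch runs without a real provider or API key.
        ChatService stub = message -> CompletableFuture.completedFuture("echo:" + message);
        System.out.println(chatThenFollowUp(stub, "hi").join()); // prints "echo:Summarize: echo:hi"
    }
}
```

The same thenCompose/thenAccept chaining works against any CompletableFuture-returning method, which is exactly what makes the async variants pleasant to combine.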
Here are a couple more @AI configuration examples:
// With hardcoded key (bad idea but ok for local testing ;) )
@Inject
@AI(provider = ANTHROPIC, apiKey = "sk-ant-api01-A1Bcdefg2HiJkL3-MNOpqrS45TUvW67x8yZ")
private AIService claude;
// With EL for dynamic configuration
@Inject
@AI(provider = OPENAI, apiKey = "#{initParam['com.example.OPENAI_KEY']}")
private AIService gpt;
// With default provider (OpenAI) and custom system prompt
@Inject
@AI(apiKey = "#{keys.openai}", prompt = "You are a helpful assistant specialized in Jakarta EE.")
private AIService jakartaExpert;
// With different model than default
@Inject
@AI(provider = XAI, apiKey = "#{keys.xai}", model = "grok-2-image-1212")
private AIService imageGenerator;
// With custom endpoint
@Inject
@AI(provider = OLLAMA, apiKey = "#{keys.ollama}", endpoint = "http://localhost:12345")
private AIService localAi;
Without CDI, here's how you would do it:
// Provider and key
AIService claude = AIConfig.of(ANTHROPIC, yourAnthropicApiKey).createService();
// Custom model
AIService imageGenerator = AIConfig.of(XAI, yourXaiApiKey, "grok-2-image-1212").createService();
Installation
The first milestone is already available on Maven Central:
<dependency>
    <groupId>org.omnifaces</groupId>
    <artifactId>omniai</artifactId>
    <version>1.0-M1</version>
</dependency>
It only requires Jakarta JSON-P and optionally Jakarta CDI and Jakarta EL as dependencies, all of which are readily available in any Jakarta EE compatible runtime.
In case you're using a non-Jakarta EE runtime, such as Tomcat, you'll have to install a JSON-P implementation yourself, plus CDI and EL implementations if you want @AI injection.
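For example, on Tomcat you could pull in Eclipse Parsson (JSON-P), Weld (CDI) and Expressly (EL). The coordinates below are a sketch; double-check the current versions on Maven Central before copying:

```xml
<!-- JSON-P implementation (required) -->
<dependency>
    <groupId>org.eclipse.parsson</groupId>
    <artifactId>parsson</artifactId>
    <version>1.1.7</version>
</dependency>
<!-- CDI implementation (only needed for @AI injection) -->
<dependency>
    <groupId>org.jboss.weld.servlet</groupId>
    <artifactId>weld-servlet-shaded</artifactId>
    <version>5.1.3.Final</version>
</dependency>
<!-- EL implementation (only needed for EL expressions in @AI) -->
<dependency>
    <groupId>org.glassfish.expressly</groupId>
    <artifactId>expressly</artifactId>
    <version>5.0.0</version>
</dependency>
```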
Demo
Here's a Jakarta Faces based "Chat with AI!" demo.
The session scoped backing bean:
src/main/java/com/example/Chat.java
package com.example;
import java.io.Serializable;
import java.util.ArrayList;
import java.util.List;
import jakarta.enterprise.context.SessionScoped;
import jakarta.faces.push.Push;
import jakarta.faces.push.PushContext;
import jakarta.inject.Inject;
import jakarta.inject.Named;
import jakarta.json.Json;
import org.omnifaces.ai.AIService;
import org.omnifaces.ai.cdi.AI;
@Named
@SessionScoped
public class Chat implements Serializable {
private static final long serialVersionUID = 1L;
public record Message(Type type, String content) implements Serializable {
public enum Type {
sent, received;
}
public String toJson() {
return Json.createObjectBuilder().add("type", type.name()).add("content", content).build().toString();
}
};
@Inject @AI(apiKey = "your-openai-api-key") // Get a free one here: https://platform.openai.com/api-keys
private AIService ai;
@Inject @Push
private PushContext push;
private String message;
private List<Message> messages = new ArrayList<>();
public void send() {
var sent = new Message(Message.Type.sent, message);
messages.add(sent);
push.send(sent.toJson());
ai.chatAsync(message).thenAccept(response -> {
var received = new Message(Message.Type.received, response);
messages.add(received);
push.send(received.toJson());
});
message = null;
}
public String getMessage() {
return message;
}
public void setMessage(String message) {
this.message = message;
}
public List<Message> getMessages() {
return messages;
}
}
The simple XHTML:
src/main/webapp/chat.xhtml
<!DOCTYPE html>
<html lang="en"
xmlns:f="jakarta.faces.core"
xmlns:h="jakarta.faces.html"
xmlns:ui="jakarta.faces.facelets"
xmlns:pt="jakarta.faces.passthrough"
>
<h:head>
<title>Chat with AI!</title>
<h:outputStylesheet name="chat.css" />
<h:outputScript name="chat.js" />
</h:head>
<h:body>
<h:panelGroup id="chat" layout="block">
<ui:repeat value="#{chat.messages}" var="message">
<div class="message #{message.type()}">#{message.content()}</div>
</ui:repeat>
<script>chat.scrollToBottom();</script>
</h:panelGroup>
<h:form id="form">
<h:inputTextarea id="message" value="#{chat.message}" required="true" pt:placeholder="Ask anything" pt:autofocus="true" />
<h:commandButton id="send" value="Send" action="#{chat.send}">
<f:ajax execute="@form" render="message" onevent="chat.onsend" />
</h:commandButton>
</h:form>
<h:form id="push">
<f:websocket channel="push" onmessage="chat.onmessage" />
</h:form>
</h:body>
</html>
The quick'n'dirty CSS:
src/main/webapp/resources/chat.css
body {
font-family: sans-serif;
width: 500px;
margin: 0 auto;
}
#form {
position: absolute; bottom: 0;
display: flex; gap: 1em;
width: calc(500px - 1em);
margin: 1em 0; padding: 1em;
border-radius: 1em; box-shadow: 0 0 1em 0 #aaa;
}
#form textarea {
flex: 1;
height: 4em;
padding: .75em;
resize: none;
}
#form textarea, #form input {
border: 1px solid #ccc; border-radius: .75em;
}
#chat {
display: flex; flex-direction: column; gap: 1em;
max-height: calc(100vh - 10em); overflow: auto;
padding: 1em;
}
#chat .message {
width: 300px;
padding: 1em;
border-radius: 1em;
white-space: pre-wrap;
}
#chat .message.sent {
align-self: flex-end;
border: 1px solid #aca;
}
#chat .message.received {
border: 1px solid #aac;
}
#chat .progress {
min-height: 1em;
}
#chat .progress::after {
content: "";
animation: dots 1.5s steps(4, end) infinite;
}
@keyframes dots {
0% { content: ""; }
25% { content: "."; }
50% { content: ".."; }
75% { content: "..."; }
}
The jQuery-less JS:
src/main/webapp/resources/chat.js
window.chat = {
onmessage: (json) => {
chat.hideProgress();
const message = JSON.parse(json);
document.getElementById("chat").insertAdjacentHTML("beforeend", `<div class="message ${message.type}">${message.content}</div>`);
chat.scrollToBottom();
if (message.type == "sent") {
chat.showProgress();
}
},
scrollToBottom: () => {
const chat = document.getElementById("chat");
chat.scrollTo({
top: chat.scrollHeight,
behavior: "smooth"
});
},
onsend: (event) => {
if (event.status == "success") {
document.getElementById("form:message").focus();
}
},
showProgress: () => {
if (!document.querySelector(".progress")) {
document.getElementById("chat").insertAdjacentHTML("beforeend", '<div class="progress"></div>');
chat.scrollToBottom();
}
},
hideProgress: () => {
document.querySelectorAll(".progress").forEach(el => el.remove());
}
};
Don't forget to create an (empty) src/main/webapp/WEB-INF/beans.xml file and to enable the websocket endpoint in web.xml:
src/main/webapp/WEB-INF/web.xml
<context-param>
    <param-name>jakarta.faces.ENABLE_WEBSOCKET_ENDPOINT</param-name>
    <param-value>true</param-value>
</context-param>
Here's what it looks like (xAI was used in this conversation, hence the real-time information):
Was AI involved in development?
Absolutely! I used Claude Code as a consultant and assistant in the terminal. It's a great tool for consulting and generating boilerplate code, especially for filling in the Javadocs and creating unit tests. It has almost completely replaced Google and Stack Overflow for me when it comes to investigating things. Claude came up with real-world use cases to kick off the AIService, and generated example implementations for OpenAI, Anthropic, Azure, Bedrock and Ollama. I threw away the Bedrock (AWS) one because it was a horrible mess full of AWS-specific stuff which I couldn't really verify/test (you'd better pick the AWS SDK for that, and I didn't like having it as a dependency).
I had to refactor a lot of duplicate code in these example implementations and pull a new BaseAIService out of them (true, I could have instructed Claude beforehand to please respect the DRY and KISS principles). Oh yes, I also asked Claude to please replace all the hardcoded JSON string templates and hardcoded JSON parsing (!!) with clean JSON-P code. Then I architected the AIProvider, AIConfig, @AI and all those helpful exception subclasses. And I added a couple more providers and implementations, most of which simply extend OpenAIService. Only the Google one is new. Apart from that, I used ChatGPT to review the OpenAIService implementation and Gemini to review the GoogleAIService implementation; both were also helpful in validating them. I started this project 5 days ago, by the way.
I estimate that about 20% of the OmniAI code is raw AI-generated; the rest is polished/refactored/refined/expanded. The Javadocs are about 70% generated, with the rest rewritten/clarified. The project's README at GitHub was about 90% generated. I explicitly asked Claude for the comparisons with LangChain4J, Spring AI and Jakarta Agentic, and they are very spot on! In short: OmniAI is intentionally minimal, a sharp chef's knife rather than a full kitchen. It's ideal when you need multi-provider chat, text analysis, or content moderation on Jakarta EE without the dependency overhead.
Give it a spin and let me know what you think!
As you might have noticed, it's released as an M1 version (milestone/beta). That's because I haven't been able to fully test all AI providers yet, especially image generation (which still isn't free these days). Only the Claude, xAI and OpenRouter ones are 100% tested (including image generation with xAI). For OpenAI and Google, only chat, moderation and image analysis are tested, so image generation is pending. Azure and Meta are not yet tested (but they basically extend from OpenAI, which is already tested). As for Ollama, I'll look closer at it this week.
If you run into any issues or have ideas for improvements, feel free to open an issue on GitHub. Pull requests are of course also welcome, whether it's a bug fix, a new provider implementation, or just clearer documentation.
In the long run I'll of course keep enhancing it. For example, I have in mind to pull the text analysis, text translation, content moderation, etc. strategies out of the BaseAIService so that they can more easily be decorated individually via e.g. @AI(..., textAnalyzer = MyTextAnalyzer.class), just in case you wish to tweak only a small part of the implementation rather than extending a whole AIService implementation. I'd also like to add @AI(serviceClass = MyAIServiceImpl.class, ...) as an alternative to AIProvider.

