This mug I've got here, it just holds
coffee. But have you ever seen those
mugs where when you pour in a hot
liquid, it changes color to indicate
that there's something hot inside? It's
smart, adaptive, and intelligent. In the
age of AI, we have grown to see adaptive
intelligence everywhere. It's
more important than ever that we understand how we can create applications that integrate AI. Hello everyone, I'm Ayan, a cloud advocate here at Microsoft. Joining me today is Julien, who is going to be talking about and showing us how LangChain4j brings some of that intelligence to Java applications. Julien, I'm so excited for this session. Over to you.
>> Thank you so much for this introduction. Like you, I do believe that Java is a great platform to build AI applications today, with some great tooling that is already available like LangChain4j or Spring AI. I'm Julien Dubois. I'm working with Ayan in the Java developer advocacy team at Microsoft, and I'm also one of the core contributors to LangChain4j, where I implemented the official OpenAI Java SDK integration. That's what we're going to use in this video today. To do that, I've worked both with the OpenAI team and the LangChain4j team, and we're going to see how easy it is to use both tools.
Today, we're going to do a small demo in
four parts. So, we're going to set
everything up, we're going to configure LangChain4j, we're going to run it, and we're going to test it. At the end of this video, you should have a good understanding of how LangChain4j works, so you'll be ready for the next video, where we'll do something a little bit more complicated but also more interesting. So let's get started. When I want to do a very
simple Java project, usually I go to
start.spring.io. That's what we're going
to do right now. So here it is. I've
just selected Maven because I want to add LangChain4j, and I want to show you how to add some dependencies using Maven, as it's the most commonly used tool for dependency management. And I've selected Java 24 because I like to have the latest version of everything. I'm not adding any dependencies, because we're going to do something extremely simple, so we don't need anything yet. I've created the project and downloaded it. Let's open it up. Here it is. I'm
opening it with IntelliJ IDEA. It will work
the same with any IDE like VS Code, of
course. Let's run the project to see if
everything is fine. Here it is. Again,
it's extremely simple and it's not going
to do anything. It's just going to run
the Java application and stop because
there's nothing to do. Let's add something a little bit more interesting. For that, I'm going to use GitHub Copilot. Let's go to agent mode. Let's use Claude, because I find it better. And let's ask it: add a Spring CommandLineRunner to ask a question to the user.
Of course, I could have coded it myself, but it's much faster to ask GitHub Copilot to code it for me. This is going to update my Spring Boot project and add some simple Java code to ask some information of the end user. Here it is. Let's accept this. So let's run it again and see how it works.
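The code Copilot generates will look something like this minimal sketch. It is written here in plain Java without the Spring Boot wiring, and the class and method names are illustrative, not necessarily what Copilot produces:

```java
import java.util.Scanner;

// Plain-Java sketch of the generated question-and-answer code.
// In the real project this logic lives in a Spring CommandLineRunner bean.
class GreetingRunner {
    // Pure helper so the greeting logic stays easy to test
    static String greet(String name) {
        return "Hello, " + name + "!";
    }

    public static void main(String[] args) {
        Scanner scanner = new Scanner(System.in);
        System.out.print("What is your name? ");
        // Guard against empty stdin so the demo never blocks or throws
        String name = scanner.hasNextLine() ? scanner.nextLine() : "world";
        System.out.println(greet(name));
    }
}
```

Keeping the greeting in its own method makes it trivial to swap the static response for an AI-generated one later.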
Now it's asking me a question: what is your name? So my name is Julien, and it's saying hello, Julien. So we've got a question and an answer. We're not using AI yet, so it's very basic, very simple.
If we want to do something a little bit more interesting, of course we want to add AI to our application. So let's get started and add LangChain4j. For that, I'm going to go to the main LangChain4j documentation. You can do the same, of course. The advice here is to add the dependency that you need for your application. I'm going to do
something a little bit more complex. If we go down here, we're going to use a bill of materials. That's a Maven configuration. The interesting thing here is that LangChain4j is separated into many different modules. For something as simple as today's demo, you only need one, but in a realistic application you will want several modules. So you want dependency management here, so that all your modules automatically get the right dependency versions from this bill of materials. I'm going to add it in my pom.xml right here, and I'm going to add our
dependencies just above. So we've got integrations in LangChain4j with many large language models, for example GitHub Models, Mistral, etc. I want to use the official OpenAI SDK. There is also an unofficial SDK, which might work better if you use Quarkus or Spring because they use the underlying HTTP client, but I'd rather use the official one from OpenAI, because you've got the latest version of everything, which I find is better in the long term. The dependency we need to use is this one. Let's copy and paste it.
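In the pom.xml, the result looks roughly like this sketch (the version number is illustrative; take the current one from the LangChain4j documentation):

```xml
<dependencyManagement>
    <dependencies>
        <!-- LangChain4j bill of materials: pins versions for all modules -->
        <dependency>
            <groupId>dev.langchain4j</groupId>
            <artifactId>langchain4j-bom</artifactId>
            <version>1.0.1</version>
            <type>pom</type>
            <scope>import</scope>
        </dependency>
    </dependencies>
</dependencyManagement>

<dependencies>
    <!-- Official OpenAI SDK integration; no version needed thanks to the BOM -->
    <dependency>
        <groupId>dev.langchain4j</groupId>
        <artifactId>langchain4j-open-ai-official</artifactId>
    </dependency>
</dependencies>
```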
And as we just used the bill of materials, we don't need to add the version. That's why I wanted to do that earlier; it's a lot easier to use now. So, LangChain4j is integrated into my project. I'm just forcing Maven to reload it to be sure that everything is fine. And now I can start to configure it, and then of course use it. Let's go back to the documentation for the configuration.
Here is how it is supposed to be configured. So let's copy and paste this and have a look at how it works. I'm going to configure it right here. We're going to use a chat model. That's an interface; let me import it. The ChatModel interface comes from LangChain4j, so all implementations will use the same interface. That's one of the main reasons to use LangChain4j: you can change implementations very easily, as you will only rely on the interface in your code. So I've got that interface that allows me to chat with any LLM. Then I need an implementation.
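The interface-versus-implementation split he describes can be sketched in plain Java. These are hypothetical types for illustration, not the real LangChain4j classes:

```java
// Sketch of the design LangChain4j uses: application code depends only on
// a chat interface, so concrete providers can be swapped freely.
interface SimpleChatModel {
    String chat(String prompt);
}

// One "provider": a canned echo implementation standing in for a real LLM client.
class EchoChatModel implements SimpleChatModel {
    public String chat(String prompt) {
        return "echo: " + prompt;
    }
}

// Another "provider": swapping it in does not touch the calling code at all.
class UppercaseChatModel implements SimpleChatModel {
    public String chat(String prompt) {
        return prompt.toUpperCase();
    }
}

class InterfaceDemo {
    // Application code is written against the interface only.
    static String askForPoem(SimpleChatModel model, String name) {
        return model.chat("Please write a nice poem for a person called " + name);
    }

    public static void main(String[] args) {
        System.out.println(askForPoem(new EchoChatModel(), "Java"));
        System.out.println(askForPoem(new UppercaseChatModel(), "Java"));
    }
}
```

This is exactly why changing LLM providers later only means building a different implementation, never rewriting the code that sends prompts.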
In this case, we're going to use the official OpenAI SDK implementation. I'm going to import it. Let's just have a look. As you can see, it's a bit more complex. It's a real implementation that connects to OpenAI, gets the answers back, and parses everything. So it's quite complex, and as we can see, it requires three parameters. There are more parameters if you want, but there are three main ones. The first one is the URL, then the key, and then the model that you want to use. Let's get those parameters and configure them. For that, I'm going to my Azure AI Foundry instance. As you can see, I've got several models which are already deployed. I'm going to use GPT-5 mini.
There is some documentation here to help you, typically with Java. Here it is: you've got different SDKs, like the OpenAI SDK that we are using; that's what LangChain4j uses underneath. So the first thing we want is the URL. We're going to copy that, and we need only the base URL, as the name suggests here. So let me copy this and only use the base URL, which is this one. The second thing we need is a key. So the key is here. Of course, in a real application, you shouldn't hardcode the key; I'm only doing this for the demonstration, and I will rotate my key afterwards, so it's useless. And the last thing that we want to use is the model name. So we're using GPT-5 mini.
There are also some constants for that, but we can just type it; it's extremely easy. So with that configuration, my model is able to access GPT-5 mini on Azure, and we're going to be able to query it and ask it some questions. Let's do something with the answer, of course. After the question "What is your name?", we're going to say: "Please write a nice poem for a person called", and here's the name. That's what you will typically call a prompt when you use AI. And we're going to send that prompt to our model, calling model.chat() with it. The answer to that chat is going to be a string, which is the answer from the LLM.
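Put together, the configuration and call look roughly like this sketch. The endpoint URL is a placeholder, the key is read from an environment variable instead of being hardcoded, and the exact builder method names may vary slightly between LangChain4j versions, so check the documentation for yours:

```java
import dev.langchain4j.model.chat.ChatModel;
import dev.langchain4j.model.openaiofficial.OpenAiOfficialChatModel;

class PoemDemo {
    public static void main(String[] args) {
        // Build the implementation with the three main parameters:
        // base URL, API key, and model name.
        ChatModel model = OpenAiOfficialChatModel.builder()
                .baseUrl("https://<your-resource>.openai.azure.com/openai/v1") // placeholder endpoint
                .apiKey(System.getenv("AZURE_OPENAI_KEY")) // never hardcode keys
                .modelName("gpt-5-mini")
                .build();

        // The prompt goes in; the LLM's answer comes back as a String.
        String answer = model.chat("Please write a nice poem for a person called Java");
        System.out.println(answer);
    }
}
```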
Let's run this again. It's asking me my name again. Let's put something a little bit more fun: my name is Java. And let's see if we can have a nice poem about Java from GPT-5 mini. Here it is: "Java, you arrive like morning, warm and steady." Blah blah blah. It's talking about coffee, of course, because Java in English also means coffee. So here's how you can easily add support for AI to your application. That's only generative AI with text; if you want to use images or audio, it's basically the same thing, just with a different implementation. If you want to use other LLMs, it's also the same idea: you just change the implementation, and you've got an easy-to-use interface to query it and get the answers. In the next video, we're going to do something a little bit more complex: we'll have different LLMs talking together and working together to do something more complex than what we've seen here. So, see you in the next video. Thank you.
>> Hey Julien, thank you so much for showing us how we can easily integrate AI into our own applications. If you also want to learn and take your first steps to integrate AI, you can go to aka.ms/JavaAndAIForBeginners to find resources. It's also linked in the description of this video. We will see you in the next episode.
In this episode, Ayan Gupta is joined by Julien Dubois from Microsoft's Java Developer Advocacy team, a core contributor to LangChain4j, who demonstrates how to build intelligent AI applications with Java. Just like a smart coffee mug that changes color when it senses heat, modern applications need adaptive intelligence, and Julien shows you exactly how to add it.

This is the first of two sessions focused on building intelligent AI apps. Julien walks you through the fundamentals of integrating LangChain4j with the official OpenAI Java SDK, showing you how to transform a simple Spring Boot application into an AI-powered poem generator. You'll learn why LangChain4j is the framework of choice for Java AI development: its unified interface means you can easily swap between different LLM providers without rewriting your code. Julien demonstrates how to add LangChain4j dependencies using Maven's bill of materials (BOM) for clean dependency management across multiple modules.

You'll configure your first chat model using Azure AI Foundry, connecting to GPT-5 mini through the official OpenAI SDK implementation. Julien explains the three key parameters you need: the base URL, your API key, and the model name. The beauty of LangChain4j shines through as you see how the same ChatModel interface works across any LLM implementation, whether you're using OpenAI, Mistral, Llama, or GitHub Models.

Resources: aka.ms/JavaAndAIForBeginners

0:00 - Introduction: Adding Intelligence to Applications
0:46 - What Is LangChain4j and Why Use It?
1:18 - Creating a New Project with start.spring.io
2:12 - Setting Up Maven with Java 24
2:32 - Running the Basic Application
2:52 - Using GitHub Copilot to Add Command Line Runner
3:29 - Testing User Input Without AI
3:47 - Adding LangChain4j to the Project
4:28 - Understanding Maven Bill of Materials
5:17 - Choosing the OpenAI SDK Integration
5:49 - Configuring the Chat Model
6:27 - Connecting to Azure AI Foundry
7:11 - Getting API Keys and Endpoints
8:27 - Setting Model Parameters
8:47 - Creating the Poem Prompt
9:25 - Testing the AI Integration
10:06 - Seeing the Generated Poem
10:30 - Session Recap and Next Steps

#LangChain4j #JavaAI #OpenAI #IntelligentApps #SpringBoot #AzureAI #JavaDevelopment #AIIntegration #ChatModels #AIApplications #Java24