All right, let's make a start.
We've got a relatively large amount to cover.
So hi everybody.
Thanks for joining the session today.
My name's Charles Ferguson.
I lead the product management team for Postgres at Microsoft
and I'm joined by Mohsen, who will come on stage
shortly, from NASDAQ, to talk about how they've built their
BoardVantage solution on a variety of Azure services.
So I was going to kick us off here.
It's a relatively short agenda.
Then Mohsen's going to expand upon that in a little
bit more depth when he gets here.
But the basic content of the session today is going
to cover everything that you can fundamentally see on
the screen here.
So NASDAQ are deploying these solutions using two of our
open source databases in Azure today, Postgres and MySQL,
but then complementing that with a number of other key
services, most notably Foundry, where they're leveraging their AI models.
And Mohsen's going to walk you through how
they're using AI in the BoardVantage solution, doing
a lot of work on documents and text to bring
a lot of efficiencies and optimization.
We'll talk about the models, and about Document Intelligence and
how they implemented it in place of their previous approach,
finding a lot of efficiencies
and improvements in quality as well.
We'll talk a little bit about API Management and then
Kubernetes, which is the underlying application infrastructure for
all of it.
Before we get into that, however, I just want to
give a few minutes here to talk about what we've
done specifically in Postgres at Microsoft over the last
couple of years.
Many people are often unaware actually that Microsoft is one
of the largest committers to the open source Postgres project.
And you know, Postgres has found itself in this position,
particularly over the last couple of years.
And I think that's by virtue of the fact
that, when the ChatGPT tidal wave broke, Postgres was one
of the only databases that had a native vector
index directly within the database at that time.
And if you take a look at the popularity of
pgvector, it basically hockey-sticked upwards from that point
in time, around February 2023. It already had strong
developer preference, and we've seen that continue to grow.
But amongst the hyperscalers, Microsoft has been the largest committer
over the last three years, and it's really part of
how we approach working in the Postgres ecosystem.
We build and ship Postgres services.
We ship Azure Database for Postgres and we've announced Horizon
DB at Ignite as well.
But part of being a responsible member of that community
is taking what we learn from open source software, running
it at scale in the cloud and committing it back
upstream into the core open source project.
And that is exactly what we do.
It is a virtuous loop that just continues around.
It makes Postgres better for everybody, and it's how
we participate in that community.
Now at Ignite, we've had a couple of announcements specific
to Flexible Server.
I'm not going to drain the entire list here, but
there are two things that I really want to call out.
The first is how we've lit up the latest generation of
SKUs directly in the Flexible Server service.
We've basically doubled the maximum number of vCores that
are available to you.
We've made improvements to our storage as well, which aren't
mentioned here, enabling high availability
on a more performant version of SSD.
The other one that I just want to talk about
is in the lower left-hand corner here: the Postgres
extension for VS Code.
We announced this at PyCon just before Build, and then
re-announced it at Build earlier this year.
I think we're close to about 280,000 installations right now.
We've seen a ton of positive feedback and a lot of
interaction with that community, who have asked us to fix
little things.
We've been able to turn fixes around incredibly quickly, and we're
seeing really strong adoption.
And our vision with this extension is to
offer effectively the full life cycle in Postgres, from provisioning
to managing to developing, leveraging Copilot for more productive
development, and then also debugging, monitoring and migration as well.
And you've seen a lot more of that come to
light at Ignite this year in other sessions.
But if you do use VS Code and you're interested,
go take a look for the VS Code extension from
Microsoft.
It's got, as I said, about 280,000 installations today,
and you should be able to spot that
one.
Now, the big announcement that we made at Ignite this year
was around Horizon DB.
Horizon DB is our new cloud native Postgres service,
where we've basically built a new storage capability that is
zonally resilient by default.
There was a great session given earlier this morning by
Denzel and Adam that went into a lot of depth
on how this works technically.
But basically, it's designed to deliver much higher transactional performance
throughput, much higher vector search performance with a full zonal
resiliency configuration so that you can run these mission critical
workloads with the highest levels of availability.
And then very simple failovers that improve availability
and reduce downtime relative to self-managed Postgres as well.
So this is in private preview right now.
There is a sign up form that you can get
started on.
I think we've got a slide for that towards the
end of the session.
But really what I want to do is hand it
over to Mohsen.
So before he comes up here, we've got a short
video we're going to play and then I'm going to
give it to Mohsen for the rest of the session
to walk you through how they built BoardVantage on
all of this Azure technology.
So thank you.
Where's my video? Let's hit play.
This milestone marks the culmination of our Future of the
Boardroom tour, which is a celebration of bold ideas, transformative
technologies, and strategic partnerships that are shaping the future of
governance.
It's been such a thrill to be building AI driven
products with NASDAQ.
AI changes the way we work, it changes the way
we live, and now it's reimagining the way that we
can make the most important decisions at the highest level.
We are reimagining how boards operate, empowering boards to lead
with clarity and confidence in a world that demands agility,
insight, and precision.
Thank you very much Charles.
It is really fantastic to be here.
Hello everyone.
My name is Mohsen.
I am a senior manager of engineering at NASDAQ, and
I work on the NASDAQ BoardVantage platform.
I'm here today to talk about our AI-driven governance
on Azure AI Foundry and PostgreSQL.
But before we go there, I'd like to walk you
through the agenda so you kind of know what you're
in for.
The picture Microsoft has used here is from a few
years ago.
A lot of things have changed since then.
One thing that has really changed is how we do
governance in 2025.
That's exactly what we're going to be talking about.
We will talk about NASDAQ's cloud journey and some of
the challenges that we've had with that, and how Azure
helped us to solve those challenges.
We will then talk about enabling AI on our platforms
and some of the features that we've put into the
hands of our customers over the last two to three
years.
We will be doing a technical deep dive and I'll
be demoing a couple of features for you guys.
As a tech guy, I know sometimes in events like
these, the demo gods are not very kind to us.
So in case something goes wrong, fret not, we have
backup videos, and we'll be switching to them.
But hopefully everything should be fine.
And then we'll be doing an architectural deep dive
to explain some of the architectural decisions we made.
And then eventually, if there are any learnings and evolution on
our platform, we'll be talking about that as well.
So why does governance matter in 2025?
Governance is no longer about oversight.
It's actually about foresight, because we are living in a
technological revolution that's being driven by AI.
Decisions need to be made really fast, because the stakes
are really, really high.
Governance is what is keeping leadership aligned, agile and accountable.
And now at NASDAQ we want to solve a problem:
traditional ways of working are becoming
less and less relevant, and decisions have to be faster,
smarter, and made with greater transparency.
This is where NASDAQ Board Vantage steps in and really
shines.
With our AI-driven workflows, like smart document summarization, AI-generated
meeting minutes, and contextual insights into your upcoming meetings,
we are transforming governance from a manual, reactive process to
a proactive strategic advantage.
I'm happy to say that since we released our
AI features to production, our customers report more than 25%
time savings annually.
That is more than 100 hours saved on manual
workflows like executive summaries of board documents and manually creating
meeting minutes.
So that's 100-plus hours saved.
It's fantastic savings for our customers.
However, it started with something slightly different.
It started with our move to the cloud.
Now I know this session is mainly about
AI Foundry and PostgreSQL in our AI workloads, but
I think it's really important to understand how we did
what we did, and the role that our partnership with
Microsoft, and particularly Microsoft Azure, played in driving our
innovation forward.
Today, NASDAQ Board Vantage serves more than 4500 companies worldwide.
We are serving almost half of Fortune 500 organizations.
It is really important for us to speak the same
language that our customers do as most of our customers
are already in the cloud or are on their way
there.
As I'm sure is the case with all the organisations
represented in this room, you are in some way, shape
or form on your journey to the cloud.
So when we are onboarding our customers, Azure really simplifies
that onboarding process, because we can speak the same language
when it comes to cloud standards such as data privacy,
security, cross-region deployments, and everything that comes
with being on the cloud.
Now, NASDAQ is also, like I mentioned, a global
platform, which means we operate in different parts of the
world.
And that, while amazing, means that we have to comply
with regulations pertaining to data residency and data processing in
different parts of the world.
When we were in the data centre, we were managing
this by having data centres in different regions.
So moving to Azure meant that it had to
meet that requirement as well.
And now you might be thinking, what is this guy
talking about?
Of course, being on the cloud means that we will
be meeting all these requirements, but let me explain.
NASDAQ BoardVantage today is made up of many moving parts,
from Azure Kubernetes Service to multiple databases to blob
storage and Azure Key Vaults.
And now with Azure AI Foundry, where we are deploying
different models, all of these components have to follow that
exact same standard when it comes to regional data
residency and processing.
So for example, if I'm going to deploy a 4.1
mini model in Australia East, it has to be processing
its data in Australia East for that customer.
And again, I'm happy to say that Azure
has really delivered in that aspect.
NASDAQ BoardVantage is also a multi-tenanted platform.
What that means is that for each of our more
than 4600 customers, we are segregating
them based on databases and file storage.
And then on top of that, we are providing
an extra layer of security by encrypting each customer's data
with their own key.
Now when we wanted to move to the cloud, it
had to satisfy these primary requirements of our
backbone architecture.
And I'm happy to say again that moving to Azure,
we found a one-to-one fit for pretty much
everything that we had.
So for example, when we were migrating our database,
Azure Database for MySQL was a perfect fit
for our multi-tenant system.
We could migrate everything one-to-one.
And then on top of that, we got high availability,
auto scaling, auto grow and we didn't have to worry
about managing our own infrastructure over there.
Then when it came to customer security keys, we were
using HSMs in the data centre.
Moving to the cloud, Azure offers Key Vault, which does
exactly the same thing, but again, now we don't
have to worry about high availability and everything that comes with
maintaining the hardware.
And then for file storage,
Azure Blob Storage was a perfect fit as well,
because now we could store each
customer's data in their own storage account.
But I would say the greatest benefit that we get
by being on the cloud is that our engineers can focus
on writing and delivering business value without having to worry
about maintaining our own infrastructure.
I don't have to get on those late-night calls
anymore to worry about needing to patch a
server, or a hardware module failing.
Sometimes I still get called into those calls, but those
are usually because of the bugs I've introduced into the
system.
But I would say now, with Azure AI Foundry,
we have access to all the AI services as well,
which has really helped to drive our innovation forward and
helped us stay relevant in this ever-changing technological landscape.
I'm not going to bore you by explaining each part
of this diagram.
It's just to convey the scale of our
infrastructure.
You can see we have many, many moving parts and
components, from Azure Kubernetes Service to multiple databases, event-driven
architecture, and now AI Foundry and blob storage.
And obviously, we operate in a highly available
environment, so each of these regions has a primary and a
secondary, and all of these are replicated across all
the regions that we operate in.
So you can see it's quite an extensive architecture that
Microsoft Azure is helping us to maintain.
Let's pivot.
As soon as we had moved to the cloud, I
think this was sometime in mid-2022.
Later that year, as we all know, ChatGPT came and
it changed the way that we are going to be
engineering, permanently, in my opinion.
Now NASDAQ, being the transformative company that
it is, recognized the transformative potential of this technology, and
we got to work.
But then again, we didn't want to just go to
market with the first feature we could build, just
for the sake of being the first one.
So NASDAQ poured a lot of energy, time, resources and
a few dollars into analysing what would make the most
sense for our customers, our product people, and our business.
They spoke to our customers and identified the business flows
that would make the most sense.
And this is where we decided that smart document summarization
and AI meeting minutes would be the best
strategic advantage for our customers.
And they've been living in production for quite some
time.
And the next step that we wanted to focus on
was driving customer adoption.
I'm happy to say that, out of
more than 4500 customers worldwide, more than 1000 have enabled
AI on their platforms already.
And you might be thinking that's less than 30%.
You guys need to do better.
But let me explain.
We operate in a very sensitive segment of
the market.
Our customers trust us with some of their most strategic
sensitive data, data that is used to drive business decisions.
When AI was becoming mainstream, there were a lot of
challenges that we had to work through.
We not only had to solve those challenges, but we
also had to build that confidence and pass it down
to our customers.
So that's exactly where Azure and PostgreSQL have been helping
us.
But before I go there, I'd like to talk about
some of those challenges that we faced and
have solved.
AI is non deterministic.
Now as engineers, we have worked with non determinism before.
We have dealt with large distributed systems, where certain race
conditions do not execute in the order that you expect
them to, but those are usually edge cases and they
can be mitigated with a significant amount of effort.
AI is inherently non deterministic.
That's just how it is.
We have to accept that, work with that and work
around that.
I don't need to impress upon anybody how bad hallucinations
can be, but in our line of work, our
customers are using data to drive decisions.
Hallucination can be catastrophic for both us and our
customers.
So that's also something we have to understand and work
with.
Now, the medium that we use to communicate with the
AI today is natural language and prompt engineering.
And we know that natural language is ambiguous.
Again, as tech people, we work with programming languages.
Programming languages are built to remove ambiguity, unless it's JavaScript,
then it's just ambiguity.
But I digress.
The point I'm trying to make is that natural language
is inherently weak when it comes to removing ambiguity.
That's also something we have to understand and work with.
Now AI has come a long way since 2022, but
it is still slow and expensive to deploy at scale.
And for organizations like NASDAQ, with products like BoardVantage, which
are trying to mainstream it, we have to be aware
of that and work within the limitations that
we have.
Obviously, AI also introduces a new challenge:
testing and validating the outputs.
Again, as programmers, we are used to determinism in our
unit tests, but you can't do an assertEquals on
the response from your AI output.
So we either have to build new tools, or work
with tools that somebody else is building, to validate
and test AI in this changing paradigm.
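To make that concrete: one common pattern, which is a sketch and not necessarily how NASDAQ's tooling works, is to assert on semantic similarity instead of exact equality: embed both the model output and a reference answer, and check that their cosine similarity clears a threshold. The toy bag-of-words embedding below is just a stand-in for a real embedding model such as text-embedding-ada-002.

```python
import math
from collections import Counter

def toy_embed(text: str) -> Counter:
    # Stand-in for a real embedding model (e.g. text-embedding-ada-002):
    # a sparse bag-of-words vector keyed by lowercased tokens.
    return Counter(text.lower().split())

def cosine_similarity(a: Counter, b: Counter) -> float:
    # Dot product over shared tokens, divided by the vector norms.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def assert_semantically_close(output: str, reference: str, threshold: float = 0.6):
    # Instead of assertEquals, pass if the output is "close enough" in meaning.
    sim = cosine_similarity(toy_embed(output), toy_embed(reference))
    assert sim >= threshold, f"similarity {sim:.2f} below threshold {threshold}"

# Two paraphrases of the same decision should pass; exact equality would fail.
assert_semantically_close(
    "the board approved the budget for next year",
    "the budget for next year was approved by the board",
)
```

With a real embedding model the same shape of test works; the threshold just has to be tuned against known-good and known-bad outputs.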
Well, I think that's enough for the theory.
Now I'd like to talk about some of the
AI features and show you some demos.
Now since 2022, we have put a few features in
the hands of our customers.
The ones that I'd like to talk about are meeting
minutes and the board assistant.
Meeting minutes is an AI feature that we
deployed in March earlier this year.
I do want to impress upon everyone that a board
meeting minute is very different from the meeting minutes that
you would see from a regular meeting.
These are legal records which can be used in
court proceedings.
These contain important information about deliberations, how decisions were
made, and basically any strategic exchanges.
Corporate secretaries, after the meeting has happened, spend a lot
of time trying to formulate these meeting minutes to ascertain
their accuracy.
In fact, I was on a call with one of
the corporate secretaries a couple of weeks ago doing a
demo for this feature, and she told me they spend
more than two weeks to finalize these meeting minutes.
Our tool can do it in less than 5 minutes.
But before we go any further, I'd like to show
you guys how this tool actually works.
So I'll switch over to a demo.
So there are a few ingredients that go into building
these meeting minutes.
You have an agenda document.
I will upload that.
An agenda document, as the name indicates, will basically outline
whatever is going to happen in the meeting and who's
going to talk about that.
It's also going to mention any participants in the meeting,
whether they are joining online or they are joining in
person.
Then you have the meeting material.
This is basically a board packet, and these can be
massive documents, up to 400 to 600 pages;
sometimes I've seen even more.
Meeting notes, basically, are what the note taker in the room
records: any exchanges that happen, voting decisions,
all of those things that are obviously not in the
board packet.
And then we allow you to upload an example of
a previous meeting minutes document, because we understand that each company
has its own way of writing and formulating their meeting minutes.
We don't want to assume that you want it a
certain way.
We let you upload an example, and then we use
AI to apply that style and linguistic formatting
to the actual document.
So I'm going to go ahead and attach an example
as well.
And then we'll press generate.
Obviously, we're not going to wait for this because it
can take a while and we don't have a while.
So I have previously generated a document that I'd like
to show you guys.
And you can see that this is a meeting minutes
document from a Massachusetts Port Authority board meeting.
I'm not going to go through the
whole thing, but you can see how the AI formulated
the entire document.
Let me just make it a bit bigger.
Based on the agenda that you've uploaded,
it's going to add all the information.
And then at the end, if there are any key decisions,
it will add those as well.
And then basically it's formatted according to the example
of the meeting minutes that was provided.
So let's talk about what happened here.
I uploaded a few documents, some magic happened, and then
you had a draft of the meeting minutes generated.
Let's try to unravel that magic.
When I uploaded the agenda,
we used Document Intelligence, and I'm going to talk about
Document Intelligence and its importance in just a
second.
We extracted all the information from the agenda and
then we applied the agenda processing prompt.
Obviously, this is a special prompt that parses the agenda
in a way that we're going to use
further on.
We process that with 4.1 mini, and then we store
it in memory for later processing.
As a next step, we then extract the meeting materials.
And this is where Document Intelligence becomes really important,
because, as I mentioned, these can be documents of up
to 400 to 600 pages.
When we did not have Document Intelligence, we wrote our
own semantic chunker.
The idea was to create small chunks
of these documents, create embeddings, and then store them in
the database.
That chunker was doing quite well, but it was always
a guessing game where a section starts and where it
ends, and sometimes it would not be very accurate.
When we started using Document Intelligence, basically what it does
is give you a well-defined layout of the
entire document.
So you know where the headings are, and where
a section starts and ends.
And it's particularly good when it comes to recognizing
figures and other diagrams, because sometimes a lot of
context is missed in those.
So once we've done that, we still perform the chunking,
but now it's easier to chunk the document.
And then we create embeddings with text-embedding-ada
and persist them into PostgreSQL.
We will be talking a lot more about PostgreSQL
with pgvector enabled in just one second.
The exact same thing happens with the meeting
notes: we process them with Document Intelligence, semantically chunk them, create
embeddings, and store them in the PostgreSQL database.
And then obviously, if you have an example
document, we will process that with Document Intelligence,
apply a style guide prompt to it, and
keep that on the side as
well.
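The ingest flow just described (layout-aware sections from Document Intelligence, then chunking within sections, then embeddings into a pgvector table) can be sketched roughly like this. Everything here is illustrative rather than production code: the table name, the chunk size, and the toy embedding function are all made up for the example.

```python
# Sketch: sections (as a layout service like Document Intelligence would
# return them) -> chunks -> embeddings -> rows destined for a PostgreSQL
# table with a pgvector column.

def chunk_section(heading: str, text: str, max_words: int = 50):
    # Chunk within a section, so no chunk ever spans a section boundary --
    # the guessing game a hand-rolled chunker has to play.
    words = text.split()
    for i in range(0, len(words), max_words):
        yield heading, " ".join(words[i:i + max_words])

def fake_embed(text: str):
    # Stand-in for a real embedding call (text-embedding-ada-002
    # would return 1536 floats).
    return [float(len(text)), float(text.count(" "))]

def ingest(sections):
    rows = []
    for heading, text in sections:
        for h, chunk in chunk_section(heading, text):
            rows.append({
                "heading": h,
                "content": chunk,
                "embedding": fake_embed(chunk),
                # In production each row would be INSERTed, e.g.:
                # INSERT INTO board_chunks (heading, content, embedding)
                # VALUES (%s, %s, %s::vector)
            })
    return rows

rows = ingest([
    ("Q3 Financials", "revenue grew " * 60),   # long section -> several chunks
    ("Risk", "cyber incidents declined"),      # short section -> one chunk
])
```

The key point the sketch shows is that section boundaries from the layout step drive the chunking, so every stored embedding carries its heading as context.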
Now that we have processed all the ingredients of this
massive board context, it's time to start putting it
together.
So how do we put it together?
We take this processed agenda and then we match each
of the headings to the stored embeddings.
And you can see that with PostgreSQL with pgvector
enabled, it is so easy to run a hybrid
search.
What you can see down here is a Java code
snippet with a SQL query that has a vector similarity
search as well.
There's no rocket science here.
It's just simple SQL with hybrid search.
And then you can do a lookup based on
cosine similarity, and limit to the number of
embeddings that you are looking for.
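For illustration, a hybrid query of that shape might look like the following. The table and column names and the exact filters are made up for the example; `<=>` is pgvector's cosine-distance operator, so ordering by it ascending returns the most similar rows first.

```python
# Illustrative hybrid search query for PostgreSQL with pgvector.
# Combines a relational filter, a full-text keyword filter, and
# vector-similarity ranking in one statement.
HYBRID_SEARCH_SQL = """
SELECT content,
       1 - (embedding <=> %(query_vec)s::vector) AS cosine_similarity
FROM board_chunks
WHERE tenant_id = %(tenant_id)s                                         -- relational filter
  AND to_tsvector('english', content) @@ plainto_tsquery(%(keywords)s)  -- keyword filter
ORDER BY embedding <=> %(query_vec)s::vector                            -- vector ranking
LIMIT %(top_k)s
"""

def bind_params(query_vec, tenant_id, keywords, top_k=5):
    # Parameters are passed to the driver (psycopg, JDBC, ...) rather than
    # interpolated into the SQL, keeping the query safe and plan-cacheable.
    return {"query_vec": query_vec, "tenant_id": tenant_id,
            "keywords": keywords, "top_k": top_k}
```

The same statement works from Java over JDBC or from .NET; the driver only needs to bind the query embedding as a vector parameter.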
The exact same matching happens.
Now that we've matched the
embeddings, we can apply the prompt, because remember, what we've
selected right now is basically chunks of text.
We need to process it into natural language so it
makes sense to the reader.
So we apply the prompt and process that with chat
completions.
And now you have the agenda enriched
with the meeting material.
And then the exact same thing happens with the
meeting notes: same process, refine it, and then add
that to the enriched draft that you had with the
meeting material.
And if you have a style example, which is optional,
we apply that style:
apply the prompt, process it with chat completions, and you
have a final drafted meeting minutes document.
I must impress upon each of you that this tool
is not built to replace the corporate secretary.
It's built to make their lives easier.
So instead of spending two weeks drafting this manually,
they now spend less than 5 minutes, and they can
be more productive with their time.
And then obviously you want to have the human in
the loop.
That's why you get a Word document downloaded.
The corporate secretary can go back and edit it
if she or he wants, add more information to
it, or change anything that doesn't make sense.
Now with AI meeting minutes, we proved that we could
use pgvector in Azure Database for PostgreSQL, where it
is natively available.
You can all go and do that right now.
We proved that it works for production workloads, and it
was working really well for us and for all our customers.
So then the next natural step was: OK, how do
we use it in our next feature, which is
the board assistant?
Now the board assistant is infinitely more complicated than
the meeting minutes, and it will be fun demoing it
to you guys.
But the idea with the board assistant is that we
want to make life easier for our users, particularly
our directors.
So if you're a director heading
to a meeting, you're probably busy with a lot of
things.
What we want to do is provide you contextual insights
into your upcoming meetings.
So for example, it'll tell you what you should be
preparing for based on whatever has happened in the last
two to three meetings.
And then it gives you a chat assistant that you
can use to talk to and get more information.
But let me show you how that works.
I know I have to switch the screen.
Just give me one second, all right.
So imagine that I am a director.
I come into my board portal, I have a meeting
on the 20th of November and then I can see
an AI icon over here.
I click on this guy, and it shows me the
different insights that I should be preparing for
my upcoming meeting.
So for example, strategy, risk, finance.
So let's go ahead and click on risk.
It has selected the top 6 insights for Risk that
I should be focusing on in my next upcoming meeting.
So let's say I go ahead and choose cybersecurity, because
why not, we're at Ignite.
So what it's going to do is give
me a comparative analysis of what we have talked
about in the previous meetings, and what we should
be focusing on in the upcoming meeting.
So for example, say I'm not really happy with what it
has told me.
I could start talking to it and
say: could you expand on Q3?
And just like that, it's going to go and look
up all the information stored in the backend,
process that with AI, and give me what I should
be focusing on.
You can also see that it gives you follow up
prompts.
So based on the conversation you're having, it will tell
you what other questions you should be asking.
So you can ask that question as well.
Then on top of that, if you want to know
something else about the meeting, for example who
else is attending, you can ask: could you tell me
who else is attending?
So it's going to give me information about who else
is attending.
So if I want to avoid somebody, I will not
go to this meeting.
And then obviously you can switch to the other
insights and do the exact same process over
there as well.
So let's switch back and see what has happened.
Now, as I said, our board assistant is much more
complicated.
What's happening here is that we are just introducing this
new feature, but we have had customers who've been
adding data to our platform for the last 8 to
10 years.
And this is a lot of data, right?
So if you have a board
packet, as I mentioned, this could be 400 to
600 pages.
And then multiply that by the amount of board meetings
that you're having.
It's a ton of data that needs to be processed.
So what we did is we built the board
assistant in two parts.
One is the data processing pipeline.
As a customer enables AI, we look up this data
and process it with a pipeline (we'll get to that
in a second), and then
store it in the vector database, and in
MySQL for non-vector information.
And then the board assistant part is basically the interface
that lets you interact with the assistant, as you just
saw in the demo.
For the data processing pipeline, we decided to go with a serverless approach.
The reason for that is because as I said, we
are processing an immense amount of data.
I don't want to worry about CPU or memory limits or
anything like that.
I get whatever I need to process this,
and I only pay for what I use.
If I'm not using it, I'm not going to
pay for it.
And then I don't want to put stress on
our existing infrastructure by processing all this data
with the existing microservices.
They're just not meant for that kind of
workload.
So what we're doing is using
a fan-out, fan-in approach.
There's an orchestrator that fires up per client.
I'm not going to bore you with all the activities
that happen over here.
But for each of the meetings, it processes each
of the steps that are part
of your meeting as activity functions.
Let's take a look at what happens in one activity
function.
So what happens is, let's say I'm processing the metadata:
any information about the title, where the meeting is
happening, any other tags that might be important.
I take them, I chunk them, I create the embeddings, and we tag
the embeddings so that we can find them later
more easily.
And then we persist them into the vector database.
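The production pipeline uses Azure Durable Functions; the sketch below just mimics the fan-out/fan-in shape with a thread pool so the structure is visible. All the names here (the activity function, the metadata fields, the tags) are illustrative.

```python
from concurrent.futures import ThreadPoolExecutor

def process_meeting_activity(meeting):
    # One "activity function": chunk the meeting metadata, embed it, and
    # tag the embedding so it can be found again later.
    text = f"{meeting['title']} {meeting['location']}"
    embedding = [float(len(text))]        # stand-in for a real embedding call
    return {"meeting_id": meeting["id"],
            "embedding": embedding,
            "tags": ["metadata", meeting["id"]]}

def orchestrate_client(meetings):
    # Fan out one activity per meeting, then fan in the results before
    # persisting them to the vector store.
    with ThreadPoolExecutor(max_workers=4) as pool:
        return list(pool.map(process_meeting_activity, meetings))

results = orchestrate_client([
    {"id": "m1", "title": "Q3 board meeting", "location": "NYC"},
    {"id": "m2", "title": "Audit committee", "location": "remote"},
])
```

In Durable Functions the orchestrator additionally checkpoints each activity, which is what makes it possible to re-trigger only the failed step rather than the whole pipeline.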
The insights generation that you saw is a preprocessed activity.
We have a research agent that runs in the background
that's going to pick up all your documents.
It's going to run a comparative analysis, refine that, and
then run a comparative analysis again.
And it does that over many passes until it reaches
a state where you can say: OK, I have
a very good comparative analysis of how a trend in
my insights has changed over the last three to four
meetings.
And then obviously this is not a vector.
It's basically text, so we persist that into the
MySQL database.
So you guys saw me interacting with the
assistant.
What happens is I write a prompt, it gets converted
into an embedding, we match that with the existing context
and then basically it fetches all the existing data.
The reason why I put this snippet
over here is that this is .NET Core.
This is again to demonstrate how easy it is to
write a hybrid search query on basically any platform.
So all this is doing is employing EF Core LINQ
and a simple vector search, and now you can basically limit
this with cosine similarity and any of the semantic rankings
that you want to do as well.
So once we've found the matching embeddings, we will
add the historical context, so the AI knows the flow
of information and the context the chat is being
held in.
We will then apply the prompt because obviously this is
a board assistant.
We don't want you asking about the weather or anything
like that.
We will then process that with a chat completions model
and then we are using the Azure service for Signal
R to basically stream this back to the back to
the client.
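The prompt-assembly and streaming flow just described can be sketched as follows. The generator stands in for SignalR streaming the model's tokens back to the client, and the guardrail text and message contents are illustrative:

```python
def build_prompt(system_guardrail, history, context_chunks, user_query):
    # Assemble what the chat-completions model sees: guardrail system
    # prompt, chat history, retrieved context, then the user's question.
    messages = [{"role": "system", "content": system_guardrail}]
    messages += history
    messages.append({
        "role": "user",
        "content": "Context:\n" + "\n".join(context_chunks)
                   + "\n\nQuestion: " + user_query,
    })
    return messages

def stream_response(text, chunk_size=5):
    # Generator standing in for SignalR streaming partial output to the client.
    for i in range(0, len(text), chunk_size):
        yield text[i:i + chunk_size]

msgs = build_prompt(
    "You are a board assistant. Only answer questions about board materials.",
    [{"role": "user", "content": "hi"}, {"role": "assistant", "content": "Hello"}],
    ["Q3 revenue grew 4%."],
    "Summarise the revenue trend.",
)
streamed = "".join(stream_response("Revenue grew 4% in Q3."))
```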
This is another view of how the chat assistant works. You write a query, the orchestrator kicks in, goes and looks up all the information that it needs from the vector database, combines that with the historical context, and then passes it to the LLM to process. And then the response is streamed back to the user.
If you look at the architecture, it does look quite
daunting, but it's actually basic Azure services.
We're using Azure Kubernetes Service. This is where we pull all the information into the Functions app, and the board assistant microservice is the one that is doing all the chat work. Then we're using Azure messaging; basically, we're using Service Bus to drive our event-driven architecture. SignalR is used to stream data back to the client. We're using PostgreSQL and MySQL to persist data, as you just saw. And then we're using durable functions so that we can keep track of what is happening, and if an activity fails, we can retrigger that activity.
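The retrigger-on-failure behaviour that durable functions provide can be approximated like this. `run_with_retry` and the flaky activity are illustrative stand-ins for the framework's built-in retry policy, not the actual orchestrator code:

```python
def run_with_retry(activity, max_attempts=3):
    # Re-run a failed activity, roughly what a durable-functions
    # retry policy does on a transient error.
    for attempt in range(1, max_attempts + 1):
        try:
            return activity()
        except RuntimeError:
            if attempt == max_attempts:
                raise  # exhausted retries: surface the failure

attempts = {"count": 0}

def flaky_activity():
    # Fails twice with a transient error, then succeeds.
    attempts["count"] += 1
    if attempts["count"] < 3:
        raise RuntimeError("transient failure")
    return "done"

result = run_with_retry(flaky_activity)
```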
And then obviously Azure AI Foundry is the backbone of our entire AI infrastructure. We're using multiple models: GPT-4.1 for chat completions, o3-mini for reasoning, and the text-embedding-ada-002 model.
And then obviously Document Intelligence, as I explained earlier, is used heavily to process all these documents.
In the beginning, when I was talking about enabling AI, I mentioned some of the challenges that we were facing, especially related to determinism and hallucinations and all of those things.
I do want to talk about how Azure AI Foundry
has helped us to solve those.
I don't know if you've used Prompt Flow, but I've found it extremely helpful to break down complex problems into smaller steps. You can write a prompt, you can add context, you can write more prompt, you can add more context. So it helps you tackle non-determinism. Now, I'm by no means saying that it's going to remove non-determinism; that's probably never going to happen. But it helps you tackle it and helps you engineer your systems in a way that can be a bit more predictable.
Document Intelligence, as I explained, has really changed the way that we provide context to the AI. Our context is much richer, which gives us much more accurate responses.
Model deployment helps us to deploy any models that we
find relevant into our data zone deployments.
As I mentioned earlier, our requirement is that we will
deploy the model wherever our customer is based.
And then obviously we can configure our consumption plans as
well.
Now, just as Prompt Flow has helped us tackle complex problems, the chat playground helps us tackle less complex problems. If you're wondering what the output is going to be when you write a small prompt with a given context, you can easily get the response there without writing a whole program that communicates with OpenAI using the API. So it makes your life really, really easy, and I actually use it all the time.
Then with Postgres and pgvector: as I mentioned earlier, this was really a game changer for us when it came to vector databases, because we are already using Postgres today, and Azure natively supports enabling pgvector. So we enabled that. It really drove our innovation, because we could quickly start building POCs and then production workloads as well, since it's already part of our infrastructure. We have it with high availability, cross-region replication, and everything else that we have today, so we don't have to worry about the maintainability and all of those things that come with maintaining your own infrastructure.
pgvector, again, was a game changer. As I mentioned, with meeting minutes we store data in a temporary way: once the flow is completed, we delete all the embeddings for the chat. So in that case, we didn't have to worry about indexing, because you save it once, you look it up, and you delete it. With the board assistant, it's much more complicated, because now you're adding millions of embeddings per customer. So the HNSW indexing really comes in handy over here, and combined with the SQL search that we do, it really makes the queries perform much, much better.
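For reference, a pgvector HNSW index for cosine distance is created with DDL along these lines. The table and column names, and the `m`/`ef_construction` parameters, are illustrative rather than Boardvantage's actual schema:

```sql
-- Approximate-nearest-neighbour index for cosine similarity search.
-- Parameter values here are illustrative defaults, not tuned settings.
CREATE INDEX ON board_embeddings
  USING hnsw (embedding vector_cosine_ops)
  WITH (m = 16, ef_construction = 64);
```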
I think you would be thinking, wow, this is cool,
but show me the code.
Unfortunately, I won't be able to show you the code,
but I will be able to show you the next
best thing.
I'd like to show you a dashboard of how our AI systems perform in production today, and how these help us build confidence with our customers. What we're doing is using DeepEval to evaluate our AI responses over many, many publicly available government documents.
So for summarization flows and AI meeting minutes, we take publicly available documents, we churn them through our systems, and we have thousands of different data samples that we then process. And across different metrics for coherence and correctness, you can see how well these systems are performing. We check for bias, we check for toxicity, and then for answer relevancy.
You can run it per document. We can do a document cosine similarity score comparison, and then obviously you can do a simple cosine similarity score comparison across multiple models. So we can switch the model, run the whole thing again, and it will give you the results, which you can then compare against how other models are performing.
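The cross-model comparison just described can be sketched like this: score each model's output embedding against a reference embedding and rank the models. The model names and vectors here are made-up sample values, not production scores:

```python
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

def compare_models(reference_embedding, model_outputs):
    # Score each model's summary embedding against the reference
    # and return models ranked best-first.
    scores = {name: cosine(reference_embedding, emb)
              for name, emb in model_outputs.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

ranked = compare_models(
    [1.0, 0.0, 0.0],                      # reference summary embedding
    {"gpt-4.1": [0.9, 0.1, 0.0],          # candidate model outputs
     "o3-mini": [0.7, 0.6, 0.0]},
)
```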
I mean, I would like to take this opportunity, at the end, to thank Microsoft very much for inviting us here and having us share our experiences with you. I think this partnership has really helped to drive our AI innovation forward. I've been working with Charles and Bart over the last year, and they have been extremely helpful. So this was a great experience for me, to come here and show you how we've built our AI tools. If you have any questions, I'll obviously be outside in the expert area; would love to hear from you. And I would say thank you very much for your attention, and I hope to see you again.
Trusted by nearly half of Fortune 100 companies, Nasdaq Boardvantage powers secure, intelligent board operations. In this deep dive session, explore how Azure Database for PostgreSQL and MySQL, Microsoft Foundry, Azure Kubernetes Service (AKS), and API Management create a resilient architecture that safeguards confidential data while unlocking new agentic AI capabilities.

To learn more, please check out these resources:
* https://aka.ms/ignite25-plans-MicrosoftFabricAIDataSolutionsPlan

Speakers:
* Charles Feddersen
* Mohsin Shafqat

Session Information:
This is one of many sessions from the Microsoft Ignite 2025 event. View even more sessions on-demand and learn about Microsoft Ignite at https://ignite.microsoft.com

BRK137 | English (US) | Innovate with Azure AI apps and agents, Azure PostgreSQL, Microsoft Foundry
Breakout | Advanced (300)
#MSIgnite, #Unifyyourdataplatform

Chapters:
0:00 - Overview of NASDAQ Board Vantage using Azure open-source databases
00:09:15 - NASDAQ's cloud journey and regulatory compliance using Azure
00:12:08 - Benefits of Azure adoption: scalability, reduced infrastructure management, and use of Azure AI Foundry
00:14:26 - Evaluating AI opportunities leads to Smart Document Summarization and AI Meeting Minutes features
00:22:33 - Storing extracted data in PostgreSQL with PG Vector for efficient hybrid search
00:23:09 - Generating finalized meeting minutes by matching agenda sections with relevant data
00:29:29 - Challenges with existing microservices and introduction of fan-in fan-out orchestration approach
00:31:53 - Detailed chat assistant architecture using Azure functions and vector database
00:33:39 - Using Azure AI Foundry and Prompt Flow to address determinism and hallucination challenges