>> Okay, cool.
>> Yeah, then we can get started.
>> All right, do we know if people are on?
>> We are live.
>> Thanks, everyone that is live, for taking some time today to hear a little bit more about DataStax, IBM's go-to-market, and TechD. As a quick introduction, my name is Brian Searing. I'm with Technology Dynamics (TechD), an IBM partner that focuses on reselling and implementing IBM's data solutions. DataStax is a big part of our go-to-market, both where it fits into the overall portfolio and how it aligns with where a lot of our customers are trying to go. So today we're going to focus on that: double-click on the technology, how it fits, answer any questions, and go from there. With us we have Gil from the IBM team, who was formerly part of DataStax, and we'll go through the go-to-market, why you should care, and where it aligns. And then we're here to answer questions, so feel free to reach out via chat or raise your hand and we'll try to get those answered; or, if you'd prefer to have a conversation after the fact, we're certainly happy to get something scheduled.
Although we really do focus, from a TechD standpoint, across IBM's portfolio, our goal is to align with organizations' key objectives as they relate to data: how are they trying to utilize it, how are they ensuring its accuracy, how are they categorizing it. As we go to market with that, it's completely aligned with IBM's strategy, so as IBM brings on new acquisitions like DataStax, we get caught up and make sure we're up to speed on the latest and greatest.
So as we go through here, again, feel free to ask questions and let us know how we can help; after the fact, we're happy to jump on a call and further the conversation.
With that, Gil, I don't know if you wanted to quickly introduce yourself and take it from there.
>> Yes. And thank you to TechD, Brian, and company, for sponsoring us. It's pretty exciting to be here. You can see I'm wearing my legacy DataStax hacker gear. I don't know, can they see my video in this?
>> We can see your video, but the camera has to be a little bit There it is. We got you.
>> My camera has some weird AI tracking; I don't want to go into it because I'm sharing the screen. But yeah, this is an exciting part: as you know, DataStax is a leading player in the database field, especially when it comes to Cassandra and, lately, GenAI. We were small and scrappy, and now we're getting to work with more and more players like TechD and, of course, across the IBM landscape. So I am, again, a legacy DataStax solution architect and have worked on just about every technology out there in my time at DataStax. It's been incredibly impressive what we've been able to do as a small and scrappy player. Customers love our solution when they get on it; they generally only grow, because of the power of the DataStax solution: the core database, the many different ways you can use DataStax, which I'll talk about, and then, in the last year and a half or so, the splash we've made in the GenAI space with Langflow.
This is a bit of a tech talk, but I'm going to start out with the context: why are we here, and how does this all fit into the challenges we're seeing in the market today? I'll talk about the challenges of building GenAI apps, and the difference DataStax brings to the table: the Astra database, which is our platform-as-a-service Cassandra, plus Langflow, an easy drag-and-drop product and open-source framework for building GenAI apps. Then a little bit about customer successes, and then we'll go right into a demo and you'll see all of this in action.
So this is really the landscape we're seeing, and you can see we're quoting a few thought leaders on this slide. Everybody is experimenting. There is a lot of experimentation going on, and that's really great: it's opening people's eyes, it's kind of blowing people's minds as far as what can be done, and pilot projects are being started and raising everybody's expectations. The challenge, though, is how many of these can really scale to production. Some of these are fun, kind of toy projects, and not to minimize the work being done, but getting them to enterprise strength, making them work at enterprise scale, be compliant with regulatory limitations, and really be secure, is the gating factor for projects getting to production. And the TL;DR on this, which we certainly see with our customers who get to production, is that these have been the differentiators: having a cloud-scale, compliant Cassandra database where you can very easily put your data, you know it's secure, and you know you can scale; and having something like Langflow, which just makes it easy to build GenAI apps.
We have been able to help many customers take their pilot projects out of that early percentage and get them into production. So if you take nothing else away today, go to these links. You can get started for free with all of this; we have very generous free tiers, plus Langflow is actually a free, open-source framework.
But let's just remind everybody, if you've heard of Cassandra, if you've been around the database space and you're wondering what this thing is: you may or may not know that everything you do in today's internet-scale world probably comes back to a Cassandra database in some way. And what I'm showing here is not DataStax customers; these are large internet-scale businesses who have taken an open-source framework and run it at a massive global scale. All of these businesses, at their core, are using Cassandra, and there are a few numbers here about just how widely used it is in the market. It is an extremely powerful database, and again, it powers our world.
>> Hey Gil, would you still define DataStax as an open-source company?
>> Absolutely. Great question. Thank you. So we started when Cassandra started. A little bit of internet history: Facebook actually created Cassandra to support their business model, and we very quickly saw that people were responding to it, but they needed enterprise support. So we built our initial business on taking Cassandra, packaging it, and running it in your data center or in your cloud. We were actually a bunch of the initial committers on the Apache Cassandra project, and we commercialized it into certified bundles and provided critical vulnerability fixes much more quickly, and the business grew from there. So absolutely; in fact, you may have seen that we just relaunched the New York Apache Cassandra user group, and you're going to see more of these around the country. On the west coast, I think it's either Netflix or Uber that sponsors one; but as part of kind of a new thing in IBM, we piloted the Apache Cassandra one in New York. And Langflow is open source too. I'll also say something that's been interesting to me since the acquisition:
IBM really is an open-source company. If you look at Red Hat, and if you look at HashiCorp, all of these are very strong open-source initiatives. And I will actually end with a little promo for something we're doing on Monday in New York, which is a hands-on, open-source GenAI event. So, thank you, great question. And this also speaks to that, because our database platform-as-a-service can be deployed, the same exact product, in AWS, Google Cloud, or Azure. We run a cloud on the scale of some of those other providers, so we actually depend on open source for our platform-as-a-service. And you can see some numbers here again; these are actually from the end of last year, and I know they're even bigger since then.
So we're very much all about open source and scaling open source. And the other thing about Cassandra, one of the things that got us a lot of attention, is that we helped innovate to create the vector capability in Cassandra. But we differentiate in that when you buy DataStax, in the box you get a database that not only does vector, but also core Cassandra with CQL, which is the query language for Cassandra, and the API. It's also a document database, and it's also a graph database. Forrester has noted us for that: we are the leaders as far as a hybrid NoSQL vector database. And you need to think about this whole picture of what makes your GenAI project successful. Very often people are focused on "I just need a vector database," but the very next thing they need is some other kind of database, and we provide all of these capabilities. So for the same dollar you're spending to get your vector database, you will also get all of these other capabilities.
And then, just to drive home the myriad ways you can use DataStax database technology: our on-premises version is DataStax Enterprise; the cloud version is the DataStax Astra database. The newest on-premises version is something called the Hyper-Converged Database (HCD), which runs in a Kubernetes cluster or an OpenShift cluster, and it supports all of these APIs. So really, wherever you need to run this NoSQL database, and whatever type of API you need to support your database workload, we will support it on premises or across the cloud.
Now, we have just been bought by IBM; that happened in the middle of this year, when the deal finalized. If you have dealt with IBM, this is a business that sells a lot of products, so I wanted to make sure that if people are circling back to talk to TechD or other IBM folks, they know where DataStax fits, because there are various brands within IBM. We sit within the watsonx.data brand. So I'm really highlighting here for everybody: we are now one of the query engines within watsonx.data, and that's how you would buy our product now as part of IBM.
So let me just talk about a couple of DataStax Astra customers. You're seeing Digital River here. Digital River migrated their on-premises Cassandra to the Astra database, and what you're seeing, and this is typically what we see when people move off of on-premises deployments into Astra, is that their savings are dramatic. A few things go into this: some of it is operational, but a lot of it has to do with a much more optimal infrastructure model, moving to the cloud-based model, not having to pre-provision things, really only running what you need at any given time, and being able to scale up, either on demand or in a managed way, to meet your increased business needs. So here's an example of a customer that realized a 60% reduction in their total cost of ownership.
This is a great example that speaks to the scale and the level to which Astra can integrate with diverse environments. You can see a fairly involved flow diagram here; this is what we'd dig into if we wanted to have a whole architecture discussion, but the point is Astra fitting in with a very diverse set of enterprise architecture in order to support this educational app, which connects academics with their students. Traffic increased 50 times as they rolled the app out: two million students, six million daily active users, and they had zero downtime. So again, this speaks to the scale and the ability to support a very large, diverse customer and enterprise infrastructure base.
So let's start talking a little bit about Langflow. Actually, I just did a big, no pun intended, data dump on everybody, so let me take a quick pause. Are there other questions, Brian?
>> Not currently. I've been trying to monitor it relatively closely.
>> Okay, great.
>> If anyone has questions, feel free to use the chat, just as another reminder.
>> Yeah, please feel free, any questions that come up. Okay, great. Oh look, I can see the chat now. "On the cloud, security is a huge concern." Yes. Okay: as DataStax Enterprise does, Astra offers transparent data encryption. Also, the DataStax Enterprise product came out of operating in very highly regulated environments, so if you go to our Trust Center, you will see that we are compliant with most of the major security and compliance frameworks, and we run across the globe. And again, we provide our own encryption, so you can use our encryption keys, or you can bring your own key to encrypt your Astra database if you want to do it that way. "Autoscaling?" So yes, it's a cloud-based database; it does autoscale. If you get a spike and the database starts to scale up, it will scale.
That kind of on-demand scaling is a slightly higher cost than pre-provisioned scaling. We typically do not see that many customers who suddenly have a spike in their database; sure, it happens, but it is unusual. Typically, we work with retail customers right around this time of year to pre-provision their capacity for a specific period of time. Another one is accounting businesses; there are some businesses like that where we work with them to scale up around tax time, so we provision in advance of that period, they ramp up to what they've scaled to, and when that period is over, they ramp down. And again, you can get pretty granular with exactly the capacity you need and when you need it, so we have both of those models.
So, real-world scenarios: again, a lot of it has to do with scalability and global reach, needing systems that don't go down, that will be available even if you lose a data center or something like that. In fact, there's some anecdotal evidence that during Amazon's last big outage, and I'm not picking on Amazon, every cloud has its outages, some of those large players I showed earlier kept running because they were running Cassandra in a global way, and it supports a multi-data-center deployment. It's focused on being up, so if you lose one part of your deployment, it will still run and serve customer requests. And again, it's really all about needing something that scales and has maximum uptime. So yes, good questions, thank you. So let's talk about Langflow a little bit.
Langflow is an open-source, graphical, drag-and-drop interface, and there's a whole community around it. It's based on a code framework called LangChain, and it allows you to rapidly iterate, to just work in a drag-and-drop mode. Every one of the widgets in Langflow is actually a code widget, so you can drop into code if you want to: you could create a custom widget with LangChain and Python code and then drag it on. And it really provides pre-built ways to integrate with all of the different things you need to coordinate when you're building a GenAI application. Here's just a little taste, and I'll show a little bit of this in the tool as well, but when you're building a GenAI app, you have a lot of things you need to orchestrate.
So Langflow helps you do that. Let me quickly talk about a customer who has built their business on Langflow. This is super interesting. They have a product called Athena; it's kind of like an assistant that knows everything within your enterprise, so it's intelligent about all of the knowledge you have within your internal systems. You basically invite it to meetings, you can give it tasks, and it will go out into your internal systems, essentially, and get you the information, using agentic workflows. These guys actually came out of the defense industry, so they know how to work in that world, and some of their initial customers required extreme government compliance. You can see this quote here from Brendan Guiles, and again, I mentioned our event on November 10th; he will be at that event. Langflow completely transformed the way they worked: they would take Langflow to their customers and start to build these flows, the customer could build some initial flows themselves, and they went from PoC to production in two weeks. So really quite a dramatic result, and it's enabled their business to move extremely fast.
And just a little bit at the bottom: we do have subject matter experts in building GenAI apps, and when we have customers who are really building at this scale, we can partner with you to help advise you on the best way to go about it. So I'm just going to talk a little bit about what it really takes to build a GenAI app, and then we'll go to the demonstration. I think we had a poll we wanted to show.
Are we able to show the poll?
>> Yeah, absolutely. That's all set up.
>> So, while I'm talking, we're just curious where folks are in their journey to build GenAI apps.
This is a much simplified view of what you typically need to do if you want to build a corporate GenAI app based on your data. So typically you have, and I'm just going to move my mouse around here so folks can see, your vector store, where you want to take your data and turn it into something that can be used in a GenAI setting. In order to do that, and these are somewhat equivalent terms used interchangeably, you create an embedding, something called a vector. At its core, GenAI is all statistics and math, and so the representation in a database is something called a vector. If you remember any of your high school math, matrices, arrays: it's literally a list of coordinates, a mathematical object. Typically, in a GenAI setting, it has 1,500 coordinates or something like that. So you first need to take your data and run it through a model, for example an OpenAI model. And I'm just noticing, it's funny: this slide is very OpenAI- and Azure-focused; I've had it for a while. So no, this could be any cloud, and no slant on anybody who's using a different cloud, but that's what you're seeing in this one visualization. So you create the embeddings, which means running your data through the model and storing the result in a vector database. And then, are we getting answers here for our poll, Brian?
>> No, it popped up, so we should be getting some of those. Just looking for results.
>> Okay. All right. And then you're able to query it in a GenAI way. You're able to say, "Tell me something about my data." You get some results back, and then you send them on, like a typical ChatGPT-type request, through a GenAI model. You get a response back, and you send it to the app.
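The pipeline being described here, embed your data, store the vectors, retrieve by similarity, then hand the results to an LLM, can be sketched end to end in plain Python. This is a hedged toy illustration, not DataStax or Langflow code: the bag-of-words `embed` function stands in for a real embedding model (watsonx, OpenAI, etc.), the in-memory list stands in for the vector database, and the final LLM call is left as an assembled prompt string.

```python
import math

# Toy corpus standing in for your proprietary documents.
docs = [
    "Astra DB is a serverless vector database built on Cassandra",
    "Langflow is a drag and drop tool for building GenAI apps",
    "Cassandra supports multi data center deployments",
]

# Fixed vocabulary built from the corpus. A real embedding model
# (watsonx, OpenAI, ...) would replace embed() entirely.
vocab = sorted({w for d in docs for w in d.lower().split()})

def embed(text):
    # Stand-in embedding: normalized bag-of-words counts. Real models
    # return dense vectors with roughly 768-1536 dimensions.
    words = text.lower().split()
    vec = [float(words.count(w)) for w in vocab]
    norm = math.sqrt(sum(x * x for x in vec)) or 1.0
    return [x / norm for x in vec]

def dot(a, b):
    # Dot-product similarity, the metric named later in the demo.
    return sum(x * y for x, y in zip(a, b))

# 1. Load: embed each document and keep (text, vector) pairs.
store = [(d, embed(d)) for d in docs]

# 2. Retrieve: embed the user's question, take the closest document.
question = "which tool helps with building GenAI apps"
qvec = embed(question)
context, _ = max(store, key=lambda item: dot(qvec, item[1]))

# 3. Generate: assemble the prompt you would send to the LLM.
prompt = f"Answer using this context:\n{context}\n\nQuestion: {question}"
print(context)
```

Swapping the stubs for a real embedding model and a real database client is exactly the wiring work that the Langflow demo below does with drag-and-drop widgets.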
So, flow-wise, and I'm showing you this because you're going to see something very similar in Langflow: you create the embeddings, then your user's input creates a search against that database, you get some results, you send those results to the LLM, and then you package it up as an API that can be embedded in any application. So, Langflow really helps you do all this. Let's go take a look at it. I think
>> And Gil, we had a couple of responses. The overall thought was: a few within the next six months,
>> Okay.
>> some within the next quarter or the next year, and then not sure yet for about 50% of the folks here. So definitely top of mind, but you know how some of these projects go; they just take a little more planning and timing.
>> Absolutely. And if you do not have a toolkit to help you manage all of this, and I know this is a lot to talk through in a somewhat beginner presentation, this is the struggle people are having: really orchestrating all of it. So thank you, thanks for the insight there. Hopefully this is something people will crack open to help them move more quickly. I have a slide about agentic. There's a lot of talk about agents; an agent is really any kind of piece of code or something that works with the LLM and can be plugged in to do something for you. This is a great link: if people are really trying to wrap their heads around agents, I would suggest they read this guide. It's a really great guide that IBM put together. But I really want to get to our demo. This is our legacy DataStax picture: last year, if you saw us at any shows, we had this purple flying rhino, so I thought that was fun to include here. So I'm going to start with a database, actually.
I'm just going to quickly show you what it looks like when you're working with the Astra database. This is, you know, like any cloud portal. I actually have a database running here; we'll look at that in a second. But let me just go and create a database and show you how easy it is. Typically, if we were creating a database on premises, I'd have to provision servers, wait for the servers to start, size them, attach storage, on and on. If I just want to create a serverless vector database, and Astra is serverless, that's a key feature here, let's just give it a name, "webinar DB," and I'll say I'm deploying it into Amazon US East, and create the database. This is going to provision everything I need. That's it. I did not have to think about how big this thing is; I can assign additional compute capacity to it later if I need to. It's just going to run, and when it's finished, I'll have an endpoint here that I can access it with, and in addition, the portal gives me all kinds of code to access it. This could take, you know, five minutes, sometimes a little more than that, but in the interest of time, let's take a look at this one that I've got here. So again, the portal is telling me my endpoint. I can generate tokens here for access,
and I have a monitoring portal here. This is just a super high-level view of requests; there are a lot of metrics here in case you want to integrate this with your existing metrics and monitoring solutions, and they can be exported. And then I've created a keyspace here. You can see this keyspace actually has data in it, and I can navigate it and look at a different view of the data. These are actually vectors, representations of data that I've brought in, which I'm going to be talking about. And because this is a vector database, I have defined it with a specific AI model. This one has 768 dimensions; again, that's that string of numbers, the coordinates, and it's using something called dot-product similarity. When we do a search against a vector database, it uses a mathematical algorithm, as opposed to your typical SQL select, "find me a match based on a key" kind of thing. So this is the embedding model that it uses to find something, and that's defined with the vector database. There's a huge amount of functionality in here; I just wanted to give people a sense that this really is a platform-as-a-service. You just come here, create one, get the keys from it, and there's an integrated security model in case you want to integrate this with your enterprise security and define entitlements of who can do what, etc.
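For the curious, the mathematical similarity search mentioned here is easy to show in miniature. The sketch below is a toy illustration with 4-dimensional vectors rather than 768; it contrasts dot-product similarity with cosine similarity, which coincide when the embeddings are normalized to unit length.

```python
import math

def dot(a, b):
    # Dot-product similarity: sum of coordinate-wise products.
    return sum(x * y for x, y in zip(a, b))

def cosine(a, b):
    # Cosine divides out the vector lengths; dot product does not.
    return dot(a, b) / (math.sqrt(dot(a, a)) * math.sqrt(dot(b, b)))

# Two toy 4-dimensional embeddings (the demo model uses 768 dims).
v1 = [0.1, 0.9, 0.0, 0.4]
v2 = [0.2, 0.8, 0.1, 0.3]

# Scaling a vector changes its dot product but not its cosine,
# which is why dot-product search expects consistently scaled
# (typically unit-length) embeddings.
print(round(dot(v1, v2), 3))                       # 0.86
print(round(cosine(v1, v2), 3))                    # 0.984
print(round(cosine(v1, [x * 10 for x in v2]), 3))  # 0.984, unchanged
```

Real vector databases use approximate-nearest-neighbor indexes rather than a brute-force loop, but the per-pair score they rank by is this same arithmetic.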
And this is really what we're going to base our Langflow GenAI application on. So let's go to Langflow. I am running Langflow Desktop: I can go to langflow.org and just install the desktop app for macOS or Windows, and it creates this graphical program on my desktop. These are some projects I have, but I can just come in here and create a new flow. Let's start with a blank flow so I can get you right into the canvas. So I have basic tools for building a flow here, like chat input and chat output, your basic hello world. I just connect them, and if I run it, as we would expect, if I say "hello webinar," it's going to come back and say "hello webinar." So, very simple, and it's really kind of deceiving how simple it is, because I could actually deploy this: it has an API endpoint. This is the graphical component of Langflow, but behind the scenes is the actual server. When you deploy this into production, you deploy the server without the graphical piece, and it is literally serving these flows. So I have an API immediately with this thing, right? And it even tells me how to access it: Python, JavaScript, curl, etc. This is just ready to go. So if you remember that complicated diagram I showed, the last piece was "make it available as an API"; you get that immediately here. Let me show you a little bit more of what's in here.
I can do agentic work; I have models that I can bring in here, and all kinds of different ways I can get data and control processing. Also, if you recall that slide with all those different GenAI solutions that people have to orchestrate: everybody who participates in the Langflow community, which is almost anybody who wants to provide a GenAI solution, is in here. And in fact, if we go over here to IBM, the IBM LLM is actually in here. Let's get rid of this line and wire this up. Okay. So now we want to talk to the Whoa, what happened there? Sorry.
We want to talk to the IBM LLM, so our chat input will go in, and the output will come from the LLM. I'm actually running a watsonx service right now: I have a watsonx project set up, and I've set up environment variables which hold my keys. So if I do this, select this model, and run the playground, I can now ask something fun: "What is your favorite webinar?" Let's see what it says. So we are now using the watsonx LLM to build our own little ChatGPT here, which we could embed in any of our applications. You can see it's saying something fun: technology TED talks, HubSpot Inbound, etc. Now, our challenge here today is that we want to build these types of apps with our data.
So, for example: what does watsonx know about me, Gil Isaacs? "Tell me about Gil Isaacs." Okay, so I haven't given it any data. And by the way, this gives me a different answer every time I do it, because it finds some interesting things online. So there's somebody who's a founder and CEO of Tonic Health; I can tell you that's different from what I got last time, which just has to do with the way it goes out on the internet and what it finds first. And we haven't done any kind of refinement of what we really consider to be accurate, using AI techniques, ranking, and things like that. Oh, I have a question: "How easily does Langflow integrate with other software?"
Thank you. So again, the point I was making before: this is available. If I like this flow, I would just deploy the server piece of it, and it would be running with a known API URL. All of these widgets, let's close this down, are code-based; here is the LangChain code that this widget is based on, so you can really integrate with anything. And again, the community is quite vibrant, and there are new integrated pieces here as well. And then the last piece: when we start talking about agentic, you can deploy these flows as an MCP server. So now you're using MCP, which is the way everybody is doing agentic and agent tools, to access flows running in Langflow. So thank you for that question.
So let's go back to this question of what we know about Gil Isaacs. Let's think about our data. Clearly, with just a standard LLM call, we wouldn't know about our data. The way we do that, and it maps back to that whole big flow I showed, is something called RAG: retrieval-augmented generation. If I go and create a new flow, we actually have a pre-built one for RAG; I've got one that's already running, so in the interest of time, I'll go into that one. If you recall, there were two pieces: load data, then retrieve data. Right? So the first piece is down here in this pre-built RAG flow, and in this case, I've made it so that it uses the IBM watsonx embedding model to embed my specific data. Let me just quickly show you, and not enough folks know this: you can actually use LinkedIn to create a resume version of your profile. So I have that, and I have used this flow to create a database version of my data, embedded with embeddings. This has run, and it has inserted the data. And if we go back to the Astra view, which was here, that's exactly what's here: there's the content, then the metadata and the vector. This, which is binary, so it's not really going to display, is the vectorized version of my data. So that's the first piece: think of it as your proprietary data that has now been vectorized. The next piece is that we now want to engage with the user.
That's this piece up here, and you're also seeing a little bit of the look and feel of Langflow. I'm going to have some input from the user, and I need to read against the database. Again, this is the Astra database I've been accessing; remember, I showed you I can get the keys right from the portal, so it's really easy to put them into this flow. Then I do some prompting: take the content that came back and send it to a ChatGPT-type search; in this case, we're using the watsonx AI model, not the ChatGPT one. And you can see I've run this before: "Tell me what you know about Gil Isaacs." Pre-sales software solution cloud architect, pre-sales technical consulting, worked in various roles; it has my actual companies here. Again, I'm just trying to move this along in the interest of time, but think of this as your data, for whatever business you're in. Let's say I'm in some financial advising business and I have proprietary financial advice: I will create my own firewalled database in Astra, vectorize that data, and now I can give it to users who want to know what the financial analysts have to say.
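The firewalled-data pattern just described can be pictured as records pairing content and metadata with a vector, echoing the three columns shown in the Astra view. This is a purely illustrative in-memory sketch: the field names, the tiny 3-dimensional vectors, and the `search` helper are all made up for the example; it shows a metadata filter (the "firewall") applied before vector ranking.

```python
# Illustrative record shape: content, metadata, and embedding vector.
# Real embeddings would come from a model and have hundreds of dims.
records = [
    {"content": "Overweight bonds going into Q4",
     "metadata": {"team": "fixed-income"}, "vector": [0.9, 0.1, 0.0]},
    {"content": "Rotate into small-cap equities",
     "metadata": {"team": "equities"}, "vector": [0.1, 0.9, 0.0]},
    {"content": "Hold cash until rates settle",
     "metadata": {"team": "fixed-income"}, "vector": [0.7, 0.0, 0.3]},
]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def search(query_vec, team, k=1):
    # Filter on metadata first, then rank what remains by similarity.
    candidates = [r for r in records if r["metadata"]["team"] == team]
    ranked = sorted(candidates,
                    key=lambda r: dot(query_vec, r["vector"]),
                    reverse=True)
    return [r["content"] for r in ranked[:k]]

print(search([1.0, 0.0, 0.0], team="fixed-income"))
```

Production vector databases expose this same filter-plus-rank combination natively, so the restriction happens inside the index rather than in application code.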
So let's take this to the last step
which is the agentic step. Um let's see
how do we start integrating lang flow
into existing AI workflows. Well, uh it
depends what you've used for your uh
workflows.
Typically, the way a lot of these AI flows integrate now is through capabilities like MCP servers, so often whatever you're using supports MCP. If it isn't available as a tool over MCP, you should be able to deploy an existing flow on a server where it has an endpoint, and then Langflow can incorporate it via that endpoint.
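For the endpoint route, calling a deployed flow over HTTP might look roughly like this. The base URL, flow id, API key, and payload shape below are all assumptions that depend on your Langflow version and deployment, so check your server's API docs; the sketch only builds the request rather than sending it, so it runs without a server.

```python
import json
import urllib.request

# Hypothetical values: the base URL, flow id, and API key all depend
# on where and how the flow was deployed.
LANGFLOW_URL = "http://localhost:7860/api/v1/run/my-flow-id"
API_KEY = "MY_LANGFLOW_API_KEY"

def build_run_request(message):
    # Payload shape assumed from typical Langflow "run" calls;
    # verify the exact schema against your server's API docs.
    payload = {"input_value": message,
               "input_type": "chat",
               "output_type": "chat"}
    return urllib.request.Request(
        LANGFLOW_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json",
                 "x-api-key": API_KEY},
        method="POST",
    )

req = build_run_request("Tell me what you know about Gil Isaacs")
# urllib.request.urlopen(req) would actually invoke the flow; we only
# construct the request here so the sketch runs offline.
print(req.get_method(), req.full_url)
```

The same pattern works in reverse: any workflow that can POST JSON can call a Langflow flow this way.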
So I'm going to show you a quick agent flow, and then I'll show you the final agent flow, just so you see how it works. This is a simple one. Basically, I'm going to change it to use a different model. I'll go over here, and again, we'll drag in our watsonx model. We can search and bring Watson in here, if my drag will work. Come on. Bring it in.
And
I'll set those same parameters. You can see how easy it is to reuse all of this once you have it set up: my project ID, my API key. In this case, I just want the language model itself; I'm not wiring it up separately, because the agent is going to call it as its language model. And I'm going to connect another model. Again, what you're seeing here is how Langflow makes it really easy to switch back and forth between frameworks. And let's just run this
thing. And now, oh, whoops, let me show you something first. This has a URL tool automatically connected to it. I have a weather site, and yesterday it was really windy in New Jersey, so here's the URL for the wind. I'll go here to my chat and say: tell me what I can expect from the wind based on this URL.
And what you're going to see here is the agentic flow Oh, what happened? It does not like my deployment. This is what happens when you do a live demo. Oh, I did not select a model. It helps if I tell it which model to use. Okay, we'll just save this.
This is proof that we're really live, folks. All right, playground. Run this thing. And here's what you're seeing: this is an agentic flow. I have two tools, but I'm not going to use the other one; it's a calculator, which is much less interesting. This is our kind of default reference agentic flow. You can see the agent calling the URL tool, and this is essentially a RAG scenario: it's reading the data from the URL, vectorizing it, and then sending it to the LLM, which tells us, hey, here's what's going on at this URL.
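That read-the-URL-and-summarize loop can be sketched in pure Python with stand-ins for the tool and the model. Nothing here is Langflow's actual API: `fetch_url` returns canned HTML instead of making a request, and `ask_llm` stands in for the real model call; the HTML-to-text step shows the kind of cleanup a URL tool does before handing content to the LLM.

```python
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    # Collects visible text and skips <script>/<style>, so the agent
    # hands clean text to the model instead of raw HTML.
    def __init__(self):
        super().__init__()
        self.parts = []
        self._skip = False

    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self._skip = True

    def handle_endtag(self, tag):
        if tag in ("script", "style"):
            self._skip = False

    def handle_data(self, data):
        if not self._skip and data.strip():
            self.parts.append(data.strip())

def fetch_url(url):
    # Stub tool: a real tool would HTTP GET the page at `url`.
    return ("<html><body><h1>Wind Forecast</h1>"
            "<p>Gusts up to 45 mph in New Jersey.</p>"
            "<script>track()</script></body></html>")

def ask_llm(prompt):
    # Stub LLM: the real flow sends this prompt to the chosen model.
    return "Answer based on: " + prompt

def url_agent(question, url):
    parser = TextExtractor()
    parser.feed(fetch_url(url))          # the agent invokes its tool
    page_text = " ".join(parser.parts)   # cleaned page content
    return ask_llm(page_text + "\nQuestion: " + question)

print(url_agent("What wind can I expect?", "http://example.com/wind"))
```

In Langflow, the agent component handles the tool choice itself; this toy version hard-wires one tool to keep the shape of the loop visible.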
So this could be interesting for some of your internal sites: if you wanted to scrape URL information from existing sites, you could build an agent to do that. Now, the flow that I built here builds on the Gil Isaacs scenario we were talking about, so let's go to the flow that I had pre-built. What this one does: I'm interested in comparing my background to some jobs I'm looking at, so I actually have an old job posting. Whoops. This is the old job posting.
It's a DataStax strategic account executive role. And I'm going to ask this flow oh, we already did. I think we're running short on time, so let me just show what happened there. I asked, "How well do I match this role?" You can see the invocation of the tools here: it's getting my experience, matching it against the posting, and saying, well, I have this job experience, here are some relevant skills, and here's a conclusion. So this is really an agentic flow using my data.
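Stripped to its essentials, that comparison step amounts to retrieving my background and scoring it against the posting. Here's a toy keyword-overlap version with made-up sample strings; the real flow retrieves the vectorized background from Astra DB and lets the LLM judge the fit, so this scoring function is purely illustrative.

```python
def match_role(background, posting):
    # Toy keyword overlap standing in for the agent's LLM-based
    # comparison; punctuation is stripped so tokens line up.
    bg = {w.strip(".,:;").lower() for w in background.split()}
    jp = {w.strip(".,:;").lower() for w in posting.split()}
    shared = sorted(bg & jp)
    return shared, len(shared) / max(len(jp), 1)

# Hypothetical background and posting text for the sketch.
skills, score = match_role(
    "pre-sales cloud architect, technical consulting, account strategy",
    "Strategic Account Executive: pre-sales experience, cloud, consulting",
)
print(skills, round(score, 2))
```

The agent's answer is richer than a score, of course; it cites the matched experience and writes the conclusion in prose.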
And so this is kind of like the one that was already in the oven, and I sort of showed it to you. You're seeing where you can really get to with your own data and a combination of Astra database with Langflow in this scenario.
I just want to end, as I mentioned, on this event that we have on Monday. If you are in the New York area, or it's convenient for you to get to New York, you can register here. Sorry, this is an in-person event, not a webinar. We have an afternoon of free hands-on with IBM's open-source AI frameworks, with food provided, and then an evening meetup with pizza and beer, where, as I mentioned, the CEO of the startup Athena Intelligence will be there. There's a separate sign-up for the evening, linked here. We'd love to see anybody from this webinar join us. So, I think I dumped a heck of a lot of information on folks. Let me take a breath here and see where folks are at and what kind of questions people have.
>> Great job. Thanks for walking through all of that; it was certainly helpful. You can tell how excited you are about where DataStax is going, where it fits into the IBM portfolio, and the overall agentic experience it continues to provide and enhance. So, thanks. I definitely learned a lot and look forward to continuing to have these types of conversations. Are there any other questions from the group while we have someone like Gil on the call to answer them?
If there aren't any other questions: Gil pointed us all to this awesome event coming up quickly in New York. Take a look if it fits where you live or what you're looking to attend, and please sign up and register; it'll be a great event. If you have any additional questions, feel free to reach out to me, or if you're already associated with an IBM rep, obviously funnel them through there. But we're here to help, and we're looking forward to DataStax continuing to be integrated into the IBM fold and bringing it to more and more organizations. So, Gil, thanks for all the time today and for walking us through the tech.
>> Thank you, Brian. Thank you, TechD. And thanks to everybody who took some time out to listen to me blab on.
>> Perfect. Well, with that, we'll wrap it up here. There should be a recording that follows, and we'll get that out to everybody. Gil, thanks again.
>> Fantastic. Have a good one, everybody.
>> Bye. Thank you.