Oracle consensus is at the core of the Chainlink network. It is the mechanism through which oracles securely agree on data from external sources and reliably deliver it onchain.
Back in 2021, the Chainlink Labs research team pioneered oracle consensus by releasing the Offchain Reporting (OCR) protocol.
This protocol initially powered our Data Feeds product, and we have since been hard at work improving it; it has grown to be the backbone of everything we do at Chainlink.
Today I'd like to tell you more about how OCR works and what's coming next.
Let me first give you a short summary of how OCR operates and how the typical products operate as well. We have external data that we want to bring to a target system, and we have oracles in the middle that ingest this external data. In some cases this external data comes from an external API, as with Data Feeds; in other cases it comes from other chains, as with CCIP.
Within a round of oracle consensus, a report signed by a quorum of oracles is produced, and this attested report is eventually transmitted to the target system. Now, what is the target system? In most cases it is a blockchain, with a smart contract on the blockchain receiving the attested report. But in other cases it can be a non-blockchain system, as with Data Streams, for example.
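To make the round model concrete, here is a minimal, hypothetical sketch of the "report signed by a quorum of oracles" step. The key set, quorum rule, and use of HMAC in place of real digital signatures are all illustrative assumptions, not the actual OCR implementation.

```python
import hashlib
import hmac

# Illustrative only: HMAC stands in for real per-oracle signatures,
# and the quorum of 3 out of n = 4 oracles follows the usual BFT
# setting (2f + 1 with f = 1). These are assumptions for the sketch.
ORACLE_KEYS = {f"oracle-{i}": f"secret-{i}".encode() for i in range(4)}
QUORUM = 3

def sign(oracle_id: str, report: bytes) -> bytes:
    # Each oracle "signs" the report with its own key.
    return hmac.new(ORACLE_KEYS[oracle_id], report, hashlib.sha256).digest()

def is_attested(report: bytes, signatures: dict[str, bytes]) -> bool:
    # The report counts as attested once a quorum of signatures verifies.
    valid = sum(
        1 for oid, sig in signatures.items()
        if oid in ORACLE_KEYS and hmac.compare_digest(sig, sign(oid, report))
    )
    return valid >= QUORUM

report = b'{"feed": "ETH/USD", "price": 3500}'
sigs = {oid: sign(oid, report) for oid in list(ORACLE_KEYS)[:3]}
print(is_attested(report, sigs))  # True: 3 of 4 oracles signed
```

With only two signatures, `is_attested` returns `False`: the quorum requirement is what makes the attested report trustworthy to the target system.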
This model is in fact quite flexible, in that it powers many of our products. For example, Data Feeds, Data Streams, CCIP, and Proof of Reserve are all powered by OCR.
In fact, the power of OCR is that these products do not need to reinvent the wheel. Each product only needs to implement a plug-in interface specifying its product logic, and that is sufficient for it to harness all of the power that we'll be talking about today.
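The plug-in idea can be sketched roughly as follows. The method names (`observe`, `build_report`) and the median example are assumptions made for illustration; the real OCR plugin API differs.

```python
from abc import ABC, abstractmethod

# Hypothetical plug-in interface: each product supplies only its
# product logic, while consensus and transmission stay shared.
class ReportingPlugin(ABC):
    @abstractmethod
    def observe(self) -> bytes:
        """Collect this oracle's view of the external data."""

    @abstractmethod
    def build_report(self, observations: list[bytes]) -> bytes:
        """Aggregate a quorum of observations into one report."""

class MedianPriceFeed(ReportingPlugin):
    # A data-feed-style product: report the median observed price.
    def observe(self) -> bytes:
        return b"3500"  # e.g. a price fetched from an external API

    def build_report(self, observations: list[bytes]) -> bytes:
        prices = sorted(int(o) for o in observations)
        # The median is robust to outliers from faulty oracles.
        return str(prices[len(prices) // 2]).encode()

plugin = MedianPriceFeed()
print(plugin.build_report([b"3400", b"3500", b"9999"]))  # b'3500'
```

Note how one misbehaving oracle reporting `9999` does not move the median; the shared machinery never needs to know what a "price" is.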
Of course, the system must be reliable and secure; this is a top priority, and the numbers there are great. The system has been running reliably in production since 2021, and it has been securing over a hundred billion dollars in value.
Security is quite important, and there the system excels: it is Byzantine fault tolerant, with optimal fault tolerance. What does this mean, more specifically? It means that up to a third of the oracles can be actively trying to break the network in any way they can, and they will not manage to do anything; the network will keep going reliably and securely.
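The "up to a third" bound can be made concrete with the standard BFT arithmetic: with n oracles, optimal fault tolerance means withstanding f = floor((n - 1) / 3) Byzantine oracles, and a quorum of 2f + 1 honest-majority signatures suffices. This is the textbook formula for the setting described here, not a Chainlink-specific detail.

```python
# Standard BFT bounds: n = 3f + 1 oracles tolerate f Byzantine faults,
# and 2f + 1 signatures form a quorum for an attested report.
def max_faulty(n: int) -> int:
    return (n - 1) // 3

def quorum_size(n: int) -> int:
    return 2 * max_faulty(n) + 1

for n in (4, 16, 31):
    print(n, max_faulty(n), quorum_size(n))
# 4 oracles tolerate 1 fault (quorum 3); 16 tolerate 5 (quorum 11);
# 31 tolerate 10 (quorum 21)
```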
With all this reliability, security, and flexibility, OCR does not compromise on latency. In fact, the numbers are great: on Data Streams we have measured a tail latency of 376 milliseconds, and OCR is really tailored for use cases such as Data Streams. I want to strongly emphasize that this is not an experimental number measured in ideal conditions; it comes from the production system, measured with a geographically distributed set of oracles in North America, Europe, and Asia.
OCR scales horizontally. Once again, the production numbers prove this out: we have over a thousand instances of OCR running in production. The great thing about this is that each instance can cater to a different product, which allows us to tune each instance to each product's needs. So, for example, a Data Streams instance can be tuned to be insanely fast, while other instances can be tailored towards being more efficient, and so on.
Now, OCR is also cost efficient, and it achieves this in two ways. The first is the report attestation protocol, which reduces the verification cost of the attested report on the target system. It achieves this dynamically, depending on what is best for the target system. For example, in the case of Ethereum, it's quite effective to reduce the number of signatures in the attested report by half; the system can easily do that, and it does. On other systems we might make use of special cryptography, such as aggregate signatures. The other part is the transmission protocol, which is really unique to OCR. It ensures that we avoid duplicate transmissions to the target system; for example, if the target system is a blockchain, we do not send conflicting transactions. This is also very crucial for cost efficiency.
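The duplicate-avoidance idea can be sketched as follows. The round-robin schedule, the class names, and the delivery check are all simplifying assumptions (the real transmission protocol also involves staggered delays and escalation); this only shows why at most one transmission lands per round.

```python
import hashlib

# Hypothetical target system that rejects a report it has already seen.
class TargetSystem:
    def __init__(self):
        self.accepted: set[str] = set()

    def already_delivered(self, report: bytes) -> bool:
        return hashlib.sha256(report).hexdigest() in self.accepted

    def submit(self, report: bytes) -> bool:
        digest = hashlib.sha256(report).hexdigest()
        if digest in self.accepted:
            return False  # duplicate, rejected
        self.accepted.add(digest)
        return True

def transmit(oracle_index: int, round_id: int, n: int,
             report: bytes, target: TargetSystem) -> bool:
    # Skip if the report already reached the target system.
    if target.already_delivered(report):
        return False
    # Round-robin schedule: only one oracle is "first" each round.
    if (round_id + oracle_index) % n != 0:
        return False
    return target.submit(report)

target = TargetSystem()
sent = [transmit(i, round_id=7, n=4, report=b"r7", target=target)
        for i in range(4)]
print(sent)  # exactly one oracle transmits
```

For round 7 with four oracles, only oracle 1 is scheduled, so the target system sees a single transaction rather than four conflicting ones.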
So, to recap: what are the goals of OCR that I talked about? It's flexible, it's reliable and secure, it's efficient, and it has incredibly low latency.
And now I would like to switch gears a
bit and talk to you about some of the
newer use cases powered by OCR and
what's coming next.
With the Chainlink Runtime Environment (CRE), our mission is to enable third-party developers to harness all this power. Now third-party developers can develop their own oracle applications.
What does this look like in more detail? We have developers coding their own workflows, specifying the logic they use to process data. This data may, in the traditional way, come from an external data source, but it may also be internal to the system: other users might submit data to CRE that the workflow can then process. Now, what's the output of a workflow? A workflow can store more data in the system or produce some effect on the target system; for example, if the target system is a blockchain, it can send a transaction.
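A workflow's inputs and outputs, as just described, can be sketched like this. All names here (`WorkflowContext`, `my_workflow`, the store/outbox split) are invented for illustration and are not the CRE API.

```python
from dataclasses import dataclass, field

# Hypothetical workflow context: a store for data internal to the
# system, and an outbox of effects destined for the target system.
@dataclass
class WorkflowContext:
    store: dict[str, bytes] = field(default_factory=dict)
    outbox: list[bytes] = field(default_factory=list)

def my_workflow(ctx: WorkflowContext, external_input: bytes) -> None:
    # Product logic written by a third-party developer: combine
    # external data with data other users submitted to the system.
    combined = external_input + ctx.store.get("user-data", b"")
    ctx.store["latest"] = combined        # store more data in the system
    ctx.outbox.append(b"tx:" + combined)  # or produce an onchain effect

ctx = WorkflowContext(store={"user-data": b"-from-user"})
my_workflow(ctx, b"price")
print(ctx.outbox)  # [b'tx:price-from-user']
```

The point of the sketch is the shape, not the details: inputs can be external or internal, and outputs are either stored data or effects such as a transaction.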
How does this change the existing model that I just talked to you about? Now, oracles also need to be able to handle large amounts of data in protocol. That's exactly what we have achieved with the newest version of OCR, OCR 3.1, which I will be talking to you about now.
So, OCR 3.1 retains all these great
features but is really built for data at
scale.
So, a natural question to ask is: how much data? We have measured that the system can store upwards of 300 gigabytes of data. To put this in perspective, a long-running system such as Ethereum has amassed roughly the same amount of state over ten years. And this is a number measured in very realistic conditions: a geographically distributed setup, with some nodes even crashed to mimic real-world conditions.
OCR 3.1 also ingests data at high throughput; that's the second pillar of processing large-scale data, and in fact the numbers there are upwards of 400 megabits per second. Again, to contextualize this number, that is about the same data throughput as Visa processes.
So how does OCR achieve this high data throughput? One of the key ideas is that we decouple data dissemination from consensus. What does this mean, and why is it important? When an oracle comes across some piece of data, it does not need to wait for the next consensus round before it starts making progress on disseminating the data; it can immediately start broadcasting it. After this concludes, the oracle has a certificate of availability, which stands as proof that the piece of data is available from the network until some later point in time.
Then, when the oracle wishes to use the data in consensus, when it wants to observe a piece of information, instead of transmitting the full, large piece of data, it uses the certificate of availability in its place. Now, optimistically, every oracle will already have the data, so they can immediately begin processing. But even in the case where some oracle does not have the data, it is guaranteed to be able to fetch it and then quickly keep making progress as well.
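The certificate-of-availability step can be sketched roughly as follows. Again, HMAC stands in for real signatures and all names are illustrative assumptions; the point is that the consensus round only needs to carry the small certificate, not the megabytes of data it stands for.

```python
import hashlib
import hmac

# Illustrative keys and quorum: n = 4 oracles, quorum 2f + 1 = 3.
KEYS = {f"oracle-{i}": f"k{i}".encode() for i in range(4)}
QUORUM = 3

def ack(oracle_id: str, digest: bytes) -> bytes:
    # An oracle acknowledges (by "signing" the hash) that it holds
    # the data and will serve it until some later point in time.
    return hmac.new(KEYS[oracle_id], digest, hashlib.sha256).digest()

def make_certificate(data: bytes, acks: dict[str, bytes]) -> dict:
    digest = hashlib.sha256(data).digest()
    valid = {oid for oid, a in acks.items()
             if oid in KEYS and hmac.compare_digest(a, ack(oid, digest))}
    if len(valid) < QUORUM:
        raise ValueError("not enough acknowledgments for a certificate")
    # The certificate is tiny compared to the data it stands for.
    return {"digest": digest, "ackers": sorted(valid)}

blob = b"x" * 1_000_000  # a large observation, broadcast eagerly
digest = hashlib.sha256(blob).digest()
cert = make_certificate(blob, {oid: ack(oid, digest) for oid in list(KEYS)[:3]})
print(len(cert["ackers"]))  # 3
```

With a quorum of acknowledgers, at least one honest oracle is guaranteed to hold the data, which is why an oracle that missed the broadcast can always fetch it later and catch up.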
There are some more under-the-hood improvements to the protocol that we do not quite have time to talk about today, but I would still like to call out: dynamic state synchronization of the protocol, further optimizations, and an improved peer-to-peer networking stack.
I want to strongly emphasize that OCR 3.1, even though it processes such large amounts of data, still maintains all the great properties of OCR that I talked about at the beginning. In fact, the system maintains the great low latency of past versions, and it is self-healing even though it processes so much data.
So any oracle can crash, have a hardware failure, anything like that, and that's fine: it can quickly come back up, catch up with what has happened, and keep making progress once again. And of course, the system maintains its optimal fault tolerance, building on the rock-solid, battle-tested foundation of OCR 3.
The OCR 3.1 protocol design builds on a solid line of academic work going back to the 90s, and going from a design to a scalable production system takes a lot of hard work. So I would like to give a shout-out to my colleagues on the research team. I'm very proud to have had the chance to contribute to OCR alongside such a talented group of individuals.
If you would like to learn more about OCR, I encourage you to check out our OCR 3.0 paper; the QR code is on the slide. And there are some related talks that you might be interested in. I'll quickly call out the one by Philipp, happening in this room at 4 p.m., who will be talking about how Chainlink DKG leverages some of this high-throughput functionality of OCR 3.1 that I talked to you about today.
So that brings me to the end of the
talk. I hope you took away how OCR
enables our products to be reliable,
secure, fast, and efficient, and how
with OCR 3.1, we're building on the
solid foundation to enable the next
generation of products. Thank you.
At SmartCon 2025, Chainlink Labs Research Engineer Kostis Karantias explains how the Offchain Reporting Protocol (OCR) enables oracles to securely reach agreement on external data and reliably deliver it onchain. View the SmartCon 2025 playlist: https://www.youtube.com/playlist?list=PLVP9aGDn-X0R1kuQo8qLPnqlT7ThKQR2s