Learn more about Amplify Fusion: https://www.axway.com/en/products/amplify-fusion

Axway Amplify Fusion v1.12 supports AI-related connectors that let users connect their infrastructure to large language models from OpenAI, Ollama, Anthropic, Azure, Google, Bedrock, and others. This demo shows how to add and configure an AI connector in your integration, using the GPT-4.1-nano model by OpenAI. After configuring the connector, we make a simple call to test the integration.

You'll learn how to:
📋 Access the list of AI-related connectors in Amplify Fusion to integrate with an LLM of your choice
🔀 Connect a GPT model to your integration using a configuration and API key from OpenAI
🔧 Perform a simple setup for a GPT integration in Amplify Fusion, input a prompt, and maintain conversation context
💡 Test an LLM integration in Fusion and monitor a successful run

This video is part of a series showcasing how easy it is to build integrations with minimal setup inside Axway Amplify Fusion. Watch the full series here: https://www.youtube.com/playlist?list=PLSlCpG9zsECp6eGdEr4rH2Ig86b0g14U-

Full transcript:

Hi. In this video, we're going to take a look at Axway Amplify Fusion's new OpenAI connector, which was recently released in 1.12. Let me come into Amplify Fusion and take a look at the connections. You can see that there are several AI-related connectors: Mistral, Ollama, Anthropic, Azure, Google, and also OpenAI, and I think there are others as well. We're going to look at OpenAI. So let's come here and create a project, OpenAI, and it's a Fusion project. As we typically do, let's create a test integration, and we'll put a scheduler on it even though we're not going to activate it, just so we can easily test it with the test button. Okay. Now let's add an OpenAI component onto the canvas. There's also Amazon Bedrock, but let's look for OpenAI. And here it is.
We currently support chat and chat streaming, and we're going to do a chat demo today. So let's grab Chat and expand the action properties. Of course, we need a connection first, the most important thing. Let's click Add and name our connector. Let's spell it right. All right. The connector really just needs two things: a model name and an API key. I have my API key, so let me paste that in. Oops, sorry. There's my API key. Now we need a model name. For both the API key and the model name, you can go to openai.com; you'll need an account so you can create a key. Then go into the API platform. First, your key: you'll find it here. You can create new keys and delete keys, but you can't view a key after it's created, so don't forget it. Next, I'm going to go to the API reference because I want to find a model. We're doing Chat. Where did that go? I scrolled right by it. Yeah, Chat Completions. In the Chat Completions reference you can see the models, each with a small description, such as whether the model is faster or more cost-effective. I typically use this one, gpt-4.1-nano, which is the fastest, most cost-efficient version of GPT-4.1. You could also use gpt-5-nano, which is, again, the cost-efficient version of GPT-5. This is just a demo, so why don't I grab that. And this is what we needed: the model name to use in our connector. All right, let's update that. Now let's come back here, click Refresh, and add our connector with our model. And here, the prompt is where you tell the LLM how it should behave and what it's doing for you, so "You are an intelligent AI agent," and the messages are where you actually send what you want. Also notice that messages is a document array.
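Under the hood, a chat call like the one configured here maps onto OpenAI's Chat Completions API, where the connector's prompt becomes a system message and the user message goes into the messages array. A minimal sketch of that request body in Python (the `build_chat_request` helper is illustrative, not a Fusion or OpenAI function):

```python
import json

def build_chat_request(model: str, system_prompt: str, user_message: str) -> dict:
    """Build the JSON body for a POST to the Chat Completions endpoint.

    The system prompt plays the same role as the connector's "prompt" field;
    the user message is one entry in the messages array.
    """
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_message},
        ],
    }

body = build_chat_request(
    "gpt-4.1-nano",
    "You are an intelligent ai agent",
    "Hello OpenAI",
)
print(json.dumps(body, indent=2))
```

Sending this body (with your API key in the Authorization header) is essentially what the connector does for you, so you never have to hand-build the request.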
So if you want to maintain context in a conversation, say you're building a conversation app and using this connector, you can keep all the messages going back and forth in this array. That way the LLM has context of what you asked and how it responded. Right now we're only sending one message, so we won't worry about that. Let's set this role value to USER, because I am the user, and the message is "Hello OpenAI". Let's click Send and then test it. If I look at the chat output, it's a success, and the response message is: "Hello! How can I assist you today?" So, in this video, we took a look at a very simple example of the new OpenAI connector in Axway Amplify Fusion, and we saw how to configure the connector and make a simple call. In future videos, we'll dive deeper into the OpenAI connector. Thank you.

#Axway #Fusion #GPT #LLM
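The context-keeping idea described in the transcript, appending each user message and each assistant reply to the same document array so the LLM sees the whole conversation on every call, can be sketched like this (illustrative Python, not Fusion's internal code; the `add_turn` helper is an assumption):

```python
# Running conversation history: the system prompt stays first,
# and every exchange is appended so the model keeps context.
history = [{"role": "system", "content": "You are an intelligent ai agent"}]

def add_turn(history: list, user_text: str, assistant_reply: str) -> list:
    """Record one round trip: what the user asked and how the model responded."""
    history.append({"role": "user", "content": user_text})
    history.append({"role": "assistant", "content": assistant_reply})
    return history

add_turn(history, "Hello OpenAI", "Hello! How can I assist you today?")
# On the next call you would send the entire history, not just the new message.
print(len(history))
```

Because the full array is resent on each call, the model can answer follow-up questions that refer back to earlier turns.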