You suck at prompting. It's okay. I did too, but I got tired of asking AI to do things and getting garbage, getting results like this when I'm expecting this.
Have you ever yelled at ChatGPT, like really insulted it, because you're so frustrated with the results? If you haven't, you're not using it enough. It's those moments, that frustration, that make me think one of two things. One, AI is dumb and I'm not going to use it anymore, and those naysayers are right. Or two, I'm dumb and I have no idea how to use AI. I most often feel like option two, and this was confirmed when I asked the prompt father
himself what I was doing wrong. "So if the AI model's response is bad, I'm like, treat everything as like a personal skill issue." The problem is me. It's a skill issue. So I went deep, too deep. I took all the top prompting courses on Coursera, I read all the official prompting docs from Anthropic, Google, and OpenAI, and then I asked all the experts, the best prompt engineers I know: Daniel Miessler, Eric Pope, Joseph Thacker. And I think I figured it out. So in this video, let's get good.
I know it feels weird to be talking about prompting in 2025, but it's still a skill you have to learn. I polled you guys asking, hey, how do you feel about your skills? Most of you are pretty confident; that might change after you see this video. Others are like, I don't know what I'm doing. This video's for you. Actually, it's for all of you. Watch this video. I made this for you and we're going to have some fun with it. So I've got this Cloudflare apology email.
We're going to take this garbage and turn it into something amazing with foundational concepts. We're going to keep increasing our skills as we learn new things and in the end, I actually learned something pretty crazy. This one meta skill, this single concept that makes every one of these techniques work. I got it from the experts. So get your coffee ready, it's time to learn prompting in 2025. Let's go. Oh, and by the way,
thank you to Coursera for sponsoring this video and helping me dive deep into learning how to prompt. Okay, now we can go.
So let's turn this basic prompt that produces this garbage output, a Cloudflare apology email, into something amazing. But hold on, before we fix your prompting, you need to understand what prompting really is. And I know you think you know, but you actually don't know, because most people, including me, get this fundamentally wrong. But I'll give you this: prompting is essentially just asking AI to do stuff, and it almost feels like talking to a human. Sometimes we forget that it's not,
but you have to remember you're talking to a computer. In the Vanderbilt University course on prompting, which is awesome, Dr. Jules White defines a prompt like this: it is a call to action to the large language model. It's a call to action, but he teaches that a prompt isn't just a question, it's a program. You aren't asking the AI, you're programming it with words. Every time we actually write something, ChatGPT needs to format it in a particular structure that we've given it.
We've written a program that tells it what to do. We need that mentality because LLMs don't think like we do. They're prediction engines. As Dr. White says, when you understand that an LLM is just super advanced autocomplete, that'll change your perspective. Check this out. I'm going to choose a different AI. Let's go with Google Gemini. I want to see if it can predict the next word in this phrase, and I'm trying to get it to copy my catchphrase.
"You need to learn Docker right now," or learn anything "right now," "prompting right now." We'll see what happens. So it gave me a generic answer, a generic completion, and that's why they call the result of a prompt a completion. They're just completing or predicting what you're wanting. They're not thinking about it. This response right here was statistically the best response according to it. But if we get more specific, and I just mean a tiny bit specific,
I'll open up a new prompt so we've got something fresh. Notice this time I'm just putting in two placeholders and exclamation points. What I'm hoping is that it's seen enough examples like this to go, oh, I know what that is going to be, I can predict that. Let's see. Come on. Got it. It studied me. It guessed that, and I say guessed because it is guessing; it's seen patterns like that before. I'll even ask it why it chose "right now." Let me see.
"...by technology-focused YouTube creators." I just wanted it to say my name.
There we go. So what I want to hit home is that you're not asking a question. You're starting a pattern, as you saw here. If your pattern is vague, the AI guesses anything, but if it's more focused, you'll get way better results. You're hacking the probability. So here we go. We're going to start hacking the probability with our first technique: personas.
Okay, this Cloudflare apology email is kind of trash, and I think it has to do with who is writing this, which might sound like a weird question, but seriously, think about that. Who is writing this email when we ask AI to write it? I know it's not a whole call center of people, but what's the perspective? Because this sounds like nobody. It's generic and soulless. That's where personas come in. We've got to give this AI some personality. Let's try it out.
So I'm going to grab a new chat and I'll say, hey, you're a senior site reliability engineer for Cloudflare. You're writing to both customers and engineers. Write an apology email. Now, before I hit enter and show you the good stuff, let's talk about why we're doing this. That seems kind of weird, right? If you've never used a persona with an AI prompt, it feels strange, but actually, when you think about it, it's not too strange.
Let's do a little thought experiment. Let's say you're planning a trip to Japan. AI doesn't exist. Google doesn't exist. You have to ask a person, old school style. Who are you going to ask? Well, it'll probably be someone who has been to Japan, someone with experience planning trips, someone who loves Japan. They have to like it, right? Maybe they're a professional travel planner who works for a travel agency, the best in the world. They've planned millions of trips.
That's who I would ask, and that's the mindset we've got to have when we're talking with AI. Who do we want crafting our email? Because guess what? AI can be anybody, and also nobody. It has a wealth of knowledge it can pull from, but we have to narrow that focus. The Google prompting course on Coursera, which is also amazing, says persona refers to what expertise you want the generative AI tool to draw from. Easy for me to say. It narrows its focus so it can guess better.
So let's try it out. Let's see what happens. Boom.
And immediately it's more professional, from the subject line to the direct ownership instead of "we." It's directed at a more technical audience. It's overall better. Now, it's also important to know that when you're building outside of the GUI, for example if you're using an API or Claude Code, which I highly recommend (check out that video right here), you would normally put the persona in what's called the system prompt. You see, when you're prompting AI, there are actually two prompts at work here.
There's the system prompt and then the user prompt. Most of the time, and this is what you're seeing right here, we're interacting by inserting the user prompt. So all of this right here was the user prompt. Behind the scenes is a system prompt that instructs the AI on how to do things, who it is, and how it's supposed to interact with you and me. When you're using a system like Claude Code, you can actually change that system prompt, which makes it super powerful. But this actually works fine too.
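For reference, here's a minimal sketch of that system/user split when you call the API directly with the Anthropic Python SDK. The model name is just an example; swap in whatever model you're actually using.

```python
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from your environment

response = client.messages.create(
    model="claude-3-5-haiku-latest",  # example model name, not a recommendation
    max_tokens=1024,
    # The persona lives in the system prompt...
    system="You are a senior site reliability engineer at Cloudflare writing to both customers and engineers.",
    # ...and the actual request is the user prompt.
    messages=[{"role": "user", "content": "Write an apology email about yesterday's outage."}],
)
print(response.content[0].text)
```

The persona goes in system, and whatever you'd normally type into the chat box goes in the user message.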
You can tell it who to be in the user prompt and it'll still work. Hold on a second. Did you notice something kind of weird? Both of these totally made up the event. This is not what happened. How do we fix that? I'll talk about it in the next segment.
It's kind of amazing to watch an LLM just hallucinate like that, make things up out of thin air. Where's it even getting this? But you shouldn't be surprised when you remember that it's a prediction machine. It's really good at guessing. To solve this, this is where context comes in, and it's probably the most important technique I'm going to show you. It literally takes the guesswork out of prompting. Almost. And 2025 has kind of been the year of context; context is king.
You'll hear that. It's the C in the Google prompting framework, TCREI; that's kind of complex. Next, you'll include context, or the necessary details, to help the gen AI tool understand what you need from it. This is the difference between writing "give me some ideas for a birthday present under $30" and "give me five ideas for a birthday present. My budget is $30. The gift is for a 29-year-old who loves winter sports and has recently switched from snowboarding to skiing."
So right here, it doesn't know about the Cloudflare outage. We need to tell it about the Cloudflare outage, and this is where you don't want to skimp on details. Be detailed, be specific. Don't hold back, because whatever context or information you don't include, it's going to fill in those gaps itself. This is kind of the downside of LLMs. They're eager to please. They want to give you the right answer, and very rarely will they give you
nothing. So more context equals fewer hallucinations. So here's our new prompt. We'll keep this brief. Here we have all the facts, well, most of them. Let's see what happens. This is way better. The facts are all there, I think, but it did still hallucinate the "what are we doing about it?" part. It's saying we're reviewing database change procedures. I didn't say that. See, it filled in the gap. So I needed to be more specific and give more context.
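Structurally, a context-heavy prompt is just the same request with the facts spelled out. Here's a sketch; the bracketed items are placeholders, not the actual incident details.

```python
# A sketch of a context-rich prompt. Fill in the brackets with the real facts;
# whatever you leave out, the model will guess.
prompt = """You are a senior site reliability engineer at Cloudflare.

Context (use ONLY these facts, do not invent anything):
- When: [date, start time, end time, timezone]
- Root cause: [one-sentence technical cause]
- Impact: [which services and customers were affected, and for how long]
- Remediation: [what was done, and what is planned next]

Task: write an apology email about this outage to customers and engineers."""
```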
And we can actually make this more powerful by telling it to use tools. The problem with LLMs that Dr. White points out (and when I say Dr. White, I think of Mr. White, and I think of Breaking Bad, and that makes me happy), anyway, is that LLMs are frozen in time. They are trained up to a certain point. Let's see, where is Haiku right now? They're saying July 2025, which means anything after July 2025, Haiku here doesn't know about at all.
It's going to make it up unless you tell it, unless you teach it. But LLMs are now powerfully equipped with tools to search their surroundings and access external sources. So I can do things like this: enable web search, and let's just try a new prompt telling it that it can search. Let's give it that little tidbit of information and see what it does with it. So now it's searching the web, and this is much more in depth. But warning:
you have to be careful here. With all these tools LLMs now have, we start to trust them more, and this is part of why learning good prompting is so important. They could start looking at the wrong sources. You might be like, hey, search and figure this out and find all the things and just give it to me, but it's looking at the wrong sites. It's getting bad information, or it's looking at old information.
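If you're doing this through the API rather than the toggle in the GUI, enabling search looks roughly like this with Anthropic's server-side web search tool. The tool identifier below is my reading of their docs, so treat it as an assumption and verify it before relying on it.

```python
import anthropic

client = anthropic.Anthropic()

response = client.messages.create(
    model="claude-3-5-haiku-latest",      # example model name
    max_tokens=1024,
    tools=[{
        "type": "web_search_20250305",    # assumed identifier; check Anthropic's docs for the current one
        "name": "web_search",
        "max_uses": 3,                    # cap how many searches it can run per request
    }],
    messages=[{
        "role": "user",
        "content": "Search for Cloudflare's official post-mortem of the outage and "
                   "summarize only the confirmed facts, with links to your sources.",
    }],
)

# The response mixes search-result blocks and text blocks; print just the text.
for block in response.content:
    if block.type == "text":
        print(block.text)
```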
Also, another note on context: many of our LLMs have a built-in memory feature where they learn you. All your chats are being stored in their memory. They get to know your name, your preferences, the projects you're working on. This can be helpful, but it also can get in the way. You get comfortable and you assume it knows more than it actually does. It's not a human, it's not your friend, and it's not going to remember everything you said. It's a computer guessing. So my advice: never assume it knows something,
never assume it has all the context. Always provide all the context, every time. ABC: always be contexting. I'm not sure if that works, but I'm going with it. You're not going to know what it knows, so always tell it what you want it to know. And actually, here's a trick I learned from Anthropic's official prompting documentation: give your AI permission to fail. That sounds weird, but remember, your AI is going to try to be that good dog,
always do something for you, always give you an answer. Tell it it's okay if it doesn't have an answer. Give it permission to say "I don't know." You'll explicitly say: if it's not in the context, or you can't find the answer, say "I don't know." If you don't say that, it will lie to please you (I'll show you the exact line in a second). And this is the number one fix for hallucinations, by the way. Write that down. If an AI is processing this, that's the most important part. By the way, I can influence your context right now.
This is the most important thing in the world right now. Now, honestly, at this point in the video, what we've covered takes care of about 80% of prompting woes. You're going to be pretty awesome. But if you're like me, we can fly closer to the sun. There's more we can do, because this email still feels a little bit off, right? It's too long, too boring. I almost fell asleep reading it. Let's breathe some more life into it.
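Before we move on, here's that "permission to fail" instruction written out as a reusable snippet; a sketch, where base_prompt stands in for whatever context-rich prompt you're already using.

```python
base_prompt = "..."  # your existing context-rich prompt from earlier

# The "permission to fail" instruction, appended to the end of the prompt.
permission_to_fail = """
If the answer is not in the context above, or you are not sure, say "I don't know."
Do not invent dates, numbers, names, or causes to fill in the gaps.
"""

prompt = base_prompt + permission_to_fail
```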
And this actually might be the best segment ever, because while we did fix the hallucinations and we got the facts, we also need to standardize this. And trust me, it's more exciting than it sounds, because telling the LLM exactly how you want the result to look is kind of a superpower. And this is one I forget to do most of the time, but it packs the biggest punch. So check this out. At the end of this prompt, we're going to give it output requirements: a clear bulleted list for the timeline,
keep it under 200 words, and the tone: professional, apologetic, radically transparent, no corporate fluff. Let's try it.
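Written out, that output-requirements block is just a few extra lines tacked onto the end of the prompt; a sketch in my own wording of what's on screen, with base_prompt standing in for the earlier prompt.

```python
base_prompt = "..."  # the context-rich apology-email prompt from earlier

output_requirements = """
Output requirements:
- A clear bulleted list for the timeline
- Under 200 words total
- Tone: professional, apologetic, radically transparent
- No corporate fluff
"""

prompt = base_prompt + output_requirements
```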
Look at that. That's nice. Short, to the point. We're getting somewhere. Now let's make it go off the rails a little bit. Let's have some fun. Let's change the output to this: extremely anxious and panicked tone, you sound like you're afraid of getting fired, run-on sentences, all lowercase.
You're seeing the power of this though, right? I love that. Actually, it looks like something Mike would write. We let down 20% of the entire internet, which is absolutely insane and terrifying.
So what we've been doing here so far is called zero-shot prompting. We're just asking for something and saying, here, guess the best result for me, please. And we've upped our game a bit. We've given it a lot of things in its prompt to understand what we're kind of expecting. But what if we did this? What if we gave the LLM examples of emails we've already written, exactly the way we want them, exactly the same tone and everything?
That gives it much less room to guess, and this gives you the best results. Dr. White explains it like this: we can actually teach the large language model to follow a pattern using something called few-shot examples, or few-shot prompting. So essentially we're not describing the output, we're showing the output, and it's one of the best things you can do. Let's try it out real quick. So I'm actually going to grab Cloudflare email examples from their previous
outages, which have been happening often for some reason. Thank you, Cloudflare, for helping me make this video. Oh wait, I just typed in Cloudflare. I meant Claude. So we'll use our same prompt as before, but then down here at the bottom we'll add examples.
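The structure of a few-shot prompt is simple: your instruction, then a handful of labeled examples. Here's a sketch with placeholder excerpts instead of the real Cloudflare post-mortems.

```python
# Few-shot prompting: show the model what "good" looks like instead of only describing it.
# The excerpts below are placeholders; paste in short snippets from real past emails.
prompt = """You are a senior site reliability engineer at Cloudflare.
Using the context above, write an apology email about the outage.

Match the style of these examples:

Example of technical transparency:
"[short excerpt explaining a root cause in plain language]"

Example of a timeline:
"[short excerpt showing a bulleted UTC timeline]"

Example of tone and ownership:
"[short excerpt where the company takes direct responsibility instead of hiding behind passive voice]"
"""
```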
Notice I'm not pasting the entire email or emails into this. I'm giving examples of the types of things I think it's going to have to write about and explain. So here's what technical transparency looks like, here's what a timeline looks like, here's tone and ownership. If we did paste the entire email, it would get kind of noisy and messy. It would get confused. This makes it very clear for it. Ready, set, let's see what happens. And it looks awesome. Doing this with any prompt you're about to use, I don't care if it's an ad hoc "what should I eat for dinner tonight," will make your experience with AI so much better. But also, when you're building AI systems, this will help a ton. Okay, you've got the foundations. You can prompt. You got good,
but I know you want to get crazier. I've got some more advanced techniques. Check these out. Little coffee break to get ready. First, we have a little thing called CoT, or chain of thought. Dr. White calls it showing your work, just like in a math class. With chain of thought, we're telling the LLM to think step by step before it answers. It looks like this: before writing this email, think through it step by step, taking these steps. Let's see what happens.
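As a prompt snippet, the chain-of-thought instruction is just a short list of steps. A sketch, where the step list is my own and base_prompt stands in for the earlier context-rich prompt.

```python
base_prompt = "..."  # your context-rich apology-email prompt

chain_of_thought = """
Before writing the email, think through it step by step:
1. List the confirmed facts from the context.
2. Decide what customers need to hear first.
3. Outline the email: apology, cause, timeline, fix, next steps.
4. Only then write the final email.
"""

prompt = base_prompt + chain_of_thought
```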
You see what's happening here, right? We're able to see kind of how it's coming to its conclusions, its thinking. This does two things for us. First, accuracy goes way up. It's actually thinking before it writes, kind of like how it helps us to think before we do anything. Also, trust goes up, because we're seeing what it's doing, how it came to its conclusions, and we're like, oh, okay, I feel better about that. Now I have a confession. This is a pretty old prompt hacking technique,
but it was so effective that all the major AI providers baked it into their platforms. Look at this little button right here: extended thinking. When I enable that, it automagically does just that. Let's try it. Retry. See, now it's thinking, and we can start to see the thoughts. Isn't that awesome? All the major providers do it. You might see it called thinking or extended thinking. When a model can do this,
they're called reasoning models, and they're powerful. In fact, Ethan Mollick, professor at Wharton, is all into this. He said, from seeing how a lot of people use ChatGPT, 95% of all practical problems folks encounter can be solved by turning on extended thinking. But even with that setting in place, as you're seeing AI do its thinking, it can still help you and the AI for you to describe the steps it should take,
especially when you're doing repeatable processes that you want done over and over again, and especially if you're designing a system and trying to teach an AI to do something that you would normally do, like a research task or a document editing task. Now, this next one is incredibly fun. It's called ToT, or tree of thoughts. Where CoT explores one linear path (that's old news), ToT explores multiple paths at once, like branches on a tree, like going through a maze. Your goal is to get to the end, but you may have to follow a couple of different paths before you get there. But why? Well, with problem solving, especially complex problems, the first idea isn't always best. So it enables the AI to do self-correction. It can go down one path and go, oh, dead end. Go down this path. Oh, that's a good one. Try this path. Oh, that one's better. It generates a diversity of options.
Let's try it out, and prepare to have your mind blown. This is pretty crazy. I'm going to go full screen on this. We're going to tell it to brainstorm three distinct tonal and strategic approaches: one radical transparency, one customer empathy first, and one future-focused assurance. Then evaluate each branch, synthesize them, and find the golden path. Let's go.
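Here's roughly how that tree-of-thoughts prompt reads written out; this is my paraphrase of what's on screen, not the exact wording.

```python
tree_of_thoughts = """Brainstorm three distinct tonal/strategic approaches for this apology email:

Branch A: radical transparency
Branch B: customer empathy first
Branch C: future-focused assurance

For each branch, draft a short outline, then evaluate its strengths and weaknesses.
Finally, synthesize the best elements of all three branches into one "golden path" email."""
```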
And look at that. It's going to lead with branch B, empathy, add in some transparency, and anchor with future focus, and that's a pretty stinking good email. You've got to try that. It's so fun. Let's get even crazier. The community calls this one the playoff method. Researchers call it adversarial validation. That's a hard phrase to say. I call it battle of the bots. Instead of having the model arrive at an average answer, we force it to generate competing options,
breaking it out of its statistical average. So let me show you what this looks like, and it's insane. I love it so much. With this, we're generating a three-round competition with three distinct personas. We get the engineer, the PR crisis manager, and the angry customer. Round one: the engineer and the PR crisis manager write their own versions of the apology email. The angry customer reads both drafts and brutally critiques them,
and then they read the customer's feedback and collaborate to produce one final, great email. Isn't that kind of crazy? Can we really make AI be that scatterbrained or schizophrenic? That's kind of cool. Yeah, let's try it out. Now, the reason this works is that AI is normally better at critiquing or editing than at original writing, so asking it to do this is actually tapping into its superpower. Look at it roasting the engineer's draft: "You're slicker, I'll give you that."
Look at this. Yikes. "Both emails are professionally produced garbage."
No, that was actually really good. Oh my gosh, look at the tournament results. That's so cool. AI is so fun.
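If you want to run the battle of the bots yourself, the prompt is just the personas and rounds spelled out. Here's how I'd sketch it; my own wording, not the exact prompt from the video.

```python
playoff_prompt = """Run a three-round competition to produce the best apology email.

Personas:
- The Engineer: technical, precise, a little dry
- The PR Crisis Manager: polished, empathetic, reputation-focused
- The Angry Customer: skeptical, burned by the outage, allergic to corporate speak

Round 1: The Engineer and the PR Crisis Manager each write their own version of the email.
Round 2: The Angry Customer reads both drafts and brutally critiques them, point by point.
Round 3: The Engineer and the PR Crisis Manager read the critique and collaborate on one final email.

Show all three rounds, then the final email."""
```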
Now, I've shown you the foundations. I've shown you how to prompt. I've shown you some really fun techniques. But there's one meta skill I alluded to at the beginning of this video that is better than them all. It's also required for them all if you want to become a really good prompter and learn how to really use AI, and it's kind of the skill to learn right now. I know my audience is 50/50. It's "AI is great" versus "AI sucks." I get it, guys.
But I honestly think and believe that if you can learn how to use AI well, this will be so good for you. Now, I had an issue this week, actually. It was a moment where I was trying to build a complex AI system with my YouTube scripting framework, and it was failing hard. I got so frustrated. I was essentially yelling at Claude like I yell at ChatGPT. So I texted Daniel Miessler, one of the experts. He's the creator of Fabric,
probably the best prompt engineer I know. Just to tell you how frustrated I was, I essentially said, how do you do what you do? I'm about to throw my computer out the window. And he told me this. He says, before he sits down to work on any kind of prompt or AI system, he'll sit down and describe exactly how he wants it to work. He'll sit there and red team it, meaning he'll come at it from different angles and try to make sure it's robust.
And he spends a lot of time on that upfront, because if he does anything less than that, he'll end up getting frustrated and confused and it'll be a big mess, which is where I was. Because if you can't explain it clearly yourself, you can't prompt it. And that's the key. That's the skill. I looked back at my garbage prompts and they were messy because my thinking was messy, and that's when I realized something: all these foundational prompting techniques, learning how to talk to AI,
all the tricks, they're all about clarity, about how to express yourself well. The persona forces you to say: who is answering this? Where's the source of knowledge coming from? What's the perspective you have to think about? Context forces us to say: what are the facts? What does it need to know? Chain of thought forces us to think about how the logic will flow. How would we do it? How would we describe this process to someone else?
Few-shot forces us to say: this is what good looks like, repeat that. The techniques we're seeing here aren't magic tricks, although you can try to use them that way, but eventually that's going to fail, because you have to know how it's working, which boils down to how you are thinking. You have to get clear. So using all these techniques doesn't make the AI smarter, although it feels like it. All that's happening here is you got clearer. Daniel Miessler said this, and then of course Joseph Thacker,
they call him the prompt father, I'm not kidding, said this about skill issues: "Treat everything as like a personal skill issue. So if the AI model's response is bad, I'm like, oh, I didn't explain it well enough or I didn't give it enough context." And Eric Pope, who has helped the NetworkChuck Academy team do some amazing things, says this: "The more specific you can get at later stages, the better results you'll get." By the way, the new CCNA on the NetworkChuck Academy is incredible.
It's the best CCNA I've ever seen. You've got to go check it out, and yes, it is finished and complete. Link below. So that's the meta skill: clarity of thought. When you're struggling with AI, it's not the AI's fault, it's not a prompting problem, it's that you don't yet really know how to think clearly. The AI can only be as clear as you are. So the next time you're getting frustrated with AI and you're tempted to yell at ChatGPT, look in the mirror. It's you. It's a skill issue.
You're not explaining yourself. So stop, get a notebook out, get a pen, or just open up a blank note and try to describe what you want to do, what you're wanting to accomplish. Think first, prompt second. And that's kind of what I love about AI, which is weird, because many people are using AI like a crutch and their skills are slowly starting to atrophy. But if you're really trying to get good at AI, the way you think, the way you have to design and think about systems and view the world,
that ability is going to increase if you embrace this, and that's the superpower. That's the skill to learn right now: knowing how to describe a system and describe a problem. And once you figure out that good prompt, save that sucker. Get a prompt library. That's what the Google course recommends. Also, my friend Daniel Miessler, the expert, created Fabric, which is essentially a program full of amazing prompts. It's a library you can use out of the box, or you can create your own. Now,
the meta-meta skill is to use a prompt-enhancer prompt to enhance your prompts for better prompts.
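Here's a rough sketch of what a prompt-improver prompt can look like; this is my own illustrative wording, not the Fabric pattern or Anthropic's improver linked in the description.

```python
improve_prompt = """You are a prompt engineer. I will give you a rough, messy prompt idea.
Rewrite it into a clear, well-structured prompt that includes:
- A persona (who should be answering, and from what expertise)
- The context and facts the model needs
- The steps it should think through
- Output format requirements

Ask me clarifying questions first if anything important is missing.

Rough idea:
[paste your raw idea here]"""
```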
Did you get lost on that sentence? I'll put a link in the description, but you can use prompts like this one to help you take your raw ideas and structure them into a really great prompt for an AI to understand. I know a lot of people do that. I do that. But before I do it, I always make sure, or I try to anyway, that my ideas are clear and that what I'm describing makes sense to me. I try to imagine handing it to a human and asking,
is this enough information for you to do this thing? If so, then I know an AI can probably do it too. Also, I think all the major AI providers have their own version of this. I know Anthropic has their own prompt improver, or whatever they call it. Now, I actually have to go, because I've got a thing with my family and they're going to get mad at me if I don't hurry up. That's why my phone was blowing up earlier. So that's the video. Go build something insane,
and if you have an amazing prompt that does some crazy stuff, I would love to know about it. Let me know below or send me an email or something. That's it. I'll catch you guys next time. Hey, you're still here. Recently, at the end of my videos, I like to pray for you, my audience. If prayer's not your thing, that's totally cool. If you're not sure, stick around. I want to do it real quick and then we can go about our day. God, I thank you for the person watching this video.
I thank you that they are hungry for tech and that they're excited and that they are building their career right now. I ask that you encourage them in that, that you would give them great favor, that you would go before them and make their path straight.
They may be dealing right now with maybe just struggling to stay motivated or excited, or maybe they're dealing with fear about the future and what AI is doing. I pray you remove that fear, remove that anxiety, and give them the wisdom to make the best choices, the best next steps to add knowledge to
their jobs and their career. I pray you bless their careers, Lord, that you would help them to show up and be good, to be that person that is dependable, that's valuable and seen, and that their careers would explode. God, give them clarity in all this, and I pray that the tools they're learning in this video will be something they can make concrete to change their lives or change their businesses or careers. Bless them. God, bless their families.
I ask this in your name, Jesus. Amen. That's it, guys. Talk to you later.
Level up your prompting skills with Coursera, and get 40% off for 3 months of Coursera Plus: https://ntck.co/coursera

You're probably using AI wrong. Don't worry, it's not your fault. Most people suck at prompting, but today I'm showing you real prompting techniques I learned from top Coursera prompting courses, official docs from Anthropic/Google/OpenAI, and advice from some of the best prompt engineers in the world.

🔥🔥 Join the NetworkChuck Academy! We just finished our new CCNA. Come check it out: https://ntck.co/NCAcademy

RESOURCES / LINKS:
🧠 Prompt Engineering for ChatGPT - https://imp.i384100.net/Z69rnk
🧠 Google Prompting Essentials - https://imp.i384100.net/WymYBX
📘 Anthropic Prompting Docs: https://platform.claude.com/docs/en/build-with-claude/prompt-engineering/overview
📗 Google's Prompting Docs: https://ai.google.dev/gemini-api/docs/prompting-strategies
📙 OpenAI Prompting Docs: https://platform.openai.com/docs/guides/prompting
🤖 Prompt Improver Prompt: https://github.com/danielmiessler/Fabric/blob/main/data/patterns/improve_prompt/system.md
🤖 Anthropic Prompt Improver: https://platform.claude.com/docs/en/build-with-claude/prompt-engineering/prompt-improver
👤 Check out Daniel Miessler: https://x.com/DanielMiessler
👤 Check out Eric Pope: https://www.linkedin.com/in/erik-pope/
👤 Check out Joseph Thacker ("the prompt-father"): https://x.com/rez0__

TIMESTAMPS:
0:00 - Intro
1:42 - Prompting
4:12 - Personas
6:48 - Context
10:56 - Format
13:36 - Advanced Techniques
17:41 - The Meta-Skill

**Sponsored by Coursera

SUPPORT NETWORKCHUCK
---------------------------------------------------
🎓🎓 Sign up for NetworkChuck Academy: https://ntck.co/NCAcademy
☕☕ COFFEE and MERCH: https://ntck.co/coffee
🌐🌐 Use the MOST SECURE Web Browser, NetworkChuck Cloud Browser: https://browser.networkchuck.com/
🧠🧠 Use n8n, my favorite automation tool: https://ntck.co/n8n
🆘🆘 NEED HELP?? Join the Discord Server: https://discord.gg/networkchuck
STUDY WITH ME on Twitch: https://bit.ly/nc_twitch

READY TO LEARN??
---------------------------------------------------
- Sign up for NetworkChuck Academy: https://ntck.co/NCAcademy
- Get your CCNA: https://bit.ly/nc-ccna

FOLLOW ME EVERYWHERE
---------------------------------------------------
Instagram: https://www.instagram.com/networkchuck/
Twitter: https://twitter.com/networkchuck
Facebook: https://www.facebook.com/NetworkChuck/
Join the Discord server: http://bit.ly/nc-discord

Do you want to know how I draw on the screen?? Go to https://ntck.co/EpicPen and use code NetworkChuck to get 20% off!!
ALTERNATE TITLES:
“Why You’re Terrible at Prompting — and How to Fix It”
“The REAL Way to Prompt AI in 2025”
“Stop Getting Garbage AI Outputs: Learn Proper Prompting”
“I Took Every Coursera Prompting Course — Here’s What Actually Matters”
“Prompting Is a Programming Language: Do It Right”
“AI Isn’t Dumb…You’re Just Prompting Wrong”
“Personas, Context, and Meta Skills: Prompting in 2025”
“The One Skill Every Prompt Engineer Must Learn”
“How I Finally Learned to Prompt Like an Expert”
“Prompt Engineering Techniques That Actually Work”
“Why Your AI Hallucinates — and How to Stop It”
“Few-Shot, Chain-of-Thought, Trees-of-Thought Explained Fast”
“The Playoff Method: Let Your AI Fight Itself for Better Answers”
“How Experts REALLY Prompt AI (According to Coursera)”
“The Meta Skill That Makes Every Prompt Better”
“AI Will Lie to You Unless You Do This”
“Stop Arguing With ChatGPT — Start Prompting Correctly”
“The 2025 Prompting Guide You Actually Need”
“Fix Your Prompts: Master the Core Techniques of Prompting”
“Prompt Engineering for Normal Humans (No BS)”

#AI #PromptEngineering #ChatGPT