🏫 Mastering AI Prompts: How To Get Better Results From ChatGPT and Midjourney
Unlocking the Art of Prompting: Elevating Your AI Conversations.
Artificial intelligence is developing rapidly and has long been more than just a toy for technology enthusiasts. From creative design with Midjourney to complex text analysis with ChatGPT, Claude, and Gemini, the application possibilities seem almost limitless. But one thing is often underestimated: the success of these tools stands and falls with the quality of the prompts. Prompting is the art of steering these machines so that they deliver relevant and precise results, and the person who practices it is called a prompt engineer.
Why is prompting so important? Because language models like ChatGPT or image generators like Midjourney are not independent thinkers. They work purely probabilistically, analyzing millions of data points and generating results based on probabilities. Without clear, well-formulated instructions, the potential of these technologies remains untapped, or worse, the results are unusable, misleading, or simply disappointing.
This is the challenge: a weak or unspecific task such as “Create a picture of a beautiful sunset” often leads to general and unimpressive results. A more precise prompt such as “Draw a sunset by the sea with warm colors and the silhouette of a lighthouse” improves the quality considerably. The same applies to ChatGPT: while a vague “Explain climate change” produces a superficial answer, a detailed prompt such as “Summarize the main causes of climate change in three paragraphs and compare them with the solutions proposed at the last climate conference” provides deeper insights.
Prompting is therefore not only the language we use to communicate with machines, but also the key to exploiting their full potential. For this very reason, in this article we want to shed light on how to formulate precise prompts, what pitfalls to avoid and how to achieve better results through targeted techniques. We will look at practical examples of both good and bad prompting and work out why this is essential in the age of generative AI.
Because one thing is clear: the better the prompting, the more intelligent the AI appears. But behind every good AI result is ultimately still a human being who asks the right questions.
Why Prompting Is So Important
“But first, what are prompts? In the context of language models, a prompt is a chain of words, characters, and tokens that tells a language model what part of its enormous brain should be tapped to generate tokens, characters, and then words. Different segments of a language model's 'brain' are tuned for various functions. Some specialize in mimicking distinct writing styles, while others store vast knowledge about specific subjects.”
Good prompting is the basis for the performance of modern AI models such as ChatGPT and Midjourney. These tools are trained to generate content from large amounts of data, but without precise specifications, they lack focus. Prompting is not just a technical detail, but an essential interface between human and machine. Poor or unspecific prompts mean the AI still responds, but often misses the user's actual needs. This can make the difference between success and frustration.
A central problem is that AI models are context-sensitive, i.e. they are heavily dependent on the quality and clarity of the input. If queries are formulated too vaguely, the AI falls back on generic information. For example, a simple query such as “Create a picture of a forest” can deliver a fuzzy, standardized result. A more detailed query such as “Draw a dense coniferous forest in winter with snow-covered trees and a clear sky at sunset”, on the other hand, delivers a more precise, visually appealing result.
Another problem is that AI models are often unable to recognize the purpose or intent behind a vague query. This leads to misunderstandings or the generation of irrelevant or incomplete answers. For example, if you ask ChatGPT: “Explain quantum physics”, you will receive a rather superficial answer. However, if you specify: “Explain quantum physics in simple terms for children aged 10”, the answer will be more comprehensible and tailored to the target group. This shows how important it is to define the desired level of detail, the target group and the desired format (e.g. bullet points, tables or continuous text) in the prompt.
Good prompting also saves time. Precise prompting reduces the number of queries or corrections, as the AI delivers better results on the first attempt. This is particularly crucial for complex tasks such as the creation of business reports, technical documentation or creative projects. Effective prompts not only avoid frustration, but also allow the AI to perform to its full potential.
However, successful prompting requires practice and an understanding of how the models work. Here are some initial ground rules:
- Clarity: The query should be clear and unambiguous.
- Specificity: The more precise the description, the more precise the result.
- Context: The AI should be provided with sufficient background information.
- Iterative approach: Results should be improved by refining the prompts where necessary.
Good prompting is not just a technical skill, but a way of thinking that aims to ask the right questions, formulate them precisely and work towards results in a targeted manner. Those who master this have a clear advantage in both creative and analytical tasks.
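To make these ground rules tangible, here is a minimal sketch, assuming access to the OpenAI Python SDK; the model name and the exact prompt wording are illustrative choices, not part of the original article. It simply sends the vague and the precise climate-change prompt from above and prints the beginning of each answer, so the difference in quality can be compared side by side.

```python
# Minimal sketch (illustrative only): contrasting a vague and a precise prompt.
# The model name "gpt-4o-mini" is an assumption; any chat model can be used.
from openai import OpenAI

client = OpenAI()  # reads the API key from the OPENAI_API_KEY environment variable

vague_prompt = "Explain climate change."

precise_prompt = (
    "Summarize the main causes of climate change in three paragraphs "
    "and compare them with the solutions proposed at the last climate conference. "
    "Write for readers with no scientific background."
)

for label, prompt in [("vague", vague_prompt), ("precise", precise_prompt)]:
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    print(f"--- {label} prompt ---")
    print(response.choices[0].message.content[:300], "...")
```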
In the following, however, we want to go beyond these four basic rules and use a table we have created ourselves to illustrate the key aspects of good prompting using examples.
Examples of Good Prompting With LLMs
“At its core, talking to a model is very similar to talking to a person. Clear instructions and providing all relevant information are crucial to achieving the desired results.”
Using the following table, I have worked out some basic rules that can be used to achieve much better results when working with LLMs. On the left is the category that improves prompting, in the middle is an example prompt, and on the right is an explanation of why it works.
| Topic | Example | Explanation |
| --- | --- | --- |
| Clarity | “Explain the difference between stocks and bonds in three short paragraphs.” | A clear and unambiguous task ensures the language model knows exactly what to deliver. |
| Context | “I’m writing a beginner’s financial guide. Please explain the role of interest rates, referencing current rates in Europe.” | Providing context (e.g., beginners, current rates) allows the model to tailor its response to the purpose of the text. |
| Specificity | “Provide a step-by-step guide for applying to a bachelor’s program in Germany, including all required documents.” | The more detailed the prompt, the more specific the response. Specificity prevents superficial or overly general results. |
| Style and Tone | “Draft an email to a potential business partner in a professional but friendly tone. Topic: collaboration on an environmental project.” | The AI can adapt its style, tone, and level of formality when it knows how the text should sound. |
| Target Audience Definition | “Explain the basics of quantum physics to a 12-year-old student.” | Defining the target audience (e.g., a 12-year-old) helps the model adjust the language and presentation to ensure clarity. |
| Formatting | “Summarize the key points in a numbered list and provide a brief conclusion.” | Specifying the desired format (list, table, plain text) ensures the output matches your requirements. |
| Iteration | “The previous draft was too shallow. Please add more technical details and include a cost calculation.” | Iterative fine-tuning allows refining the output by addressing ambiguities or requesting additional details to match expectations. |
| Constraints (Setting Limits) | “Summarize this text in no more than 100 words and avoid technical jargon.” | Setting clear limits (e.g., word count, language style) ensures the output meets specific requirements without being too lengthy or complex. |
| Perspective / Role | “Act as if you are an experienced tax advisor. Explain how to file a tax return for freelancers correctly.” | Assigning the model a specific role provides more focused and context-sensitive answers based on that perspective. |
| Comparison / Contrast | “Compare Bitcoin and Ethereum in terms of technology, use cases, and scalability.” | Prompts that request comparisons help the model analyze different aspects and provide a nuanced response. |
| Reference to Sources | “Summarize the article at https://example.com and provide key figures from the text.” | Referring to external sources enables the model to summarize and highlight essential data and facts. |
| Specific Question | “What factors lead to inflation, and how can a central bank counteract it?” | Well-defined questions help the model provide a more focused and in-depth answer, avoiding overly broad topics. |
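Several of these categories can be combined in a single request. The following sketch, again assuming the OpenAI Python SDK and an illustrative model name, assigns a role via the system message and adds audience, formatting, and length constraints in the user message, mirroring the “Perspective / Role”, “Formatting”, and “Constraints” rows above.

```python
# Minimal sketch combining role, formatting, and constraint categories.
# Model name and prompt wording are assumptions for illustration only.
from openai import OpenAI

client = OpenAI()

messages = [
    {
        "role": "system",
        "content": "Act as an experienced tax advisor who explains things to freelancers without jargon.",
    },
    {
        "role": "user",
        "content": (
            "Explain how to file a tax return as a freelancer in Germany. "
            "Answer as a numbered list with at most five steps, "
            "then add a two-sentence conclusion. Stay under 200 words."
        ),
    },
]

response = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
print(response.choices[0].message.content)
```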
In addition to these categories, an iterative approach can lead to a significant improvement in LLM output. Feeding the output back into the model together with additional information about what it is still missing helps the model get closer to the desired goal.
“Iteration and testing also play an important role in refining prompts. By repeatedly inputting, checking the output and adjusting the prompt, performance can be optimized step by step. The willingness to 'do it again and again' is what characterizes the best prompt engineers, the team notes. Further advice from the Anthropic experts: Give the model sufficient context and background information instead of simplifying things, as modern models can process complex information. Instructions should also be provided for edge cases and unexpected inputs. However, when developing prompts, it is important not to focus too much on edge cases, but to reliably cover the base cases first. If you want the model to learn a particular task, you can try to give it the relevant papers or instructions, rather than trying to cram everything into the prompt.”
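A minimal sketch of this iterative loop, under the same assumptions as the earlier examples (OpenAI Python SDK, illustrative model name and prompts): the first answer is appended to the conversation and then refined with a follow-up instruction describing what was missing.

```python
# Minimal sketch of iterative refinement: the draft is fed back into the
# conversation together with a note on what it still lacks.
# Model name and prompts are illustrative assumptions.
from openai import OpenAI

client = OpenAI()
model = "gpt-4o-mini"  # assumed model name

messages = [
    {"role": "user", "content": "Draft a short business report on our Q3 marketing results."}
]

first = client.chat.completions.create(model=model, messages=messages)
draft = first.choices[0].message.content

# Append the draft and describe what is still missing, then ask again.
messages.append({"role": "assistant", "content": draft})
messages.append({
    "role": "user",
    "content": (
        "The previous draft was too shallow. Add more technical detail "
        "and include a rough cost calculation as a table."
    ),
})

second = client.chat.completions.create(model=model, messages=messages)
print(second.choices[0].message.content)
```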
In part 2, we will look at effective prompting for image models and the limitations of reasoning models. Although reasoners are widely seen as increasingly important, they remain primarily domain-specific, and in the second part we will examine why that is.
—
Get more content from Kim Isenberg—subscribe to FF Daily for free!
Kim Isenberg studied sociology and law at a university in Germany and has been impressed by technology in general for many years. Since the breakthrough of OpenAI's ChatGPT, Kim has been trying to scientifically examine the influence of artificial intelligence on our society.