Mastering AI Prompts | Part 2: Examples of Good Prompting With LLMs
Unlocking the Power of AI with Better Prompts
When applying prompting techniques to image models such as Midjourney, DALL-E, or Ideogram, some methods, such as "clarity" or "specificity", carry over directly. Other categories, such as "lighting" and "composition", are, as expected, image-specific. In short, many prompting patterns from LLMs transfer to image models, and the areas where image prompting works differently are the exception. The following table shows these special features of image prompting.
Topic | Example | Explanation |
--- | --- | --- |
Clarity | "Create an image of a futuristic city." | A clear description avoids confusion and ensures the model understands the basic concept. |
Specificity | "Generate an image of a futuristic city with flying cars, skyscrapers, and vibrant neon lights." | Adding more specific details provides a clearer vision and reduces ambiguity in the output. |
Style and Mood | "Create a dark, mysterious forest with a surreal and dreamlike atmosphere." | Specifying the desired mood or style helps set the emotional tone of the image. |
Context and Setting | "Depict a medieval castle surrounded by mountains at sunset." | Providing a clear context helps the model understand the environment and background elements. |
Composition | "Compose an image of a lone tree in the center, with rolling hills in the background." | Including details about the arrangement of objects guides the model in creating a balanced image. |
Lighting | "Illustrate a portrait under soft, natural light coming from a window on the right." | Specifying lighting conditions ensures the image conveys the intended visual atmosphere. |
Color Palette | "Generate an image dominated by warm tones, with shades of orange, red, and yellow." | Defining the color palette helps maintain consistency in the aesthetic and visual impact. |
Artistic Medium | "Create an image resembling a watercolor painting of a serene landscape." | Mentioning the desired artistic medium guides the model to mimic specific styles or techniques. |
Era/Time Period | "Depict a Victorian-era street scene with horse-drawn carriages and period clothing." | Referencing a time period gives the model historical context for creating era-appropriate elements. |
Perspective | "Generate an image from a bird's-eye view of a bustling city center." | Describing the desired perspective determines the vantage point of the scene or subject. |
Detail Level | "Create a highly detailed image of a mechanical watch, showing its inner workings." | Specifying the level of detail ensures the image meets the desired complexity and focus. |
Use of Keywords | "Include keywords like 'hyper-realistic,' 'fantasy,' and 'steampunk' to refine the output." | Using relevant keywords helps refine the image and ensures the output aligns with expectations. |
As you can imagine, image prompting is more about presenting the idea visually. It is best to visualize the image in advance, choose the perspective from which you see that image, and include as many details as possible. You then translate this mental picture into a prompt for the image model.
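To make the table concrete, here is a minimal Python sketch of how such a prompt could be assembled from the categories above before it is sent to an image model. The field names and the `build_image_prompt` helper are illustrative assumptions, not part of any model's API.

```python
# A minimal sketch (not tied to any specific image model's API): assemble a
# single prompt string from the categories in the table above.
# The field names below are illustrative, not a standard.

IMAGE_PROMPT_FIELDS = [
    "subject",        # Clarity / Specificity
    "style_mood",     # Style and Mood
    "setting",        # Context and Setting
    "composition",    # Composition
    "lighting",       # Lighting
    "color_palette",  # Color Palette
    "medium",         # Artistic Medium
    "era",            # Era / Time Period
    "perspective",    # Perspective
    "detail_level",   # Detail Level
    "keywords",       # Use of Keywords
]


def build_image_prompt(spec: dict) -> str:
    """Join the filled-in categories into one comma-separated prompt string."""
    parts = [spec[field] for field in IMAGE_PROMPT_FIELDS if spec.get(field)]
    return ", ".join(parts)


spec = {
    "subject": "a futuristic city with flying cars and neon-lit skyscrapers",
    "style_mood": "surreal, dreamlike atmosphere",
    "lighting": "soft golden-hour light",
    "color_palette": "warm tones of orange, red and yellow",
    "perspective": "bird's-eye view",
    "keywords": "hyper-realistic, cinematic",
}

print(build_image_prompt(spec))
# -> a futuristic city with flying cars and neon-lit skyscrapers, surreal,
#    dreamlike atmosphere, soft golden-hour light, warm tones of orange, red
#    and yellow, bird's-eye view, hyper-realistic, cinematic
```

The helper itself is trivial; the point is the habit it encodes: filling in each category deliberately instead of relying on a single one-line description.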
Not All Models Are Equally Suitable for Similar Prompts
Once you've stuffed the model with as much context as possible, focus on explaining what you want the output to be. With most models, we've been trained to tell the model how we want it to answer us, e.g. "You are an expert software engineer. Think slowly + carefully." This is the opposite of how I've found success with o1. I don't instruct it on the how, only the what. Then let o1 take over, plan, and resolve its own steps. This is what the autonomous reasoning is for, and it can actually be much faster than if you were to manually review and chat as the "human in the loop".
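To illustrate the contrast this quote describes, here is a hedged sketch using the OpenAI Python client (v1.x). The model name "o1", the example prompts, and the interval-merging task are assumptions chosen for illustration, not the author's exact setup.

```python
# A sketch of "tell the model the what, not the how", assuming the OpenAI
# Python client (v1.x) and access to a reasoning model named "o1".
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Typical "how" prompt for a non-reasoning model: a role plus step-by-step
# instructions on how to answer.
gpt4o_style_prompt = (
    "You are an expert software engineer. Think slowly and carefully. "
    "First list the edge cases, then write the function, then explain it.\n\n"
    "Task: write a Python function that merges overlapping intervals."
)

# "What" prompt for a reasoning model: maximum context plus a precise
# definition of the desired output, with no instructions on how to reason.
o1_style_prompt = (
    "Context: our scheduling service stores bookings as (start, end) tuples of "
    "Unix timestamps; overlapping or adjacent bookings must be collapsed "
    "before billing.\n\n"
    "Deliverable: a single Python function merge_intervals(intervals) that "
    "returns the collapsed list, plus unit tests covering empty input, "
    "adjacent intervals, and fully nested intervals."
)

response = client.chat.completions.create(
    model="o1",
    messages=[{"role": "user", "content": o1_style_prompt}],
)
print(response.choices[0].message.content)
```

The second prompt front-loads the context and spells out the deliverable, then leaves the planning and intermediate steps entirely to the model.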
Although reasoning models such as o1 are currently receiving a lot of attention and deliver excellent results, they are not necessarily better suited to every task. There is no doubt that o1, using the Chain of Thought method, performs significantly better in math, coding, and other domains where the correctness of the results can be clearly verified, while it falls short of regular LLMs such as GPT-4o in creative tasks. In the following, I would therefore like to take a closer look at reasoning models and emphasize their special features, including how they should be prompted.
The straightforward, intuitive answer to the first question is that inference scaling is useful for problems that have clear correct answers, such as coding or mathematical problem solving. In such tasks, at least one of two related things tends to be true. First, symbolic reasoning can improve accuracy. This is something LLMs are bad at due to their statistical nature, but they can overcome it by using output tokens for reasoning, much like a person using pen and paper to work through a math problem. Second, it is easier to verify correct solutions than to generate them (sometimes aided by external verifiers, such as unit tests for coding or proof checkers for mathematical theorem proving).
In contrast, for tasks such as writing or language translation, it is hard to see how inference scaling can make a big difference, especially if the limitations are due to the training data. For example, if a model works poorly in translating to a low-resource language because it isnât aware of idiomatic phrases in that language, the model canât reason its way out of this. (...) Tasks where o1 doesnât seem to lead to an improvement include writing, certain cybersecurity tasks (which we explain below), avoiding toxicity, and an interesting set of tasks at which thinking is known to make humans worse.
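The "easier to verify than to generate" point from this quote can be sketched in a few lines: sample several candidate solutions and keep the first one that an external verifier, here a tiny unit test, accepts. In this sketch, `generate_candidates` is a hypothetical stand-in for sampling a model several times and simply returns hard-coded strings.

```python
# A minimal sketch of generate-then-verify: produce several candidates and
# keep the first one that passes a unit test. generate_candidates() is a
# hypothetical stand-in for sampling an LLM several times.

def generate_candidates(task: str, n: int = 3) -> list[str]:
    """Hypothetical sampler: in practice, call an LLM n times with the task."""
    return [
        "def add(a, b): return a - b",   # wrong candidate
        "def add(a, b): return a + b",   # correct candidate
        "def add(a, b): return 2 * a",   # wrong candidate
    ][:n]


def passes_tests(candidate_source: str) -> bool:
    """External verifier: run the candidate against a small unit test."""
    namespace: dict = {}
    try:
        exec(candidate_source, namespace)  # define the candidate function
        return namespace["add"](2, 3) == 5 and namespace["add"](-1, 1) == 0
    except Exception:
        return False


def best_of_n(task: str, n: int = 3) -> str | None:
    """Return the first sampled solution that the verifier accepts."""
    for candidate in generate_candidates(task, n):
        if passes_tests(candidate):
            return candidate
    return None


print(best_of_n("write add(a, b)"))
# -> def add(a, b): return a + b
```

This is the asymmetry the quote describes: in verifiable domains, a cheap check filters out wrong samples, whereas for writing or translation there is no comparable verifier.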
Together with other scientists, Benedikt Stroebl has therefore created a website on which you can check, in tabular form, in which areas reasoning models (here o1) really are better than GPT-4o. And, as the quote above suggests, it shows that the reasoning model performs worse than GPT-4o, especially in creative areas.
Exemplary excerpt from the website: https://benediktstroebl.github.io/reasoning-model-evals/
Ultimately, this means that, in addition to the right prompting, a lot depends on the type of model itself. Even if the trend is towards ever better reasoners, they improve only on a domain-specific basis. Within their domain, however, good and correct prompting is again what achieves the better results:
The correct use of reasoning models therefore requires a very differentiated and precise definition of objectives, instructions, and context. Ben Hylak has created a good chart on X:
from @benhylak
Reasoning models, such as o1, are therefore ideal for complex data analyses, creating hypotheses and checking logical connections in science. They can efficiently structure large amounts of data, check lines of reasoning for weaknesses or bring together different scientific perspectives. They are particularly useful in interdisciplinary fields, where they can help build bridges between subject areas by recognizing patterns or analogies that a single scientist might miss.
Mastering the art of prompting is essential for unlocking the full potential of modern AI systems, whether for language models like ChatGPT or image generators like Midjourney. Clear, specific, and context-rich prompts transform AI from a generic tool into a precision instrument capable of delivering targeted, high-quality results. As demonstrated throughout this discussion, the quality of AI outputs hinges on the quality of the input prompts. Poorly formulated prompts lead to vague or irrelevant results, while well-crafted prompts provide clarity, purpose, and refinement.
In science, reasoning models excel at tasks requiring logical rigor, data analysis, and hypothesis validation. They highlight the nuanced relationship between prompt design and model performance, showing that success depends not just on the prompt itself, but also on understanding the strengths and limitations of the chosen model. The right approach to prompting, combined with iterative refinement, enables users to bridge gaps between disciplines, improve efficiency, and drive innovation.
In conclusion, effective prompting is both an art and a science, requiring creativity, precision, and a deep understanding of the modelâs capabilities. Those who master this skill will not only enhance their interaction with AI but also gain a strategic advantage in leveraging its transformative power.
Get more content from Kim Isenberg: subscribe to FF Daily for free!
Kim Isenberg: Kim studied sociology and law at a university in Germany and has been impressed by technology in general for many years. Since the breakthrough of OpenAI's ChatGPT, Kim has been trying to scientifically examine the influence of artificial intelligence on our society.