Good morning, it's Thursday. Today's a big one: launches from OpenAI, Google, and Deep Cogito. Plus, tools to fight deepfakes and tips to level up your image generation game.
If you're enjoying Forward Future, pass it on: forward this email to a colleague. It's one of the best ways you can support us.
MARKET PULSE
Microsoft Halts $1B Ohio AI Data Center for Strategic Review
Microsoft has paused development of a $1 billion data center campus in central Ohio, part of a broader strategic reassessment of its artificial intelligence infrastructure plans. The company confirmed that it is "slowing or pausing" early-stage construction at three sites in Licking County, just outside Columbus, reflecting efforts to realign investments with updated demand forecasts and business priorities. → Continue reading here.
YOUR DAILY ROLLUP
Top Stories of the Day
OpenAI Launches Real-World AI Benchmark Program
OpenAI's new "Pioneers Program" aims to replace outdated AI benchmarks with domain-specific evaluations for industries like healthcare, law, and finance. The goal is to assess models based on real-world utility rather than abstract or academic tasks. Selected startups will help design these tests and receive support for targeted model tuning and iterative improvement. However, critics may question the independence of benchmarks shaped by OpenAI's interests.
Deep Cogito Debuts Open Source AI Models
San Francisco-based startup Deep Cogito has launched its first open source LLMs, fine-tuned from Meta's Llama 3.2, and they're already outperforming rivals on key benchmarks. The models use a novel training method called iterated distillation and amplification (IDA) to self-improve, especially in reasoning tasks. Early results show top-tier performance across general knowledge and multilingual tests. Larger models and new capabilities are on the way.
YouTube Expands AI "Likeness" Detection Pilot
YouTube is broadening its AI likeness detection pilot to top creators like MrBeast and Marques Brownlee, aiming to curb the misuse of synthetic content that mimics faces or voices. Built on the platform's Content ID system, the tech flags AI-generated replicas and supports the bipartisan NO FAKES Act, which empowers individuals to request takedowns. YouTube says balancing innovation with protection is critical as AI reshapes media.
Google Unveils Ironwood, Its First Inference-Optimized AI Chip
At Cloud Next, Google introduced Ironwood, its most advanced TPU yet and the first tailored for AI inference tasks. Designed for speed and efficiency, Ironwood delivers up to 4,614 TFLOPs and features 192GB of RAM per chip with specialized cores for recommendation engines. It'll be offered in massive 256- and 9,216-chip clusters later this year via Google Cloud. The chip deepens Google's challenge to NVIDIA's AI hardware dominance.
Samsung's Ballie Robot Gets a Boost from Google Gemini
Samsung is integrating Google's Gemini AI into its long-awaited home robot Ballie, enabling users to interact with the device using multimodal prompts like voice and video. Ballie will offer outfit suggestions, health advice, and general knowledge answers using Gemini's advanced reasoning capabilities. The collaboration builds on existing Samsung-Google partnerships and positions Ballie as a personalized AI companion for everyday life.
AI ADOPTION
How GenAI Use Evolved in 2025
The Recap: Marc Zao-Sanders revisits his analysis of generative AI's real-world applications, drawing on a year's worth of user data and online forum insights. His 2025 report reveals a marked shift toward more emotionally resonant and purpose-driven uses of AI, from therapy and companionship to life organization and self-discovery. The findings suggest that AI is becoming more than a productivity tool; it's emerging as a personal confidante and existential guide.
The top use case in 2025 is therapy and companionship, reflecting a growing reliance on AI for emotional and psychological support, especially where human access is limited.
38 new use cases entered the top 100 list this year, indicating continued experimentation and rapid evolution in how people use GenAI.
"Organizing my life" and "Finding purpose" are now the #2 and #3 use cases, showing a surge in personal development and introspection via AI.
Major tech developments, like custom GPTs, voice interactions, and chain-of-thought reasoning, have expanded use cases and lowered adoption barriers.
Professional services are increasingly AI-assisted: EY uses over 150 AI agents for tax-related work, while Microsoft's Jared Spataro sees AI as an "invaluable thought partner."
Forward Future Takeaways:
This year's report underscores a surprising pivot: generative AI is becoming less about automation and more about augmentation of human emotion, purpose, and self-understanding. As people embed these tools into intimate areas of life (mental health, values, grief, aspiration), urgent questions arise about the ethics, privacy, and psychological impact of AI companionship. → Read the full article here.
FORWARD FUTURE ORIGINAL
How To Max GPT-4o Native Image Generation
At OpenAI, we have long believed image generation should be a primary capability of our language models. That's why we've built our most advanced image generator yet into GPT-4o. The result: image generation that is not only beautiful, but useful.
OpenAI
On March 25, 2025, OpenAI set another milestone in the history of artificial intelligence by integrating image generation into its flagship model GPT-4o. The new image generation feature is not just an update; it represents a fundamental shift in the way we interact with AI and generate images. But what does this integration mean for creatives, businesses, and everyday users? And most importantly, how can we realize the full potential of this new technology? → Continue reading here.
MISINFORMATION
AI Chatbots and Critical Thinking Show Promise in Battling Disinformation
The Recap: New research suggests generative AI models like ChatGPT may be surprisingly effective at reducing belief in conspiracy theories. Traditional debate often entrenches false beliefs, but AI's unemotional and informed responses can create space for persuasion. The article, published by The Economist, also explores complementary methods like prebunking, critical thinking education, and narrative techniques.
A September 2024 MIT study led by Thomas Costello found ChatGPT reduced belief in conspiracy theories by 20% after three conversation rounds, with 25% of participants fully disavowing their prior beliefs.
AI models are seen as more neutral and trustworthy than human debunkers, especially in politically polarized contexts.
Prebunking, or "attitudinal inoculation," dates back to the 1960s and remains effective in preventing disinformation from taking root.
A 2023 meta-analysis found inoculation strategies had "medium" to "large" effects in countering misinformation.
TikTok videos by medical experts became more persuasive when paired with fast-tempo music, which may help suppress the brain's counter-arguments.
Storytelling elements, such as characters, narratives, and rich detail, can also make anti-disinformation messages more compelling.
Critical-thinking education has shown effectiveness against pseudoscience and belief in aliens.
Forward Future Takeaways:
As generative AI becomes more integrated into public discourse, its potential as a tool for countering disinformation is increasingly evident, but not foolproof. The combination of AI-driven dialogue, prebunking strategies, and education in critical thinking could form a multipronged defense against misinformation. The key challenge ahead: how to deploy these tools at scale without letting the same tactics be co-opted by bad actors. → Read the full article here.
The "Act As" Role-Based Prompting
One of the fastest ways to boost AI output quality? Give it a role. When you ask AI to act as a specific expert, it draws from relevant patterns, language, and reasoning styles to deliver more targeted, useful responses. → Continue reading here.
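In practice, the role goes in the system turn, ahead of the task. A minimal sketch using the common system/user chat-message convention; the helper name `with_role` and the example role are illustrative, not from the article:

```python
def with_role(role: str, task: str) -> list:
    """Wrap a task in a role-based prompt using the common
    system/user chat-message format."""
    system = (
        f"Act as {role}. Use the vocabulary, standards, and "
        "reasoning style of that profession when answering."
    )
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": task},
    ]

# Example: frame the model as a domain expert before asking the question.
messages = with_role(
    "a senior security engineer",
    "Review this nginx config for common misconfigurations.",
)
```

The resulting `messages` list can be handed to most chat-completion APIs; the point is simply that naming the expert up front steers the style and depth of the answer.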
NEWS
What Else is Happening
TSMC Faces $1B US Fine: The Taiwanese chipmaker is under scrutiny for allegedly supplying a chip used in Huawei's AI processor.
Blade Runner 2049 AI Lawsuit: Elon Musk's Tesla remains entangled in a copyright battle, as a court rules the automaker can still be sued for copyright infringement.
RESEARCH PAPERS
MIT Study: Current Methods for Gauging LLMs' Cultural Alignment Are Built on Shaky Ground
A new study from MIT researchers argues that popular methods for evaluating the cultural alignment of large language models are fundamentally flawed. By testing assumptions around stability, extrapolability, and steerability, the team found that even slight changes in prompts or evaluation design can produce wildly inconsistent results, often more dramatic than real-world cultural differences.
The findings call into question benchmarks that claim to measure how well LLMs align with specific cultural perspectives, suggesting that much of what passes for "alignment" may just be noise. → Read the full paper here.
VIDEO
Chain of Thought Is Not What We Thought
Anthropic finds that AI models may fake their reasoning in chain-of-thought responses, hiding their true logic and concealing reward hacks even while exploiting them. CoT isn't always what it seems. Get the full scoop in Matt's latest video!
FEEDBACK
Help Us Get Better
What did you think of today's newsletter?
THE DAILY BYTE
How AI Is Spotting Tomorrow's Wildfires Before They Spark
That's a Wrap!
Love Forward Future? Spread the word & earn rewards! Share your unique referral link with friends and colleagues to unlock exclusive Forward Future perks! Get your link here.
Want to advertise with Forward Future? Reach 450K+ AI enthusiasts, tech leaders, and decision-makers. Let's talk: just reply to this email.
Want more Forward Future? Follow us on X for quick updates, subscribe to our YouTube for deep dives, or add us to your RSS Feed for seamless reading.
Thanks for reading today's newsletter; see you next time!