Good morning, it’s Monday. Hope you had a restful weekend. As always, we’re kicking off with stories you might’ve missed—like why Dan Hendrycks says rebooting the Manhattan Project for AI is like trying to defuse a bomb with a flamethrower (yikes), and how AlphaFold is quietly rewriting the future of medicine.
Plus, in the latest edition of I Will Teach You to AI, we’re getting creative with Midjourney—exploring how to create consistent, dynamic characters across different scenes.
📊 MARKET PULSE
xAI Acquires X in $45 Billion All-Stock Deal
Elon Musk’s artificial intelligence venture, xAI, has acquired the social media platform X (formerly known as Twitter) in an all-stock transaction valued at $45 billion. The deal values X at $33 billion ($45 billion less $12 billion in debt) and values xAI at $80 billion.
Musk emphasized the strategic importance of this merger, stating that xAI and X’s futures are “intertwined.” → Continue reading here.
🗞️ ICYMI RECAP
Top Stories You Might Have Missed
📅 AI May Shrink Workweeks, Say Tech Titans: Bill Gates, Elon Musk, and JPMorgan’s Jamie Dimon predict AI could lead to 2–3 day workweeks—assuming it doesn’t take your job first. More free time, longer lives, and fewer hours could define the AI-powered future of work.
💸 AI Boom Creates $71B in Billionaires: From coding tools to humanoid robots, 29 AI founders have skyrocketed to billionaire status in record time. Startups like OpenAI, Anthropic, and Figure are raking in funding—even without products or revenue in some cases.
💰 EU Commits $1.4B to AI and Cybersecurity: The European Commission will invest €1.3B from 2025–2027 to advance AI, bolster cybersecurity, and improve digital skills—part of a broader push to secure Europe’s tech sovereignty and reduce dependence on foreign tech.
📜 OpenAI Loosens Image Rules in ChatGPT: ChatGPT can now generate images of public figures, depict racial features on request, and even show hateful symbols in “educational” contexts—part of a shift toward fewer refusals and more user control amid rising political and regulatory pressure.
🧳 Google’s AI Now Plans Trips From Screenshots: New updates let Google Maps scan your travel screenshots and pin locations automatically, while Search’s AI builds full itineraries—flights, hotels, and all—on the fly. It's travel planning without the spreadsheet chaos.
⚠️ US Robot Firms Urge National Strategy: Tesla, Boston Dynamics, and others warn the U.S. risks losing both the robotics and AI races to China without a federal strategy, funding, and leadership—calling for tax incentives, research backing, and a dedicated robotics office to stay competitive.
🤨 Columbia Student Builds Viral AI Cheat Tool: Roy Lee was suspended after using his AI app to ace big tech interviews—and bragging about it online. Despite backlash, his tool now makes $170K/month, fueling debate over AI’s role in hiring and the fairness of coding assessments.
📉 NVIDIA Stock Slips on Inflation, China Fears: Shares fell 1.2% to $110.10 amid inflation concerns, CoreWeave’s lukewarm IPO, and risks from U.S. export curbs that could impact 10% of NVIDIA’s China-linked revenue—though analysts still call it a buy with a $200 target.
🧑‍🏫 FORWARD FUTURE PRO
Artificial intelligence has made enormous progress in image generation in recent years. Tools like Midjourney now make it possible for people without in-depth graphic design knowledge to create stunning visual content.
Today, I'm going to walk you through my process for using Midjourney to take an existing character (in our case, our Forward Future astronaut, Astro) and place them in different poses and scenarios. You'll learn how to use precise prompt formulations to change a character's posture, mood, and dynamics while maintaining the basic character style. This technique is especially valuable for storytellers, content creators, and anyone who needs consistent character portrayals across different scenes. → Continue Reading Here.
🗺️ GEOPOLITICS
Dan Hendrycks Warns Against a U.S. Manhattan Project for AI
The Recap:
AI safety expert Dan Hendrycks argues that modeling a U.S. effort to develop superintelligent AI after the original Manhattan Project is both unrealistic and dangerous. In a guest essay for The Economist, he outlines why the conditions that enabled the atomic bomb’s creation no longer apply and warns such a project would provoke geopolitical escalation. Hendrycks, executive director of the Center for AI Safety, advocates for strategic deterrence over a reckless race toward uncontrollable AI systems.
The original Manhattan Project relied on extreme secrecy, which is no longer feasible due to modern surveillance, cybersecurity vulnerabilities, and public research communities.
Hendrycks argues a classified AI program would exclude critical talent, including many foreign-born researchers, weakening U.S. capabilities.
Superintelligent AI, if developed, could destabilize nuclear deterrence by enabling new forms of warfare, such as locating nuclear submarines or building anti-ballistic systems.
The fastest path to superintelligence—recursive AI self-improvement—is likely uncontrollable and could compress decades of progress into a short, chaotic burst.
“There is not a good track record of less intelligent things controlling things of greater intelligence,” warns AI pioneer Geoffrey Hinton, quoted in the piece.
A U.S. superintelligence project would trigger aggressive countermeasures from rivals like China and Russia, who would see it as a threat to national survival.
Instead, Hendrycks, along with Eric Schmidt and Alexandr Wang, recommends deterrence strategies including cyber-disruption, intelligence gathering, and supply-chain resilience.
Forward Future Takeaways:
Hendrycks’ essay is a sharp reminder that the path to advanced AI must account for modern geopolitical and technical realities, not historical analogies. His argument reframes AI strategy not as a race to be won, but as a domain to be stabilized—through deterrence, infrastructure resilience, and control. As AI edges closer to transformative capabilities, should national ambition give way to global coordination? → Read the full article here.
🧬 MEDICINE
AlphaFold and the AI-Driven Revolution in Drug Discovery
The Recap: Samuel Hume argues that AlphaFold, the protein-structure-predicting AI from Google DeepMind, could usher in a new golden age of medicine. Since its 2020 debut, AlphaFold has transformed biological research by replacing years-long lab work with near-instant structural predictions — enabling discoveries in cancer, genetic disease, and drug design. Hume, a PhD researcher in oncology at Oxford, positions AlphaFold as the centerpiece of a new, AI-led generation of medical innovation.
AlphaFold uses AI to predict the 3D structure of proteins from amino acid sequences, replacing traditional methods like X-ray crystallography that can take years.
Since its 2020 launch, AlphaFold has produced a database of 250 million protein structures, accessed by nearly 2 million users across 190 countries (a short code sketch for pulling one of those predictions yourself follows this list).
The technology has been credited with helping resolve the structure of the nuclear pore complex, a decades-old biological mystery relevant to cancer and aging.
In one study, researchers used AlphaFold to test 1.6 billion molecules against the serotonin receptor, yielding potential new treatments for mood disorders.
AlphaFold contributed to the discovery of a new liver cancer drug (tested in lab models) and to designing a protein-based “molecular syringe” for drug delivery.
DeepMind’s AlphaProteo system builds on AlphaFold’s predictions to design novel protein binders aimed at Covid-19, cancer, and autoimmune diseases.
DeepMind has also released AlphaMissense, which predicts whether missense mutations (single amino-acid changes) are likely to be disease-causing, helping diagnose rare genetic diseases.
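If you want to poke at one of those 250 million predictions yourself, the AlphaFold Protein Structure Database (alphafold.ebi.ac.uk) exposes a small public REST API. The snippet below is a minimal Python sketch, not taken from Hume’s article: it assumes the documented /api/prediction/{UniProt accession} endpoint and its pdbUrl field are still served in that form, and it uses human hemoglobin subunit alpha (UniProt P69905) purely as an example.

```python
# Minimal sketch: download one AlphaFold-predicted structure from the public
# AlphaFold Protein Structure Database. Assumes the documented REST endpoint
# /api/prediction/{accession} and its "pdbUrl" JSON field behave as described.
import requests


def fetch_predicted_structure(accession: str) -> str:
    """Fetch the predicted PDB file for a UniProt accession; return the saved filename."""
    api_url = f"https://alphafold.ebi.ac.uk/api/prediction/{accession}"
    response = requests.get(api_url, timeout=30)
    response.raise_for_status()
    entries = response.json()        # the endpoint returns a list of prediction entries
    pdb_url = entries[0]["pdbUrl"]   # link to the predicted structure in PDB format
    pdb_text = requests.get(pdb_url, timeout=30).text
    filename = f"AF_{accession}.pdb"
    with open(filename, "w") as handle:
        handle.write(pdb_text)
    return filename


if __name__ == "__main__":
    # P69905 (human hemoglobin subunit alpha) is used here only as an illustrative example.
    print("Saved:", fetch_predicted_structure("P69905"))
```

The saved .pdb file can then be opened in any molecular viewer (PyMOL, ChimeraX, or a browser-based viewer) to inspect the predicted fold.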
Forward Future Takeaways:
AlphaFold represents a leap in how we understand and manipulate biological systems, collapsing timelines and democratizing access to molecular insights. Its ripple effects — from precision diagnostics to targeted drug design — could reshape both pharmaceutical R&D and our approach to disease itself. The open question now: can AI-designed drugs deliver in human trials, or will biology prove tougher to disrupt than data? → Read the full article here.
🤠 THE DAILY BYTE
AI Uncovers Cosmic Bubbles: Deep Learning Pops Clues About Star Formation
Japanese researchers have developed a deep learning model that utilizes AI image recognition to detect previously unidentified bubble-like structures in our galaxy. By analyzing data from the Spitzer and James Webb Space Telescopes, the model efficiently identifies these “Spitzer bubbles,” which are formed during the birth and activity of high-mass stars. Additionally, the AI detected shell-like structures believed to result from supernova explosions. This advancement not only enhances our understanding of star formation but also sheds light on the dynamic events shaping galaxy evolution. 
That’s a Wrap!
❤️ Love Forward Future? Spread the word & earn rewards! Share your unique referral link with friends and colleagues to unlock exclusive Forward Future perks! 👉 Get your link here.
📢 Want to advertise with Forward Future? Reach 450K+ AI enthusiasts, tech leaders, and decision-makers. Let’s talk—just reply to this email.
📥 Got a hot tip or burning question? Drop us a note! The best reader insights, questions, and scoops may be featured in future editions. Submit here.
🛰️ Want more Forward Future? Follow us on X for quick updates, subscribe to our YouTube for deep dives, or add us to your RSS Feed for seamless reading.
Thanks for reading today’s newsletter—see you next time!
🧑‍🚀 🧑‍🚀 🧑‍🚀 🧑‍🚀