Good morning, it's Monday. The weekend's over, the coffee's brewing (or steeping, if you're one of the cool kids), and it's time for your friendly roundup of weekend stories.
Meanwhile, in our latest Forward Future Original series, we're exploring how test-time compute is giving AI a bit of a brain boost: think dynamic reasoning and real-time adaptability, all in one.
ICYMI RECAP
Top Stories You Might Have Missed
Perplexity Proposes Merger with TikTok U.S.
Amid national security concerns that once threatened a U.S. ban, Perplexity AI submitted a proposal to merge with TikTok's U.S. operations. The bid seeks to create a new entity combining both companies, allowing ByteDance's existing investors to retain their stakes. The move came as President-elect Donald Trump considered granting TikTok a 90-day extension to finalize a divestiture plan. (As of Sunday, the ban has been postponed.) The proposed merger highlights ongoing efforts to address data privacy concerns while preserving TikTok's access to its 170 million American users.
Apple Pauses Faulty AI News Summaries
Apple has temporarily shelved its AI news summaries after backlash over inaccuracies, including a false BBC alert about a shooting suspect. Critics, including media groups, warned of misinformation risks and trust erosion. Apple plans improvements while marking AI-generated summaries in other apps. The incident underscores the broader issue of AI hallucinations, where even industry leaders face challenges in ensuring accuracy.
Microsoft's 2025 Strategy: All-In on AI
Microsoft is doubling down on AI with three moves: launching CoreAI, a new engineering group led by ex-Meta exec Jay Parikh, to drive cutting-edge solutions; introducing pay-as-you-go Copilot Chat agents for businesses; and integrating premium AI features into Microsoft 365 subscriptions alongside price hikes. Despite internal concerns, Microsoft's vision to automate human tasks into scalable AI tools positions it as a leader in the next tech wave.
Mira Murati's AI Startup Attracts Top Talent
Former OpenAI CTO Mira Murati has recruited Jonathan Lachman, OpenAI's ex-head of special projects, for her stealth AI startup focused on AGI. With 10+ hires from AI giants like Google DeepMind, the still-unnamed venture reflects growing competition among ex-OpenAI leaders launching ambitious projects. Murati's bold move follows her dramatic exit amid OpenAI's leadership shakeup, signaling her intent to shape the next chapter in AI innovation.
Shield AI Hits $5 Billion Valuation After $200M Raise
Shield AI, specializing in autonomous aircraft software, secured $200 million, doubling its valuation to $5 billion. Backed by Palantir, Airbus, and Andreessen Horowitz, its "Hivemind" tech enables GPS-free, autonomous drones. As U.S. defense budgets surge amid global tensions, Shield AI exemplifies Silicon Valley's growing role in military innovation, challenging traditional defense contractors for a slice of the $850 billion Pentagon budget.
Nord Security Founders Launch Nexos.ai for Scalable AI
Tomas Okmanas and Eimantas Sabaliauskas, founders of Nord Security, introduce Nexos.ai to help enterprises transition AI projects from pilot to production. Offering tools to manage costs, security, and scalability, it supports 200+ AI models from providers like OpenAI. Backed by $8M from Index Ventures, Nexos.ai addresses enterprise AI hesitancy with compliance-focused features and aims to redefine scalable AI infrastructure ahead of its March launch.
Top 5 AI Prompting Mistakes and Fixes
Generative AI tools shine when prompted effectively, but common mistakes can derail results. Ambiguous prompts and unspecified formats often lead to irrelevant answers. Reusing sessions without resetting creates confusion, while skipping iteration misses opportunities for refinement. Unrealistic expectations about AI's capabilities also fuel frustration. By focusing on precise and adaptable prompts, users can unlock the full potential of tools like ChatGPT, Copilot, and Gemini.
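The fixes above boil down to stating the task, context, output format, and constraints explicitly instead of leaving them implicit. A minimal sketch of that habit in Python; the helper and its field names are illustrative assumptions, not any tool's API:

```python
# Turn a vague request into a structured prompt by spelling out the task,
# context, expected format, and constraints. The helper is a sketch, not
# part of any specific AI tool's interface.

def build_prompt(task: str, context: str, output_format: str, constraints: list[str]) -> str:
    """Assemble a prompt that states the task, audience, format, and limits explicitly."""
    lines = [
        f"Task: {task}",
        f"Context: {context}",
        f"Output format: {output_format}",
        "Constraints:",
    ]
    lines += [f"- {c}" for c in constraints]
    return "\n".join(lines)

# Vague: "Summarize this report." Specific:
specific = build_prompt(
    task="Summarize the attached quarterly sales report",
    context="Audience: executives with two minutes to read",
    output_format="Three bullet points, each under 20 words",
    constraints=["Cite figures from the report only", "No speculation"],
)
print(specific)
```

The same structure also makes iteration easier: to refine a result, change one field and re-send, rather than rewriting the whole prompt from scratch.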
POWERED BY SAMBANOVA
Speed matters, but GPUs aren't set up to deploy AI workloads quickly. That's where AI accelerators come in. On SambaNova Cloud, you can access open-source powerhouses like Llama 3.3 70B at a breakneck inference speed of 400 tokens/second, getting you anywhere you want to go.
FORWARD FUTURE ORIGINAL
What Is Test-Time Compute, and Why Does It Matter?
"Enabling LLMs to improve their outputs by using more test-time computation is a critical step towards building generally self-improving agents that can operate on open-ended natural language."
Google, arXiv paper
In recent years, the rapid development of artificial intelligence has produced increasingly powerful models that are used in a wide range of applications, from speech processing and image recognition to autonomous driving. The success of many systems rests on the immense computing power required to train such models. Until now, the focus has been on maximizing efficiency and accuracy during the training phase. However, pre-training in particular, i.e. scaling with ever more data, is beginning to come to an end, as even greats like Ilya Sutskever say ("But pre-training as we know it will unquestionably end").
The reason for this is that the internet only offers a finite amount of high-quality data for training AI models. Sutskever compares data to fossil fuels that will eventually run out. He talks about "peak data", the point at which the amount of available data stops growing. We are also faced with the problem that although computing power continues to increase, adding more data no longer leads to proportional increases in the performance of AI models. This means that simply increasing the amount of data no longer brings significant improvements.
However, a new methodology is increasingly coming to the fore: test-time compute (TTC). Test-time compute refers to the process of using additional computing resources during the inference phase, the phase in which a trained model is applied to new data, in order to achieve better results. Traditionally, the training and inference phases have been strictly separated: while training is typically performed on high-performance computers with expensive GPUs or TPUs, inference is often performed on devices with limited computing power such as smartphones or embedded systems. → Continue reading here.
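One simple form of test-time compute is best-of-N sampling: instead of accepting a model's first answer, spend extra inference-time compute drawing several candidates and keep the one a scoring function rates highest. A toy sketch, with a stand-in model and verifier in place of a real LLM:

```python
import random

# Toy best-of-N sampling: the more candidates we draw at inference time,
# the better the odds that at least one scores well. `fake_model` and
# `score` are stand-ins for a real LLM and a verifier/reward model.

def fake_model(prompt: str, rng: random.Random) -> str:
    # Stand-in for sampling one candidate answer from a model.
    return rng.choice(["4", "5", "3", "4"])

def score(prompt: str, answer: str) -> float:
    # Stand-in for a verifier; here it simply prefers the correct answer.
    return 1.0 if answer == "4" else 0.0

def best_of_n(prompt: str, n: int, seed: int = 0) -> str:
    rng = random.Random(seed)
    candidates = [fake_model(prompt, rng) for _ in range(n)]
    return max(candidates, key=lambda a: score(prompt, a))

print(best_of_n("What is 2 + 2?", n=8))
```

The trade-off is exactly the one the article describes: quality at inference time is bought with extra computation per query, rather than with a larger or longer-trained model.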
LONGEVITY AI
OpenAI Steps into Longevity Science with Protein Engineering AI
The Recap: OpenAI has unveiled a specialized AI model called GPT-4b micro, designed to optimize protein engineering for scientific discovery. The model successfully enhanced Yamanaka factors (proteins critical for cell rejuvenation), making them significantly more effective in turning ordinary cells into stem cells, a potential breakthrough for longevity research.
OpenAI partnered with Retro Biosciences, a longevity research startup, to develop GPT-4b micro for protein reprogramming tasks.
The AI model improved the function of two Yamanaka factors by over 50%, based on early lab results, significantly increasing cell reprogramming efficiency.
Unlike DeepMind's AlphaFold, which predicts protein structures, GPT-4b micro suggests protein modifications, targeting proteins that are inherently "floppy" and hard to model.
The AI used a "few-shot" prompting technique, allowing it to generate creative protein redesigns that surpassed human-designed versions.
Sam Altman, CEO of OpenAI and Retro's primary investor, reportedly funded Retro Biosciences with $180 million, sparking questions about potential conflicts of interest.
Researchers acknowledge that the AI's decision-making process remains opaque, much like other complex AI systems such as AlphaGo.
OpenAI has framed this project as a proof of concept for its ability to contribute to scientific discovery, though the model is not yet publicly available.
Forward Future Takeaways:
This development hints at a future where AI accelerates breakthroughs in biotechnology, potentially revolutionizing longevity science and regenerative medicine. By making cell reprogramming more efficient, OpenAI's GPT-4b micro could pave the way for advancements in creating replacement tissues, rejuvenating organs, and even extending human lifespans. However, the initiative also raises ethical and governance questions about transparency, conflicts of interest, and the integration of AI-driven discoveries into mainstream science. If successful, this experiment could redefine the role of AI in solving some of humanity's most complex biological challenges. → Read the full article here.
AI IMPACT
Generative AI's Environmental Impact: The Hidden Costs of AI's Gold Rush
The Recap: Generative AI, while transformative in its applications, comes with significant environmental costs due to its massive energy consumption, water usage, and hardware production demands. MIT researchers delve into how this rapidly advancing technology is straining resources and propose the need for a systematic approach to balance innovation with sustainability.
Training and deploying generative AI models, like GPT-4, require vast amounts of electricity, contributing to increased carbon emissions and placing stress on power grids.
Data centers, critical for AI operations, consumed 460 terawatt-hours of electricity globally in 2022, nearing the electricity usage of entire nations like France, with consumption expected to double by 2026.
Data centers rely heavily on chilled water for cooling, consuming two liters of water per kilowatt-hour of energy used, which strains municipal water supplies and impacts ecosystems.
GPUs, essential for generative AI workloads, are energy-intensive to manufacture and rely on environmentally harmful mining and processing methods for raw materials.
Frequent releases of larger, more complex AI models lead to the rapid obsolescence of older versions, wasting significant energy and resources from prior training cycles.
Generative AI operations, such as ChatGPT queries, consume roughly five times more electricity per use than a standard web search, reflecting higher inference demands compared to traditional AI.
The ease of use and lack of user awareness about AI's environmental costs result in unchecked consumption, further compounding the issue.
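Combining two of the figures above gives a rough sense of scale. A back-of-the-envelope sketch (treating all data-center electricity as water-cooled, which overstates the true figure):

```python
# Rough estimate of annual cooling-water use, combining two figures from
# the bullets above: global data-center electricity consumption (460 TWh
# in 2022) and chilled-water intensity (2 liters per kWh). This is an
# upper bound, since not all of that electricity runs water-cooled AI
# workloads.

TWH_TO_KWH = 1e9          # 1 TWh = 1 billion kWh
datacenter_twh = 460      # global data-center consumption, 2022
liters_per_kwh = 2        # chilled-water use per kWh of energy

water_liters = datacenter_twh * TWH_TO_KWH * liters_per_kwh
print(f"{water_liters:.2e} liters")  # 9.20e+11, i.e. roughly 920 billion liters
```

Even as an upper bound, a figure in the hundreds of billions of liters per year illustrates why the article calls for measuring these trade-offs systematically.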
Forward Future Takeaways:
Generative AI's rapid growth could prove unsustainable without a serious rethinking of its environmental footprint. As electricity demand soars and water and material supplies face unprecedented strain, policymakers, researchers, and companies must adopt holistic frameworks to measure and mitigate the trade-offs between AI's societal benefits and ecological costs. A shift toward sustainable AI practices, such as better energy efficiency, renewable energy integration, and hardware recycling, will be essential to ensure that this revolutionary technology does not jeopardize long-term planetary health. → Read the full article here.
RESEARCH PAPERS
HuatuoGPT-o1 Pushes AI into Medical Reasoning with Verifiable Problem Solving
Researchers unveiled HuatuoGPT-o1, a medical language model designed to tackle complex reasoning tasks in healthcare. Unlike prior AI systems focused on mathematics, HuatuoGPT-o1 introduces a verifiable approach to evaluate and refine medical reasoning accuracy.
Its two-stage process, using verifiers for guided reasoning and reinforcement learning for improvement, enables it to outperform both general and medical-specific AI models with just 40,000 verifiable problems. The findings highlight how reinforcement learning amplifies performance, signaling potential breakthroughs for AI in specialized fields like medicine and beyond. → Read the full paper here.
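The two-stage idea can be sketched in miniature: a verifier checks candidate answers against a known ground truth, and verified outcomes act as a reward signal for an RL-style update. The policy, the sample case, and the update rule below are illustrative stand-ins, not the paper's actual training pipeline:

```python
import random

# Stage 1: a verifier gives exact feedback, possible because each problem
# has a known correct answer. Stage 2: an RL-style loop increases the
# weight of answers the verifier accepts. Everything here is a toy
# stand-in for the real model and data.

def verifier(answer: str, ground_truth: str) -> bool:
    return answer.strip().lower() == ground_truth.strip().lower()

def reinforce(policy: dict, cases: list, rng: random.Random, lr=0.5, steps=200) -> dict:
    for _ in range(steps):
        question, truth = rng.choice(cases)
        answers, weights = zip(*policy[question].items())
        answer = rng.choices(answers, weights=weights)[0]
        reward = 1.0 if verifier(answer, truth) else 0.0
        # Reward above baseline (0.5) raises the answer's weight; below lowers it.
        policy[question][answer] = max(policy[question][answer] + lr * (reward - 0.5), 0.01)
    return policy

rng = random.Random(0)
cases = [("Best first-line test for suspected DVT?", "d-dimer")]
policy = {cases[0][0]: {"d-dimer": 1.0, "x-ray": 1.0}}
trained = reinforce(policy, cases, rng)
print(max(trained[cases[0][0]], key=trained[cases[0][0]].get))
```

The key property the paper exploits is that medical problems with verifiable answers make the reward signal cheap and exact, unlike open-ended text where judging correctness is itself hard.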
VIDEO
Sakana AI Unveils Transformer²
Sakana AI's Transformer² introduces self-adaptive large language models capable of updating their weights during inference, enhancing efficiency and dynamic task handling. This open-source method promises scalable learning without costly retraining, advancing AI's flexibility. Get the full scoop in Matt's latest video!
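The flavor of inference-time weight adaptation can be sketched with plain linear algebra: decompose a frozen weight matrix once, then adapt it to a task by rescaling its singular values with a small per-task vector. The matrix and scaling values below are arbitrary stand-ins, not Sakana AI's implementation:

```python
import numpy as np

# Sketch of singular-value adaptation: instead of retraining a full weight
# matrix W, decompose it once offline, then produce a task-specific variant
# at inference time by rescaling its singular values with a small vector z.
# The 4x4 matrix and the values in z are arbitrary examples.

rng = np.random.default_rng(0)
W = rng.standard_normal((4, 4))      # a frozen weight matrix
U, S, Vt = np.linalg.svd(W)          # decompose once, offline

z = np.array([1.2, 1.0, 0.8, 0.5])   # per-task scaling of singular values
W_task = U @ np.diag(S * z) @ Vt     # cheap task-specific adaptation

# Sanity check: with z = 1 everywhere, the original matrix is recovered.
W_identity = U @ np.diag(S) @ Vt
print(np.allclose(W_identity, W))
```

The appeal is the parameter count: adapting a task needs only one scalar per singular value rather than a full copy of the matrix, which is what makes retraining-free task switching plausible.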
FEEDBACK
Help Us Get Better
What did you think of today's newsletter?
Reply to this email if you have specific feedback to share. We'd love to hear from you.
THE DAILY BYTE
AI Isn't Just Hype: OpenAI's CFO Sarah Friar Talks Safety, Smarts, and Strategy
CONNECT
Stay in the Know
Thanks for reading today's newsletter. See you next time!
The Forward Future Team