Good morning, and happy Friday! We’ve got major headlines, two deep dives, and some Friday Facts to kick off the weekend.
We’re again looking at the uncertain future of U.S. AI regulation. While some states are pushing forward with their own restrictive AI laws, the federal government still lacks a unified approach. With Trump now president-elect, the direction of national AI policy could shift—will his administration bring clarity or deepen the divide?
In other news, AI is making waves in quantum computing, and Wendy’s is adopting AI to streamline supply chain and inventory management.
Top Stories 🗞️
AI Poised to Outshine Quantum Computing in Simulation 🤖
AI Regulation Stalled, but Path Forward Emerges 🚦
[FF Original] How Good Is It? 👾
[ICYMI] Interview with Saturnin Pugnet, Visionary Founder of World ID 📽️
Tools Transforming Workflows, Insights, and Media 🧰
🗞️ YOUR DAILY ROLLUP
Top Stories of the Day
Anthropic Joins Forces with Defense Customers
Anthropic partners with Palantir and AWS to integrate its Claude AI models into U.S. defense systems, aiming to boost secure data analysis and decision-making for national security.
AI Boosts Efficiency in Oil & Gas
SLB's AI-driven edge technology enhances oil and gas operations, providing real-time insights that improve production, reduce emissions, and enable autonomous processes, reshaping industry sustainability and efficiency worldwide.
Wendy’s Adopts AI for Supply Chain
Wendy’s partners with Palantir to streamline inventory management, autonomously forecasting shortages and optimizing supply for high-demand items, aiming to improve ROI and set industry standards.
Mistral Unveils Multilingual Moderation API
Mistral’s new API uses its Ministral 8B model to classify content across languages, providing scalable moderation for categories like hate speech. Ongoing refinement aims to enhance accuracy and reduce bias.
Microsoft Adds AI to 365 Subscriptions
Microsoft integrates Copilot AI features into Microsoft 365 Personal and Family plans, offering monthly AI credits and raising subscription costs, replacing the standalone Copilot Pro subscription.
🤖 COMPUTING
Why AI Could Ultimately Eat Quantum Computing’s Lunch
The Recap: Rapid advancements in AI-driven simulation are posing a real challenge to quantum computing’s anticipated role in complex fields like chemistry and materials science. While quantum computing holds theoretical advantages for certain problems, AI is showing it may reach useful milestones much sooner.
New neural network models have made AI the leading tool for simulating complex materials and quantum effects, particularly in weakly correlated systems.
AI-powered simulations, built on classical tools like density functional theory, can model systems of up to 100,000 atoms, handling most practical chemistry and materials problems with ease.
Companies like Meta and DeepMind have made AI models that excel in material discovery and modeling, challenging the need for near-term quantum applications.
AI approaches pioneered by researchers like EPFL’s Carleo are proving effective for highly complex quantum systems, typically the domain of quantum computing.
High costs, complexity, and a need for millions of qubits mean large-scale quantum computers are likely still decades away from surpassing classical or AI-driven simulations.
AI models require massive data for training, often generated through costly simulations, creating a resource barrier for many research teams.
Alongside AI, other classical methods for quantum simulation continue to advance, suggesting a competitive landscape rather than a clear AI or quantum dominance.
Forward Future Takeaways: As AI methods become more powerful and accessible, they could fulfill a significant portion of the tasks once reserved for quantum computing. This rapid AI progress may prompt quantum research to refocus on a narrower set of complex applications, like high-temperature superconductors, where quantum retains unique advantages. In the long run, AI’s gains might reduce the urgency for large-scale quantum machines, reshaping expectations and investments in both fields. → Read the full article here.
👾 FORWARD FUTURE ORIGINAL
The Art & Science of AI Evaluation
In the previous article, we introduced how evaluating LLM outputs differs from testing standard software: unlike conventional software, LLM output is not deterministic. Let’s now walk through an end-to-end example to illustrate this, integrating other concepts we’ve covered in this series.
Let’s take the enterprise situation of a clothes store implementing a product-enquiry chatbot for its (potential) customers. The chatbot pops up on the store website and can answer all sorts of questions about the product catalog.
Let’s see how the generative AI concepts we touched upon come into action in such a scenario.
Firstly, there is the question of product classification (or taxonomy) within the catalog, something we were used to even before AI came along. You might have waded through a taxonomy tree such as mens > summer > casual, or womens > winter > travel.
This can be seen as part of the deterministic setup, and is typically stored in a conventional database (in other words, structured data) as part of the standard software stack. Nothing new there per se. Where it gets interesting, of course, is when users can query all of this in natural language, which is where our LLM ideas come in with a bang. → Continue reading here.
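To make the contrast concrete, here is a minimal, purely illustrative sketch. The catalog lookup is deterministic; the `route_query` function stands in for the LLM step that maps free text onto a taxonomy path (a real system would call a model here, and its output could vary from run to run, which is exactly why evaluating it differs from ordinary software testing). All names and data are hypothetical.

```python
# Hypothetical sketch: deterministic taxonomy lookup vs. the natural-language
# routing step an LLM would handle. All names and products are illustrative.

CATALOG = {
    ("mens", "summer", "casual"): ["linen shirt", "chino shorts"],
    ("womens", "winter", "travel"): ["puffer jacket", "thermal leggings"],
}

def lookup(path):
    """Deterministic: the same taxonomy path always returns the same products."""
    return CATALOG.get(tuple(path), [])

def route_query(query):
    """Stand-in for the LLM step: map free text to a taxonomy path.
    A real chatbot would call a model here, so the output would not be
    guaranteed identical run to run, unlike this keyword sketch."""
    q = query.lower()
    gender = "womens" if "women" in q else "mens"
    season = "winter" if any(w in q for w in ("winter", "cold")) else "summer"
    style = "travel" if "travel" in q else "casual"
    return (gender, season, style)

products = lookup(route_query("something warm for women travelling in winter"))
```

The point of the sketch: the database half can be tested with exact assertions, while the model half must be evaluated semantically, since many different paths or phrasings can be acceptable answers to the same question.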
🚦 POLICY
U.S. AI Regulation Stalled by Industry Resistance, But Experts See a Path Forward
The Recap: The U.S. continues to grapple with AI regulation, with a mix of state-level initiatives and federal actions, but comprehensive national policy remains elusive. Despite challenges, experts see potential in existing laws and rising momentum for a unified approach that could drive meaningful AI governance.
Tennessee protected voice artists against AI cloning, and Colorado implemented a risk-based AI policy approach.
Although California passed various AI-related bills, Governor Newsom vetoed SB 1047, a significant transparency and safety regulation bill, after industry opposition.
The FTC has cracked down on data misuse in AI, and the FCC proposed rules requiring disclosures for AI-generated political ads.
The AI Executive Order established the U.S. AI Safety Institute, which assesses AI risks in collaboration with top AI labs, though its continued existence depends on executive support.
Tech leaders and investors like Meta’s Yann LeCun and Khosla Ventures' Vinod Khosla have actively opposed tighter AI regulations, citing concerns over hindering innovation.
With nearly 700 state-level AI bills introduced this year, pressure is building for a coherent national framework to avoid a fragmented regulatory landscape.
UC Berkeley’s Jessica Newman and California State Senator Scott Wiener believe the challenges have set the stage for future regulatory efforts, with increasing calls for federal policy.
Forward Future Takeaways: While the path to cohesive AI regulation in the U.S. faces hurdles, the urgency for oversight is growing. As AI’s potential risks and societal impacts become more apparent, industry resistance may soften in favor of stability and clarity in regulation. With substantial public and legislative interest, federal policy could emerge, steering AI development in a safer, more consistent direction. → Read the full article here.
🛰️ NEWS
Looking Forward: More Headlines
Grounding LLMs for Science: A new paper presents a method to improve LLM accuracy on scientific tasks by adapting how the models use external tools.
Apple Explores Smart Glasses: Apple's internal "Atlas" study assesses smart glasses, signaling potential interest in wearable tech to compete with Meta.
Lyft Inks Robotaxi Deals: Lyft collaborates with Mobileye and May Mobility to introduce robotaxis, advancing its position in autonomous ride-hailing.
Microsoft Tests Xbox AI Chatbot: Microsoft’s new AI chatbot assists Xbox Insiders with support, adding personalization options for enhanced user experience.
OpenAI Hires Meta’s AR Lead: OpenAI appoints Caitlin Kalinowski to lead robotics and hardware, indicating plans for AI-driven consumer products.
📽️ INTERVIEW
ASI/AGI, Open vs Closed AI, Founding with Sam Altman, and Verifying Humanness!
In case you missed it, I recently interviewed Saturnin Pugnet, the visionary founder of WorldID. In this conversation, we explore his mission to create a future where people can securely verify their human identity in a world increasingly populated by AI. We dive into the groundbreaking infrastructure behind WorldID and what it means for the future of digital identity, privacy, and trust. Check out the full interview below! 👇
🤔 FRIDAY FACTS
How Much Energy Does a ChatGPT Query Use?
A single ChatGPT query consumes approximately 2.9 watt-hours (Wh) of electricity, roughly the energy needed for 10 Google searches.
This may seem small, but with millions of queries processed every day, the collective energy usage becomes substantial. For instance, if ChatGPT processes 10 million queries in a day, that’s roughly equivalent to the average daily electricity consumption of over 3,200 U.S. households.
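For the curious, the arithmetic behind that comparison can be sketched directly. Note that the per-household consumption used below is the value implied by the article’s own figures (about 9 kWh per household per day), not an independently sourced statistic.

```python
# Back-of-the-envelope check of the figures above. The per-household number
# is the assumption implied by the article's comparison, not an official stat.

WH_PER_QUERY = 2.9            # Wh per ChatGPT query, as cited above
QUERIES_PER_DAY = 10_000_000  # the article's example daily volume

total_kwh = WH_PER_QUERY * QUERIES_PER_DAY / 1000  # 29,000 kWh per day

# Assumed daily household consumption that yields the article's ~3,200 figure.
ASSUMED_HOUSEHOLD_KWH_PER_DAY = 9.0
households = total_kwh / ASSUMED_HOUSEHOLD_KWH_PER_DAY  # ~3,200 households
```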