Good morning, it’s Thursday. OpenAI’s o3 model is setting intelligence records (but at what cost?), experts are sparring over whether AI will steal our jobs or just our coffee breaks, and enterprises are still trying to figure out if AI is worth the hype.
Elsewhere, NVIDIA doubles down on digital twins, and in a story straight out of Her, someone has fallen in love with their chatbot.
Also landing today: The final part of OpenAI Finances – where we dive into Elon Musk’s lawsuit against OpenAI. 🍿
🗞️ YOUR DAILY ROLLUP
Top Stories of the Day
📉 AI Adoption Lags: Few Enterprises See Results Yet
Despite the buzz, only 25% of enterprises have implemented AI, and just a fraction of those report tangible benefits, says Vellum's State of AI Development Report. Most remain in the exploration phase, focusing on chatbots and document processing. OpenAI leads in models, but open-source and multi-model approaches are gaining momentum. Challenges like hallucinations, security, and stakeholder trust loom large, with experts anticipating 2025 as the year of practical AI solutions.
🐍 AI Tackles Snake Venom with Protein Designs
Scientists harnessed AI to create proteins that neutralize snake venom toxins, focusing on the neurotoxic effects of "three-finger toxins" in snakes like cobras. Using tools like AlphaFold2, the team developed inhibitors that protected mice from venom-induced neurotoxicity, though results against cell damage were mixed. While a proof of concept, this breakthrough highlights AI's promise in transforming antivenom development for safer, more accessible treatments.
🦜 AI Decodes Animal Talk to Reconnect with Nature
The Earth Species Project (ESP) is using AI to decipher animal communication, aiming to foster interspecies understanding and boost conservation. Tools like NatureLM-audio analyze animal vocalizations, creating "rudimentary dictionaries" from zebra finch calls to beluga whale sounds. By unveiling the complexity of animal interactions, ESP seeks to deepen humanity’s bond with nature, positioning AI as a game-changer in our relationship with the natural world.
👥 NVIDIA Backs MetAI to Transform Digital Twins
NVIDIA has invested $4 million in Taiwan-based MetAI, a startup creating AI-driven digital twins for industries like semiconductors and smart warehouses. MetAI’s tools integrate with NVIDIA's Omniverse, enabling faster robotics and automation development. Successful collaborations, such as cutting warehouse simulation times, showcase its potential. With plans for global expansion and a U.S. office by 2025, MetAI is set to lead the digital twin revolution.
💔 AI Love: Woman Forms Deep Bond with Chatbot
Ayrin, a 28-year-old nursing student, developed a romantic relationship with an AI boyfriend built using ChatGPT, spending up to 56 hours a week on the connection. While it offers emotional support and intimacy, it also brings guilt and financial strain. Experts caution against the societal effects of AI companions, as their endless empathy blurs ethical boundaries, raising concerns about loneliness and corporate influence on human relationships.
🎯 AGI MILESTONE
How Should We Test AI for Human-Level Intelligence? OpenAI’s o3 Sets a New Benchmark
The Recap: OpenAI's experimental AI model, o3, achieved a groundbreaking 87.5% score on the ARC-AGI test, setting a new standard in the quest for artificial general intelligence (AGI). While impressive, the achievement has sparked debate among researchers about how to assess AI's reasoning and generalization abilities accurately.
OpenAI’s o3 shattered previous records on the ARC-AGI test, which evaluates reasoning and generalization, with a score of 87.5%, up from the previous high of 55.5%.
Researchers praise o3’s performance on other benchmarks, such as the challenging FrontierMath test, while noting its reliance on costly, time-intensive processes.
The ARC-AGI test, created in 2019, assesses AI’s ability to perform pattern recognition and basic reasoning, tasks humans typically master in early childhood.
Despite o3’s success, researchers like David Rein caution that existing benchmarks may not fully capture AI’s ability to reason or generalize in real-world contexts.
OpenAI has not disclosed the inner workings of o3, but experts speculate it leverages advanced "chain of thought" reasoning, iterating through multiple solutions to optimize results.
Sustainability is a concern; o3’s high-scoring mode took 14 minutes per task and incurred significant computing costs.
Broader tests, such as Yue's MMMU benchmark, push AI to perform visual and multidisciplinary tasks, with o3’s predecessor, o1, scoring 78.2%—close to human top-tier performance of 88.6%.
Forward Future Takeaways:
The success of OpenAI's o3 underscores rapid advancements in AI reasoning and generalization, but it also highlights critical challenges in creating fair, comprehensive benchmarks for AGI. With sustainability concerns and the absence of a clear technical definition for AGI, the road ahead requires innovative testing methods that mimic real-world complexity. This milestone sparks questions about the ethical implications of AI systems nearing human-level intelligence, making it essential to align progress with societal values. → Read the full article here.
👾 FORWARD FUTURE ORIGINAL
OpenAI Finances | The Lawsuit (Part 3)
“OpenAI was founded in December 2015 as a non-profit dedicated to developing “safe” artificial intelligence. Its founding team included Sam Altman, Elon Musk, Greg Brockman, Jessica Livingston, and others.”
sacra.com
The increasing commercialization of OpenAI, in particular its close collaboration with Microsoft, eventually led to an escalation with Elon Musk. In February 2024, Musk filed a lawsuit against OpenAI and Sam Altman, accusing them of betraying the original non-profit mission and instead focusing on maximizing profits in collaboration with Microsoft.
However, the conflict did not begin there: as early as 2023, Musk had argued that a form of AGI had already been achieved with GPT-4 — a claim which, if true, would place the technology outside the scope of Microsoft’s license and prohibit further cooperation between OpenAI and Microsoft.
“Tech entrepreneur Elon Musk is suing ChatGPT developer OpenAI for what he says is a breach of the agreement he made with CEO Sam Altman and President Greg Brockman when the company was founded. OpenAI was supposed to be an open and non-profit antithesis to the commercial and closed Google, but it does not live up to this claim. Instead of developing open technologies for the benefit of humanity as a whole, OpenAI is now effectively a division under the leadership of Microsoft, according to Musk. According to the complaint, GPT-4 from the year 2023 is not only good at reasoning but even better than the average human.
Although GPT-4 performs well in many cases - and even beats humans in some - other benchmarks such as GAIA show that reasoning is its weakness. The complaint also criticizes the fact that there are no scientific publications by OpenAI that shed light on the design of GPT-4, only press releases "boasting about its performance." The central passage of the document follows shortly after: "Furthermore, on information and belief, GPT-4 is an AGI algorithm, and hence expressly outside the scope of Microsoft’s September 2020 exclusive license with OpenAI."
The Decoder
Elon Musk has filed a lawsuit against the organization he once helped found. The core of the dispute is the conversion of OpenAI from a non-profit organization to a hybrid structure in which the for-profit arm OpenAI LP plays a central role. Musk accuses OpenAI of violating the original mission and ethical principles on which the organization was founded. He argues that this change has turned OpenAI into a for-profit company that is increasingly distancing itself from its responsibility to develop AI for the benefit of all humanity. → Continue reading here.
🔮 AI FUTURES
Will Artificial Intelligence Replace Us or Empower Us?
The Recap: There are two contrasting futures of AI: one where it replaces human labor and consolidates wealth, and another where it empowers people and enhances productivity. Experts like Erik Brynjolfsson emphasize designing AI systems to complement humans rather than compete with them to ensure a more equitable and empowering future.
Erik Brynjolfsson warns against the "Turing Trap," where AI systems are designed to mimic humans rather than act as tools that enhance human productivity.
Economists Daron Acemoglu and Pascual Restrepo criticize "so-so technologies" that replace human workers without driving meaningful productivity gains, like self-checkout kiosks.
Massive investments in education and training are critical for humans to thrive alongside AI, with Brynjolfsson estimating that $9 in human capital is needed for every $1 spent on AI technology.
Employers often hesitate to pay for worker training, fearing employees will leave for competitors, which suggests governments may need to step in with funding or incentives.
Historically, most technologies, such as mechanized looms and synthetic materials, have empowered workers or created entirely new industries, offering hope that AI could follow a similar path.
AI applications like Google's protein structure predictions (a Nobel-winning project), flood forecasting, and blindness prevention illustrate how AI can extend human capabilities and solve global problems.
Nick Bostrom cautions against a future where AI outperforms humans in all activities, but Brynjolfsson argues that deliberate choices can steer AI development toward beneficial outcomes.
Forward Future Takeaways:
The future of AI will hinge on prioritizing designs that amplify human capabilities rather than replacing them. Education, training, and policies that foster collaboration between humans and AI are essential to achieving a balanced, equitable future. The tools to guide AI toward solving humanity’s greatest challenges already exist, but the choice to use them lies with us. As Brynjolfsson emphasizes, "The future is not preordained." → Read the full article here.
🛰️ NEWS
Looking Forward
💼 AI Skills Key to Landing Jobs in 2025: LinkedIn reports AI hiring is growing 30% faster than overall hiring, with AI fluency expected in many interviews. By 2030, 70% of job skills will change due to AI, making adaptability crucial.
🎥 Synthesia Raises $180M for AI Video Platform: The B2B AI video startup, now valued at $2.1B, serves 60,000 businesses with tools for text-to-avatar video creation. Backed by NEA and GV, Synthesia plans to enhance avatar realism.
⚖️ Vatican City Enacts AI Ethics Law: The Vatican's new AI decree bans discriminatory uses, subliminal manipulation, and practices violating human dignity. A special commission will oversee ethical AI compliance.
🔬 RESEARCH PAPERS
AI Assembles Record-Breaking Quantum Computer with Ultracold Atoms
Researchers in China used AI to arrange 2,024 ultracold rubidium atoms into a precise grid, breaking the record for the largest quantum computer built with neutral atoms. The AI optimized laser-tweezer sequences to position the atoms efficiently, completing the task in just 60 milliseconds—a time that doesn’t increase with grid size.
While the array hasn’t yet performed computations, this scalable method could pave the way for quantum processors with 1,000 to 10,000 qubits, a threshold for transformative advancements in quantum information processing and error correction. → Read the full story here.
📽️ VIDEO
OpenAI Unveils New ChatGPT Feature "TASKS"
OpenAI has launched a beta feature allowing ChatGPT to remember tasks and send reminders, making it more like a digital assistant. Users can set recurring tasks, view reminders, and receive proactive suggestions. Currently available for paid subscribers, this feature marks a step toward ChatGPT becoming a fully agentic system capable of accomplishing real-world tasks. Get the full scoop in Matt’s latest video! 👇