🗞️ YOUR DAILY ROLLUP
Top Stories of the Day
Creatives Protest AI Data Scraping
Over 11,500 creatives, including notable figures, petition against AI companies' unlicensed use of their works for training, citing threats to livelihoods and pushing for opt-out data models.
Meta Launches AI to Evaluate AI
Meta's new "Self-Taught Evaluator" AI autonomously assesses other AI models' work, reducing human involvement and improving accuracy in tasks like coding and math, potentially enabling self-improving AI agents.
Portugal Must Retrain Workforce in AI
A McKinsey report urges Portugal to retrain 1.3 million workers in AI by 2030 to close its productivity gap with the EU, requiring significant reskilling investments across sectors.
DeepMind CEO on AI's Watershed Moment
DeepMind CEO Demis Hassabis views AI's recent contributions to science, including a Nobel Prize, as a pivotal moment, predicting its underestimated long-term potential to revolutionize fields like drug development and beyond.
US AI Safety Institute Faces Uncertainty
The U.S. AI Safety Institute, crucial for setting AI standards, risks defunding without formal Congressional authorization, threatening its stability and the country's global AI leadership.
Ex-OpenAI CTO Mira Murati Raising $100M
Former OpenAI CTO Mira Murati is raising $100 million for her new AI startup, which will likely compete with OpenAI by developing advanced proprietary AI models, reflecting a broader trend among AI leaders.
☝️ POWERED BY MAMMOUTH AI
Access the Best AI Models in One Place for $10
Get access to the best LLMs (OpenAI o1, Claude 3.5, Llama 3.1, GPT-4o, Gemini Pro, Mistral) and the best AI-generated images (Flux.1 Pro, Midjourney, SD3, DALL-E) in one place for just $10 per month. Enjoy on mammouth.ai.
🧠 SUPERINTELLIGENCE
AI Pioneer Challenges AI Hype, Criticizing Predictions of Superhuman Intelligence
The Recap: Yann LeCun, a key figure in AI research and Chief AI Scientist at Meta, believes the current hype about AI surpassing human intelligence is premature. While AI has made significant advances, he argues that it's still far from reaching the capabilities of even a house cat, despite claims from other industry leaders.
LeCun, a pioneer in AI and co-winner of the Turing Award, remains skeptical about the imminent rise of superintelligent AI.
He considers warnings of AI posing an existential threat as exaggerated and dismisses them as "complete B.S."
Despite his own significant contributions to AI development, LeCun criticizes today’s AI systems as lacking genuine intelligence, arguing they are just highly sophisticated language predictors.
He has publicly disagreed with fellow AI leaders like Elon Musk, Geoffrey Hinton, and Yoshua Bengio regarding the dangers of future AI.
LeCun argues that, while AI is a powerful tool, developing human-like intelligence will likely take decades and won’t be achieved through current large language models (LLMs).
Meta’s AI developments, especially in real-time translation and content moderation, have been transformative for the company’s growth, but LeCun maintains they are far from human-level intelligence.
His vision for future AI focuses on creating systems that learn similarly to animals, suggesting that new approaches are needed beyond today’s data-heavy models.
Forward Future Takeaways:
LeCun’s skepticism about near-term AI superintelligence suggests that many current AI investments may be betting on an overhyped future. His belief that current methods won’t yield human-like intelligence signals that a shift in research priorities, focusing on more biologically inspired learning, could be key in shaping AI’s long-term development. If his views hold, businesses banking on rapid AI advancements may need to lower expectations. → Read the full article here.
💻️ AI MODELS
Anthropic Unveils Claude 3.5 Models and Breakthrough AI "Computer Use" Feature
The Recap:
Anthropic has launched two upgraded AI models: Claude 3.5 Sonnet and Claude 3.5 Haiku. Additionally, they’ve introduced a groundbreaking new feature—public beta access to "computer use," allowing AI to operate computer interfaces like a human.
Claude 3.5 Sonnet shows improved performance in coding, surpassing all publicly available models for agentic coding tasks.
"Computer use" enables Claude to navigate interfaces by clicking, typing, and using screens, with early adopters like Replit and GitLab exploring complex, multi-step automation.
Claude 3.5 Sonnet’s performance in coding and tool use benchmarks, such as SWE-bench and TAU-bench, has improved significantly.
The newly released Claude 3.5 Haiku offers enhanced speed, low latency, and outperforms previous models in coding tasks at a lower cost.
Anthropic has partnered with the US and UK AI Safety Institutes for pre-deployment testing of Claude 3.5 Sonnet.
Early testing of Claude’s "computer use" is experimental, with challenges in interface actions like scrolling and dragging.
Safety protocols, including classifiers for misuse detection, accompany this release to mitigate risks like spam or fraud.
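To make the "computer use" capability described above more concrete, here is a minimal sketch of what a request granting Claude a virtual screen might look like. This is an illustrative assumption based on the public beta announcement, not Anthropic's definitive API: the tool type string, field names, and model identifier should all be verified against Anthropic's official documentation before use.

```python
# Hypothetical sketch: assembling a Messages API payload that gives Claude
# a "computer" tool (screen size in pixels). Field names and the tool type
# string are assumptions based on the public beta and may change.

def build_computer_use_request(prompt: str) -> dict:
    """Build a request payload asking Claude to operate a virtual display."""
    return {
        "model": "claude-3-5-sonnet-20241022",  # assumed beta model ID
        "max_tokens": 1024,
        "tools": [
            {
                "type": "computer_20241022",  # assumed beta tool type
                "name": "computer",
                "display_width_px": 1024,
                "display_height_px": 768,
            }
        ],
        "messages": [{"role": "user", "content": prompt}],
    }

payload = build_computer_use_request("Open the spreadsheet and sum column B.")
print(payload["tools"][0]["type"])  # computer_20241022
```

In the beta, the model responds with actions (click coordinates, keystrokes, screenshot requests) that the caller's own harness must execute and report back, which is why early adopters pair it with sandboxed environments.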
Forward Future Takeaways:
Anthropic’s upgraded Claude models and the introduction of AI-driven "computer use" point to an era of more autonomous AI capabilities in software development and everyday tasks. While still in its early stages, this technology could significantly transform how AI systems interact with human tools, making AI even more integral to workflows and user experiences across industries. → Read the full article here.
👾 FORWARD FUTURE ORIGINAL
Scale Is All You Need? Part 3-3
This article is a Forward Future Original by guest author, Kim Isenberg.
After analyzing the increasing demand for computing and energy, it becomes clear that the development towards AGI is not only a technological challenge, but also an infrastructural one. Enormous financial, technological and energy resources are required to operate these massive data centers. However, these requirements are not equally available in all regions of the world.
The question of who can afford the necessary infrastructure – such as large data centers and the corresponding power consumption – leads us to another critical point on the road to AGI: social and global inequality. While wealthy nations and large tech companies are able to invest immense sums in building and operating data centers, poorer countries often face insurmountable hurdles. Access to the resources needed for AI applications is increasingly becoming a question of global competition and the distribution of power.
The production of electricity is another crucial element that has not only technological but also geopolitical implications. Countries that have abundant energy sources or are able to generate enormous amounts of energy have a decisive advantage. For other countries that either do not have sufficient access to such resources or whose infrastructure is not designed to meet increasing demand, this can lead to a growing dependency on the technological and energy superpowers.
📽️ VIDEO
o1's Superiority Questioned in Use Cases
A deep dive into OpenAI o1's use cases reveals that, for 95% of tasks, GPT-4 performs just as well. Despite its advanced reasoning and problem-solving capabilities, o1 is only necessary for a narrow set of high-complexity use cases, making its advantages less relevant for most users. Get the full scoop in our latest video! 👇