Good morning, it’s Tuesday. We’re diving into the rare political unicorn: a topic Kamala Harris and Donald Trump might actually agree on. While AI is remaking industries and prompting talk of new regulations, both candidates have mostly sidestepped it on the campaign trail. But beneath the silence, bipartisan AI policies are still humming along.
In other news: MIT's new AI technique accelerates versatile robot training, AI is revolutionizing urban disaster response, and Perplexity’s CEO faces backlash for offering AI support amid the NYT strike.
Top Stories 🗞️
A Rare Point of Policy Agreement in 2024 Campaign 🤝
Faster, Smarter Robot Training 🤖
[FF Original] Of Parrots and Parallel Worlds 👾
[New Video] IBM Unveils Granite 3.0 📽️
Tools Transforming Research and Media 🧰
🗞️ YOUR DAILY ROLLUP
Top Stories of the Day
Runway’s New Gen-3 Camera Control
Runway’s Gen-3 Alpha Turbo now offers Advanced Camera Control, giving creators precise control over camera movement so they can craft intentional, dynamic shots and explore scenes more creatively.
AI Becomes Essential in Urban Disaster Response
AI is revolutionizing urban disaster response with faster forecasts and damage assessments, crucial for rising urban populations. UN initiatives and tech innovations enhance real-time detection and emergency alerts.
Meta’s Nuclear AI Data Center Blocked
Meta’s plans for a nuclear-powered AI data center were halted after a rare bee species was found on the proposed site, adding to its regulatory hurdles. Competitors are pressing ahead with nuclear energy, while Meta pursues other carbon-free options.
Perplexity CEO Proposes AI Amid NYT Strike
Perplexity CEO Aravind Srinivas offered AI support during an NYT worker strike, drawing criticism for undermining the labor action. This follows recent legal tensions between the two companies over content use.
Claude 3.5 Sonnet’s New PDF Analysis Tool
Claude 3.5 Sonnet introduces Visual PDFs for analyzing text and images, available to paid users. It works well with smaller files, but ChatGPT Plus handles larger documents better, positioning Claude as a limited but useful alternative.
Anduril Weighs U.S. Sites for Arsenal-1 Facility
Anduril plans a massive 5-million-square-foot plant, Arsenal-1, in Arizona, Ohio, or Texas to produce autonomous military systems, marking a shift toward agile, software-driven defense manufacturing.
🤝 UNITED IN AI
Presidential AI Policy: A Rare Point of Agreement in a Divisive Campaign
Illustration by The Atlantic
The Recap: Despite mounting concern about AI’s transformative impact on jobs and the economy, U.S. presidential candidates Kamala Harris and Donald Trump have mostly avoided the topic. Interestingly, they might actually agree on some AI policy issues, even though partisan differences still complicate potential regulations.
Both the Trump and Biden administrations have supported the growth of AI policy, focusing on safety, standards, and workforce preparation while building on Obama-era foundations.
Despite that past alignment, Trump has recently criticized Biden’s AI policies as “woke,” even though many of them continue efforts begun under his own administration.
Federal AI initiatives have aimed to expand AI expertise, drive international collaboration, and support innovation while preparing for automation’s potential labor impacts.
AI hasn’t taken center stage in campaign rhetoric but looms large in voter concerns, with bipartisan support for regulation.
Public worry over AI-fueled risks, including disinformation, fraud, and job automation, has been rising, with both parties supporting tighter AI oversight.
Despite limited campaign focus, AI regulation is becoming increasingly important as it’s poised to shape the economic and social landscape.
The next administration’s approach to AI could significantly shape U.S. policy and its role in international AI governance.
Forward Future Takeaways: As AI continues to reshape industries and labor markets, bipartisan support could help drive responsible AI policy—but only if partisan politics don’t derail it. While Harris and Trump may not see eye-to-eye on much, a mutual understanding of AI’s importance could sustain momentum toward meaningful regulation that aligns with public concern and economic interests. The next president will likely have a decisive influence on this frontier, balancing innovation with safeguards to address widespread anxieties around AI’s societal impacts. → Read the full article here.
👾 FORWARD FUTURE ORIGINAL
Of Parrots and Parallel Worlds
We started this series by raising the question: are language models just parrots? In that context, we discussed the terms non-deterministic, probabilistic, and stochastic (save that last one for your next dinner party!). Let’s explore some of the implications.
As we touched upon in the first article, it is not easy to know where language will flow: you cannot necessarily guess what your interlocutor is going to say next. Human speech can thus itself be seen as inherently probabilistic.
❝
Human speech is inherently unpredictable
The beauty of the transformer architecture behind large language models (LLMs) is that we have captured this essence in a machine. Fully captured it? Not quite, as we shall see.
We’ve used the term ‘next-word predictor’ to describe an LLM, and in the strictest sense, that’s how the architecture works: one word at a time. If you think about it, it seems quite miraculous that this alone can produce coherent, comprehensible, and even cogent language! → Continue reading here.
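For readers who want to see that next-word loop in code, here is a minimal sketch using the Hugging Face transformers library, with GPT-2 and the sampling settings chosen purely as illustrative assumptions rather than anything from the article:

```python
# Minimal sketch (illustrative, not from the article): autoregressive
# next-token prediction with a small off-the-shelf model.
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = "The beauty of the transformer architecture is"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids

# Generate a handful of tokens, one at a time, by sampling from the
# probability distribution the model assigns to the next token.
for _ in range(20):
    with torch.no_grad():
        logits = model(input_ids).logits[:, -1, :]        # scores for the next token only
    probs = torch.softmax(logits, dim=-1)                  # scores -> probabilities
    next_token = torch.multinomial(probs, num_samples=1)   # stochastic choice, not argmax
    input_ids = torch.cat([input_ids, next_token], dim=-1)

print(tokenizer.decode(input_ids[0]))
```

Each pass through the loop asks the model for a distribution over possible next tokens and samples from it, which is exactly the probabilistic, stochastic behavior discussed above.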
🤖 ROBOTICS
A New Approach to Training Versatile Robots Quickly and Efficiently
The Recap: Inspired by large language models (LLMs), MIT researchers developed a method called Heterogeneous Pretrained Transformers (HPT) to train general-purpose robots. This technique merges data from different sources, creating a shared "language" for robots, enabling them to learn diverse tasks more rapidly and cost-effectively.
Traditional robot training requires task-specific data, limiting adaptability and raising costs.
HPT combines data from varied sources like simulations and sensors, creating a universal "language" that generative AI can understand.
Using diverse data, HPT allows robots to learn new tasks without starting from scratch, outperforming traditional methods by over 20% in tests.
Inspired by LLMs, HPT pretrains on a broad dataset and fine-tunes with specific robot data, enhancing flexibility.
The transformer architecture in HPT processes both vision and proprioception data, crucial for executing complex motions (a simplified version is sketched in code below).
Pretrained on a massive dataset, HPT scales up robotic learning and adapts to different robot designs.
Future goals include enabling HPT to process unlabeled data, aiming for a "universal robot brain" that can be downloaded and deployed.
Forward Future Takeaways: HPT could revolutionize robotics, offering faster, cost-effective training for adaptable robots suited to various tasks and environments. If successful, this approach could lay the groundwork for highly versatile robotic systems, making robots that can seamlessly transition between tasks—much like LLMs have adapted across language functions—paving the way for more accessible robotics in everyday settings. → Read the full article here.
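To make the architecture easier to picture, here is a deliberately simplified PyTorch sketch of the HPT idea, our own illustrative stand-in rather than MIT’s implementation: modality-specific “stems” project vision and proprioception inputs into a shared token space, a shared transformer trunk is pretrained on pooled data, and a small robot-specific head is fine-tuned per task.

```python
# Illustrative sketch only: a simplified stand-in for the HPT idea, not the MIT code.
import torch
import torch.nn as nn

class SimpleHPT(nn.Module):
    def __init__(self, vision_dim=512, proprio_dim=32, d_model=256, n_actions=7):
        super().__init__()
        # Stems: map each modality into the shared token space.
        self.vision_stem = nn.Linear(vision_dim, d_model)
        self.proprio_stem = nn.Linear(proprio_dim, d_model)
        # Trunk: a shared transformer, pretrained across heterogeneous data sources.
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=8, batch_first=True)
        self.trunk = nn.TransformerEncoder(layer, num_layers=4)
        # Head: small, robot-specific output layer fine-tuned per embodiment.
        self.action_head = nn.Linear(d_model, n_actions)

    def forward(self, vision_feats, proprio_feats):
        # Each modality becomes a token in the same space, so data from
        # different robots and sensors can be mixed during pretraining.
        tokens = torch.stack(
            [self.vision_stem(vision_feats), self.proprio_stem(proprio_feats)], dim=1
        )
        fused = self.trunk(tokens)                   # shared "language" across embodiments
        return self.action_head(fused.mean(dim=1))   # predict an action vector

# Toy usage: one batch of pre-extracted vision features and joint readings.
model = SimpleHPT()
actions = model(torch.randn(4, 512), torch.randn(4, 32))
print(actions.shape)  # torch.Size([4, 7])
```

The design choice to keep the trunk shared while swapping stems and heads is what lets a pretrained model be reused across robot designs instead of retraining from scratch for each one.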
🛰️ NEWS
Looking Forward: More Headlines
AI “Interview” Controversy: Off Radio Krakow’s AI-generated interview with the late poet Wisława Szymborska faced backlash, raising ethical concerns.
AI Increases Power Costs: AI data centers from major tech firms are increasing electricity costs for consumers, prompting regulatory concerns.
📽️ VIDEO
IBM Launches Granite 3.0: Open-Source Small Model Suite
IBM recently invited me out to their New York HQ to showcase Granite 3.0—a powerful, open-source language model designed for efficient enterprise use on low-power devices. Complementing Granite, InstructLab introduces an “alignment” technique, letting companies add proprietary data without full retraining. Thanks to IBM for the invite and for partnering with us on our most recent video! 🙏