Good morning, it's Friday. AI is getting louder, faster, hungrier, like a model trained on Daft Punk. ElevenLabs just secured $180M to make synthetic voices indistinguishable from human ones, NVIDIA's RTX 5090 is shattering benchmarks, and California is flirting with nuclear power to feed AI's insatiable energy appetite.
Meanwhile, Google quietly dropped Gemini 2.0 Pro Experimental, and if you thought AI art could be copyrighted, the US Copyright Office just said, "Nice try."
🤔 FRIDAY FACTS
What was the first AI to beat a world champion in a game of chess?
Stick around to find out the answer! 👇
🗞️ YOUR DAILY ROLLUP
Top Stories of the Day
🎙️ ElevenLabs Raises $180M, Hits $3.3B Valuation
ElevenLabs, the AI audio startup specializing in hyper-realistic synthetic voices, secured a $180 million Series C round, bringing its valuation to $3.3 billion. The funding will advance its AI voice and multimodal models for media, gaming, and telecom while expanding consumer products like AI-generated audiobooks. As it powers brands like ESPN and The Atlantic, the company is also ramping up AI safety efforts amid deepfake concerns.
🌐 Google Unveils Gemini 2.0 Pro Experimental AI Model
In a subtle yet significant move, Google has introduced Gemini 2.0 Pro Experimental, its latest flagship AI model, as noted in a recent changelog for the company's Gemini chatbot app. Building upon the multimodal capabilities of its predecessors, Gemini 2.0 Pro Experimental is designed to enhance performance across various tasks, including text, image, and audio generation.
🎨 AI Art Can't Be Copyrighted, Rules US Copyright Office
The US Copyright Office has ruled that purely AI-generated works aren't eligible for copyright, reinforcing that human creativity is required for protection. AI-assisted works, where humans make significant modifications, may still qualify, but AI-generated elements remain unprotected. This decision clarifies legal uncertainties for artists using AI while leaving room for future changes if AI tools evolve to offer greater creative control.
🎮 NVIDIA Unveils RTX 5090 & 5080 With DLSS 4
NVIDIA has launched the RTX 5090 and 5080, featuring AI-driven graphics powered by the new Blackwell RTX architecture. These GPUs double frame rates, enhance AI generation, and accelerate rendering. DLSS 4 enables ultra-smooth 4K gaming, pushing frame rates past 200 FPS in top titles. Creators get faster video encoding and better AI tools, while early reviews call the RTX 5090 the most powerful consumer GPU yet.
☢️ AI's Energy Demands Revive California's Nuclear Debate
With AI's soaring power consumption, California lawmakers are reconsidering nuclear energy, including extending Diablo Canyon's life and lifting a ban on new reactors. Tech giants like Microsoft and Google are backing nuclear, sparking debate over small modular reactors as a solution. While nuclear offers carbon-free reliability, high costs and waste concerns remain hurdles. Whether Big Tech's push leads to real policy change is still uncertain.
✏️ POWERED BY RECRAFT V3
Recraft V3 Is SOTA in Image Generation
Recraft V3 sets a new standard in image generation, outperforming all competitors on Hugging Face's Text-to-Image Benchmark by Artificial Analysis. Its main advantages are text generation quality, anatomical accuracy, prompt understanding, and aesthetic quality. Recraft V3 is the only model that can render long passages of legible text within images. It is now available via API with functionality no other tool provides: the ability to generate image sets and vector art.
🦖 TECH MONSTER
Michael Crichton's Warning: When Technology Becomes the Monster
The Recap: Michael Crichton's approach to storytelling, where technology rather than individuals drives the plot, offers a crucial lens for understanding Big Tech and artificial intelligence today. Cal Newport argues that our focus on tech billionaires like Elon Musk and Mark Zuckerberg misses the larger issue: technologies often escape human control, creating unintended consequences far beyond their creators' intentions.
Crichton's editor Robert Gottlieb advised him to strip away character depth in The Andromeda Strain, making technology itself the protagonist, an approach that shaped his later works like Jurassic Park.
Unlike Mary Shelley's Frankenstein, which emphasizes personal tragedy, Jurassic Park is about how technology evolves beyond its makers' grasp, making it a superior cautionary tale.
In Jurassic Park, John Hammond isn't a sinister mastermind but a naive entrepreneur who fails to foresee the chaos he unleashes, mirroring Silicon Valley's reckless innovation.
The obsession with personalities (Musk's erratic behavior, Zuckerberg's "masculine energy") distracts from deeper structural issues with social media and AI.
Twitter/X was designed for mass engagement, but its core mechanics naturally promote outrage and misinformation, problems that persist regardless of ownership.
Like social media, email became overwhelming not because of bad bosses but because its frictionless design encouraged endless communication. AI could follow the same path, becoming unmanageable despite good intentions.
Instead of focusing on tech moguls, we should ask whether these technologies serve us, and if not, how we can "blow them up" metaphorically through regulation, alternative designs, or outright abandonment.
Forward Future Takeaways:
Crichton's stories highlight the power and unpredictability of emerging technologies, but rather than fearing AI as the next uncontrollable force, we should embrace its potential. Unlike Jurassic Park's dinosaurs, AI isn't a rogue entity; it's a tool we actively shape. The real lesson isn't about halting progress but ensuring that AI serves humanity's best interests. Instead of waiting for crises to dictate change, we should proactively build AI systems that are transparent, ethical, and aligned with societal goals. → Read the full article here.
⚠️ CYBER THREATS
How Threat Actors Are Testing the Limits of Generative Models
The Recap: Google's Threat Intelligence Group (GTIG) has released a comprehensive analysis of how government-backed cyber threat actors and information operations (IO) actors attempted to misuse Google's AI-powered assistant, Gemini. While adversaries are experimenting with AI for research, content generation, and coding assistance, the report finds no evidence that AI has fundamentally changed cyber threats, at least not yet.
Threat actors are using Gemini for common cyber activities like reconnaissance, research, and scripting, but they have not developed novel AI-driven attack techniques.
Iranian APT actors were the most active users, leveraging Gemini for reconnaissance, phishing campaigns, vulnerability research, and content manipulation for influence operations.
Chinese APT actors used Gemini to research U.S. military networks, reverse engineer security tools, and troubleshoot scripts for privilege escalation and data exfiltration.
North Korean actors used AI to research jobs, draft cover letters, and generate freelance proposals, likely supporting efforts to place IT workers in Western companies under false identities.
Unlike other nations, Russian APT actors showed minimal use of Gemini, possibly due to security concerns or preference for domestically controlled AI models.
Threat actors attempted basic jailbreaks using publicly available prompts but failed to bypass Gemini's safety controls. AI proved useful for productivity gains but did not enable breakthrough capabilities.
Malicious actors are promoting jailbroken LLMs such as "FraudGPT" and "WormGPT" for phishing and malware development, indicating a growing black market for AI-assisted cybercrime.
Forward Future Takeaways:
While AI is accelerating cyber operations, it has not yet revolutionized them; most threat actors still rely on traditional hacking methods with AI as an efficiency booster. However, as AI models grow more advanced and new agentic systems emerge, the threat landscape is expected to evolve. Google emphasizes the need for robust AI security frameworks, such as its Secure AI Framework (SAIF), to mitigate future risks. The takeaway? AI isn't yet a cyber "superweapon," but proactive defense is crucial to ensure it stays that way. → Read the full article here.
🤐 AI CENSORSHIP
DeepSeek's AI Chatbot Struggles with Censorship, In Real-Time
The Recap: DeepSeek, the Chinese AI chatbot, provides glimpses of censored information before abruptly deleting its own responses, revealing the delicate balancing act it performs under China's strict internet controls. While occasionally offering insights that are more open than domestic social media, the chatbot ultimately adheres to government narratives, self-censoring in both English and Chinese, with the latter being more restrictive.
When asked about China's "Zero Covid" policies, DeepSeek initially provided a nuanced response, mentioning protests and public dissatisfaction, before wiping its own answer and replacing it with a vague disclaimer.
Questions about Xi Jinping or other high-ranking Chinese officials resulted in automatic refusals, with no biographical details provided, while lower-ranking figures received partial responses, sometimes deleted mid-answer.
Inquiries about the war in Ukraine led to two different approaches: in English, the chatbot acknowledged Russia's "full-scale invasion," but in Chinese, it softened the language to align with China's official stance.
The chatbot exhibited real-time self-censorship, modifying answers to be progressively less sensitive when asked the same question repeatedly.
Certain censorship workarounds, like using letter substitutions to reference the Tiananmen Square Massacre, were possible initially but were quickly patched by DeepSeek's developers.
On the topic of China's internet censorship, DeepSeek openly acknowledged state regulations but framed them as necessary for "cybersecurity and social stability," carefully avoiding the term "censorship."
Despite China's Great Firewall usually limiting censorship to domestic users, DeepSeek appears to enforce the same content restrictions internationally, potentially exporting Beijing's digital control methods abroad.
Forward Future Takeaways:
DeepSeek's self-censorship highlights the tension between AI's capacity for open-ended reasoning and China's strict control over information. The bot's behavior suggests an evolving struggle, with developers actively closing loopholes as they arise. If DeepSeek gains global traction, it could spread China's controlled narratives beyond its borders, reinforcing the growing intersection of AI and state propaganda. → Read the full article here.
🛰️ NEWS
Looking Forward
🎵 Riffusion's AI Music Tool Is Free: The startup's new AI, Fuzz, creates personalized songs from text, audio, or images. Unlike competitors, Riffusion offers its platform completely free worldwide.
⚛️ OpenAI's AI to Aid Nuclear Research: OpenAI will provide its models to U.S. National Labs for nuclear security and scientific projects. The partnership, involving Microsoft, includes deployment at Los Alamos.
🔓 DeepSeek AI Exposed Sensitive Data: The rising Chinese AI startup left a database open, leaking secret keys, chat logs, and backend details. Security experts warn of major risks in AI data protection.
📽️ VIDEO
Forward Future's First CES!
Matt shares his first CES experience, spotlighting NVIDIA's RTX 5090, AI-powered gaming, and cutting-edge autonomous technology. He explores NVIDIA's open-source projects, advanced robotics, and AI-driven video editing tools. Get the full scoop in Matt's latest video! 👇
🤔 FRIDAY FACTS
Deep Blue Became the First AI To Defeat a Reigning World Chess Champion
The first AI to defeat a reigning world chess champion was IBM's Deep Blue, which beat Garry Kasparov on May 11, 1997.
Deep Blue was a specialized supercomputer capable of evaluating 200 million positions per second, a level of brute-force calculation no human could match. Kasparov, widely regarded as one of the greatest chess players in history, had previously defeated Deep Blue in 1996. But after an upgrade, the machine came back stronger and won the rematch 3.5-2.5.
The match was a turning point in AI history, marking the first time a machine proved it could outplay the best human in a game once thought to require intuition and creativity. Today's AI chess engines, like Stockfish and AlphaZero, make Deep Blue look primitive, but in 1997 it was the checkmate heard 'round the world. ♟️
🔥 FF INTEL
Got a Hot Tip or Burning Question?
We're all ears. Drop us a note, and we'll feature the best reader insights, questions, and scoops in future editions. Let's build this thing together.
🔵 Hit the button below and spill the tea!
🤝 CONNECT
Stay in the Know
Thanks for reading today's newsletter. See you next time!
The Forward Future Team
🧑‍🚀 🧑‍🚀 🧑‍🚀 🧑‍🚀