
🧑‍🚀 Meta’s Book Battle, Military AI Risks & OpenAI’s Social Media Gambit

Meta’s copyright fight, military AI risks, OpenAI’s social media plans, Claude’s new research tools, Google’s Veo 2 launch, and deepfake voice detection concerns.

Good morning, it’s Wednesday. Meta tells authors their life's work is basically training kibble, the Pentagon lets chatbots inch closer to military operations, and OpenAI wants to reinvent social media.

In today’s Forward Future Original: A quick guide to file compatibility (everyone’s favorite topic!) across top models.

Plus: our first community spotlight—Astro, our mascot, gets a 3D makeover.

Read on!

🗞️ YOUR DAILY ROLLUP

Top Stories of the Day

OpenAI Ventures into Social Media

🤳 OpenAI Developing Social Platform to Rival X, Report Says
OpenAI is reportedly developing a social media platform akin to X (formerly Twitter), focusing on integrating ChatGPT’s image generation capabilities within a social feed. Currently in early development, it’s unclear whether this will be a standalone app or integrated into ChatGPT. CEO Sam Altman is seeking external feedback on the project. This move could intensify tensions with Elon Musk, X’s owner and OpenAI co-founder, who left OpenAI in 2018.

🚀 Anthropic Adds Research Agent and Google Workspace Integration
Anthropic has rolled out two new capabilities for its AI assistant Claude: an autonomous research tool and integration with Google Workspace. The Research feature allows Claude to run multi-step queries across documents and the web, surfacing detailed answers with citations—moving it closer to a true AI research assistant. Meanwhile, the Workspace integration lets Claude scan Gmail, Calendar, and Docs to automate workflows like meeting prep and doc retrieval.

🎬 Google Integrates Veo 2 AI Video Generator into Gemini Advanced
Google has rolled out its Veo 2 text-to-video AI model to Gemini Advanced subscribers, enabling users to generate eight-second, 720p video clips with cinematic realism directly from text prompts. These videos can be downloaded as MP4 files or shared directly to platforms like TikTok and YouTube. Each clip includes a SynthID watermark to indicate AI generation. Additionally, Google introduced Whisk Animate, a tool that transforms static images into videos using Veo 2, available through Google Labs for Google One AI Premium subscribers.

🤥 Humans Struggle to Detect AI-Generated Deepfake Voices
A recent study reveals that people are alarmingly ineffective at distinguishing between authentic voices and AI-generated deepfake imitations. Test subjects rated fake clips as genuine nearly half the time, highlighting a growing challenge in digital verification. As AI voice synthesis grows more convincing, the inability to spot deepfakes threatens security and trust. In an age where hearing isn't believing, detection tools are urgently needed.

Enjoying Forward Future? Forward this email to a colleague—
it’s one of the best ways to support us.

🧑‍🚀🧑‍🚀🧑‍🚀🧑‍🚀
—Matt and the Team

⚖️ COPYRIGHT

Meta Staff Called 7 Million Pirated Books “Worthless” as Legal Battle Heats Up


The Recap: Newly unsealed court documents reveal Meta’s internal rationale for using more than 7 million pirated books to train its AI model, Llama. The company’s defense hinges on a claim that individual books hold negligible value as training data—an argument central to its fair use stance. The details emerged in Kadrey et al. v. Meta Platforms, one of several major lawsuits testing how copyright law applies to generative AI.

Highlights:

  • Plaintiffs, including authors like Andrew Sean Greer, Junot Díaz, and Sarah Silverman, allege Meta pirated their copyrighted books to train Llama without permission or compensation.

  • Meta argues the use was “highly transformative” and protected under fair use, claiming individual books contributed less than 0.06% to model performance.

  • Internal communications show Meta staff knowingly used datasets from illegal shadow libraries like LibGen and Z-Library, despite recognizing the legal and ethical risks.

  • Researchers described fiction books as “easy to parse” and set aggressive goals to amass long-form content; children’s books like Ramona Quimby, Age 8 were among those scraped.

  • Meta argues that licensing even a single book could undercut its fair use defense, let alone licensing millions.

  • Former Meta lawyer Mark Lemley left the case, citing concerns over the company’s culture and tactics, and later argued the law should prioritize AI outputs, not training inputs.

  • Authors Guild surveys show 96% of writers believe consent and compensation should be required for AI training; many fear being replaced or exploited by AI tools trained on their work.

Forward Future Takeaways:
This case underscores a growing clash between AI innovation and creative rights, with courts now forced to determine whether massive-scale ingestion of copyrighted works can be excused under fair use. Meta’s strategy of devaluing individual books raises profound questions about the ethics of commodifying human expression at scale. As generative AI reshapes industries, the outcome of Kadrey et al. could set a legal precedent for how (and whether) authors are protected in an AI-driven economy. → Read the full article here.

👾 FORWARD FUTURE ORIGINAL

State of the File

When I first started integrating AI models into my workflow, one of the most frustrating roadblocks was figuring out which models could handle which file types. After some trial and error (and quite a few error messages), I've compiled this practical guide to help you navigate the file compatibility landscape across today's leading AI assistants.

Looking at the major players—ChatGPT, Claude, Gemini, and Grok—I've noticed significant differences in their file-handling capabilities that can make or break your productivity depending on your needs. → Continue reading here.
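If you’d rather pre-screen files before uploading them, here’s a minimal Python sketch of the idea: map each assistant to the upload formats it accepts, then check a file’s extension against that map. The extension lists below are placeholder assumptions for illustration only; the actual compatibility matrix lives in the full guide linked above.

from pathlib import Path

# Hypothetical compatibility map -- placeholder values, not the real matrix.
SUPPORTED_UPLOADS = {
    "chatgpt": {".pdf", ".docx", ".csv", ".xlsx", ".png", ".jpg"},
    "claude": {".pdf", ".docx", ".csv", ".txt", ".png", ".jpg"},
    "gemini": {".pdf", ".csv", ".txt", ".png", ".jpg", ".mp4"},
    "grok": {".pdf", ".txt", ".png", ".jpg"},
}

def compatible_assistants(file_path: str) -> list[str]:
    """Return the assistants (per the placeholder map) that accept this file's extension."""
    ext = Path(file_path).suffix.lower()
    return [name for name, formats in SUPPORTED_UPLOADS.items() if ext in formats]

if __name__ == "__main__":
    # Quick demo: route a few sample files to the assistants that can read them.
    for f in ("q3_report.xlsx", "meeting_notes.txt", "demo_clip.mp4"):
        print(f, "->", compatible_assistants(f) or "no match in placeholder map")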

🤗 COMMUNITY SPOTLIGHT

Astro Goes 3D

Taking a quick break from our usual AI programming—today we’re spotlighting one of our community members, Brad Luther, who brought our Astro mascot to life… in 3D. → Continue reading here.

🎖️ MILITARY

Generative AI Moves Deeper Into the Pentagon’s Decision Chain


The Recap: The Pentagon is entering a new phase in its adoption of AI, integrating generative models into military operations from surveillance to strategic decision-making. This shift builds on earlier AI efforts but introduces fresh concerns around oversight, information security, and the role of humans in critical decisions. In this analysis, MIT Technology Review’s James O'Donnell outlines three major unresolved challenges facing this AI militarization.

Highlights:

  • Marines in the Pacific have begun using chatbot-style generative AI to analyze surveillance data and identify threats in real time.

  • This marks a shift from the Pentagon’s 2017-era AI efforts (like Project Maven) to large language models capable of conversational analysis and target suggestion.

  • Experts like Heidy Khlaaf (AI Now Institute) question whether “human in the loop” oversight is realistic when AI processes thousands of data points rapidly.

  • Classification challenges arise as AI enables synthesis of unclassified data into potentially sensitive insights—a process known as “classification by compilation.”

  • RAND engineer Chris Mouton warns that current classification systems are ill-equipped to handle the volume and complexity of AI-generated intelligence products.

  • Companies like Palantir and Microsoft are developing AI tools to help automate classification decisions, including using models trained on sensitive data.

  • A Georgetown CSET report notes a rise in AI being used at higher levels of military decision-making, including operational planning, not just data analysis.

Forward Future Takeaways:
The Pentagon’s adoption of generative AI reflects a broader trend of aligning military tech with civilian advancements—but with much higher stakes. As AI begins to inform real-time military decisions and intelligence assessments, longstanding norms around human oversight, secrecy, and accountability are being stress-tested. Can democratic institutions keep up—or will they fall back on reactive oversight? → Read the full article here.

🔬 RESEARCH

“Vegetative Electron Microscopy”: The AI-Spawned Error Haunting Scientific Literature


A meaningless phrase—“vegetative electron microscopy”—has quietly embedded itself in scientific papers, thanks to a glitch in AI training data traced to a decades-old scanning error and a mistranslation. First appearing in 1950s digitized journals and later reinforced by misinterpreted Farsi, the term found its way into models like GPT-3 and beyond, becoming a “digital fossil” in the AI knowledge base.

Researchers say these phantom artifacts are almost impossible to remove, raising red flags for scientific integrity as AI tools proliferate across academic publishing. The episode underscores a growing concern: not just what AI knows, but what it gets wrong—permanently. → Read the full blog here.

🛰️ NEWS

What Else is Happening


💰 RLWRLD Secures $14.4M: The company is developing a cutting-edge foundation model for robotics, aimed at enhancing precision. This funding marks a pivotal step in revolutionizing how robots interact with their environments.

✨ OpenAI Releases GPT-4.1: Shipped without a safety report, the new API-only model promises stronger coding, instruction following, and long-context performance, a launch that is stirring discussion across the AI industry.

📧 Notion Launches New Email App: Notion has introduced a new email app, aiming to integrate seamlessly with its popular productivity tools. Users can now manage emails and tasks in one cohesive platform.

🎨 AI in Paint & Notepad: Microsoft embeds AI in Windows Paint and Notepad, enabling image generation and smart text completion. These iconic apps now have a tech-savvy upgrade.

🦾 Meta Resumes EU AI Training: Meta has restarted using public data to train AI systems in Europe while navigating new regulations—balancing innovation with privacy concerns.

🕹️ NVIDIA Unveils RTX 5060 Series: Starting at $299, the new RTX 5060 and 5060 Ti (8GB/16GB) launch this month with upgraded performance and competitive pricing.

🧰 TOOLBOX

Effortless Accounting, Accelerated Research, and Instant App Building with AI


⚙️ Ace AI: Automate accounting tasks with AI—handle bookkeeping, invoicing, and reporting effortlessly to save time and reduce errors.

📚 Anara AI: Speed up research with AI-powered literature reviews, paper summaries, citation generation, and database integration.

👨‍💻 BuildShip: Instantly build and deploy full-stack web apps with AI—no boilerplate, just code, databases, and APIs tailored to your prompt.

Want to explore more AI tools?
Browse the Forward Future AI Tool Library

📽️ VIDEO

GPT-4.1 is HERE! OpenAI drops the ultimate coding model

OpenAI’s new API-only model crushes coding, nails instructions, and supports 1M tokens—faster, smarter, and cheaper than ever. Get the full scoop in Matt’s latest video! 👇

🗒️ FEEDBACK

Help Us Get Better

What did you think of today's newsletter?


🤠 THE DAILY BYTE

AI on Spill Patrol: How Robots Sniff Trouble at Sea

That’s a Wrap!

❤️ Love Forward Future? Spread the word & earn rewards! Share your unique referral link with friends and colleagues to unlock exclusive Forward Future perks! 👉 Get your link here.

📢 Want to advertise with Forward Future? Reach 450K+ AI enthusiasts, tech leaders, and decision-makers. Let’s talk—just reply to this email.

📥 Got a hot tip or burning question? Drop us a note! The best reader insights, questions, and scoops may be featured in future editions. Submit here.

🛰️ Want more Forward Future? Follow us on X for quick updates, subscribe to our YouTube for deep dives, or add us to your RSS Feed for seamless reading.

Thanks for reading today’s newsletter—see you next time!

The Forward Future Team

🧑‍🚀 🧑‍🚀 🧑‍🚀 🧑‍🚀
