- FF Daily
- Posts
- 🧑🚀 The Illusion of Openness: Why Many AI Systems Aren't Truly Open
🧑🚀 The Illusion of Openness: Why Many AI Systems Aren't Truly Open
ChatGPT's citation issues spark concern, Meta invests in undersea cables, Australia bans social media for teens, Honda's robot supports young cancer patients, and AI masks boost privacy.
Good morning, and welcome to the Monday Edition! ChatGPT’s citation quirks are rattling publishers, Meta is going full Aquaman with a $10 billion undersea cable splurge, and Australia’s government is showing TikTok-loving teens that they mean business. Grab your coffee (or glühwein in a commemorative boot mug you definitely don’t need), and let’s dive in!
Inside Today’s Edition:
Top Stories 🗞️
FF Original The Concept of the Singularity | Part 2 👾
Most 'Open' AI Models Are Anything But 🔐
Research Generative Agents Achieve 85% Simulation Accuracy 🔬
FF Video Someone Won $50K by Making AI Hallucinate (Wild Story) 📽️
AI Tools Simplifying Podcasts, Prompts, Videos 🧰
🗞️ ICYMI RECAP
Top Stories to Know
📰 ChatGPT's Citation Errors Alarm Publishers
A Columbia Journalism School study exposes ChatGPT's unreliable source citations, with 76.5% found partially or entirely incorrect. Even licensed publishers face misrepresentation, revealing limited control over content use. The AI's occasional attribution to plagiarized sources raises ethical concerns, posing risks to publishers' reputations.
🪸 Meta's $10B Undersea Cable to Boost Connectivity
Meta is investing $10 billion in a private undersea fiber-optic cable to enhance connectivity across key regions like the U.S., India, and Australia, while avoiding sensitive geopolitical areas. Following Alphabet’s lead, Meta aims to handle rising user traffic and improve service quality. Undersea cables, vital to the internet, face risks from global conflicts, with more details expected by 2025.
🚫 Australia Bans Social Media for Under-16s
Australia’s new law prohibits social media use for under-16s, with platforms like TikTok and Instagram facing AU$50 million fines for violations. Advocates call it a bold step against online harm, but critics warn of privacy risks and unintended consequences, including isolating children and driving them to unsafe platforms. Companies have a year to enforce the policy, sparking global debate.
🤖 Honda's Robot Haru Brightens Kids' Hospital Stays
Honda’s AI-powered robot Haru is revolutionizing pediatric oncology care, increasing engagement by 95% at a Spanish hospital. Haru supports young cancer patients emotionally, connects them to classrooms, and assists neuropsychologists with cognitive assessments. Following successful trials, Honda aims to expand Haru's role, introducing more units by 2027.
💡 Pathway Raises $10M for Real-Time AI
Pathway secures $10M to advance "Live AI," enabling real-time learning and adaptability for enterprises. Backed by TQ Ventures, its technology integrates live data streams, overcoming static AI limitations. Early adopters include NATO and La Poste. Led by CEO Zuzanna Stamirowska and AI pioneers, Pathway aims to challenge Cohere and Palantir in redefining enterprise AI solutions.
🎭️ Georgia Tech's "Chameleon" AI Unveils Digital Masks
"Chameleon" is an AI-driven tool that generates personalized digital masks to shield individuals from unauthorized facial recognition. The single, user-specific mask can be applied to all personal images, disrupting recognition systems while preserving image quality. This breakthrough provides a powerful safeguard against privacy invasions like unauthorized data collection and cyber threats.
📔 AI-Generated College Essays Go Unnoticed by Educators
A recent study highlights a significant challenge in academic integrity: 94% of AI-generated college essays evade detection by educators. This finding underscores the limitations of current AI detection tools and the growing ease with which students can utilize AI for assignments without being caught.
☝️ POWERED BY LANGTRACE
Go from shiny demos to reliable AI products that delight your customers with Langtrace. Check out and star our GitHub for the latest updates and join the community of innovators.
20% discount for Langtrace here: https://langtrace.ai/matthewberman
🔒️ ‘OPEN’ MODELS
Most 'Open' AI Models Are Anything But, Creating Challenges for Transparency and Access
The Recap: Many AI systems are marketed as "open," but a closer look reveals that most are anything but. From limited access to code and data to restrictions imposed by tech giants, the illusion of openness is shaping AI's future in troubling ways—and undermining its potential to empower a broader audience.
Highlights:
True openness in AI requires sharing not just code but datasets, model weights, and training methods, which is rarely the case.
Misuse of "open" terminology by tech giants, such as Meta's Llama 3, creates a misleading perception of accessibility.
Concentration of control over AI systems by large companies centralizes power and limits broader contributions to AI innovation.
Emerging regulations like the EU's AI Act aim to increase transparency and prevent discriminatory AI practices.
Genuine open-source AI could democratize technology, but current trends risk reinforcing existing inequalities.
Forward Future Takeaways:
As AI continues to influence society, redefining and safeguarding the concept of "openness" is crucial to fostering innovation and equitable access. Policymakers, developers, and the public must scrutinize claims of openness to ensure transparency and fairness in AI's development and deployment. This effort is essential to prevent the monopolization of AI by a few and to harness its potential for societal benefit. → Read the full article here.
👾 FORWARD FUTURE ORIGINAL
The Concept of the Singularity | Part 2
When Will the Singularity Be Reached?
“The underlying assumption that the Singularity will occur when it can occur is rooted in technological evolution, which is generally irreversible and tends to accelerate. This view is influenced by the broader evolutionary paradigm, which holds that a new, powerful capability, such as cognition in humans, will eventually be fully exploited.”
Different scientists and experts have different predictions about when the technological singularity - the point at which artificial intelligence surpasses human intelligence and potentially evolves on its own - might occur. The already mentioned Ray Kurzweil, futurist and former Google engineer, predicts the Singularity in his book “The Singularity is nearer” for the year 2045. His assessment is based on the exponential growth of information technology, which he sees as being substantiated in particular by Moore's Law.
As early as 1993, Vernor Vinge estimated that the Singularity could occur within the next 30 years, which would point to the year 2023. However, he emphasized the uncertainty of such predictions, as the development of AI can vary greatly.
Another prominent researcher, Ben Goertzel, Chairman of the OpenCog Foundation, predicts that the Singularity could occur between 2020 and 2040. Nick Bostrom, philosopher and director of the Future of Humanity Institute at the University of Oxford, is somewhat more cautious in his predictions. Although he believes that superintelligence is possible within the next few decades, he emphasizes the uncertainties and potential risks associated with this breakthrough. Andrew Ng, AI expert and co-founder of Google Brain, expressed skepticism about short-term singularity predictions and points out that current AI systems are a long way from achieving human intelligence in its entirety.
Elon Musk, on the other hand… Continue reading here.
🔬 RESEARCH PAPERS
Simulating Human Behavior: Generative Agents Match Attitudes with 85% Accuracy
Researchers have developed an innovative architecture that uses generative agents to simulate the behaviors and attitudes of 1,052 real individuals. By integrating large language models with qualitative interviews, these agents achieved 85% accuracy in replicating responses on the General Social Survey, comparable to individuals' consistency in self-reporting over time.
The system also accurately predicts personality traits and experimental outcomes, while reducing bias across racial and ideological groups compared to demographic-based models. This breakthrough lays the groundwork for advanced tools in policymaking and social science research, offering a new lens to study human behavior individually and collectively. → Read the full paper here.
📽️ VIDEO
Someone Won $50K by Making AI Hallucinate (Wild Story)
An AI agent, designed to manage an Ethereum wallet and explicitly instructed never to transfer funds, was outmaneuvered after 481 failed attempts. On the 482nd try, a user crafted a sophisticated prompt to bypass safeguards by initiating a "new session," redefining the AI's core functions, and tricking it into executing a transfer of $47,000. The incident highlights vulnerabilities in AI models and proposes red-teaming games as a novel way to stress-test guardrails while incentivizing participants. Get the full scoop in our latest video! 👇
🧰 TOOLBOX
AI Tools Simplifying Podcasts Summaries, Prompt Crafting, and Video Generation
Podwise | Podcast Summarization: Podwise uses AI to summarize podcasts, extract key insights, and integrate seamlessly with Notion and Obsidian.
Prompt Perfekt | Perfect AI Prompts: Prompt Perfekt refines AI prompts with real-time feedback, web searches, and smart suggestions, boosting clarity.
RenderLion | Video Generator: RenderLion creates instant, customizable videos from text or images, perfect for ads, brands, and creators.
🤠 THE DAILY BYTE
Optimus' Latest Skill: Catching a Ball
Let’s give Optimus a hand for catching ball!
— Elon Musk (@elonmusk)
2:47 PM • Nov 28, 2024
🗒️ FEEDBACK
Help Us Get Better
What did you think of today's newsletter? |
Reply to this email if you have specific feedback to share. We’d love to hear from you.
CONNECT
Stay in the Know
Follow us on X for quick daily updates and bite-sized content.
Subscribe to our YouTube channel for in-depth technical analysis.
Prefer using an RSS feed? Add Forward Future to your feed here.
Thanks for reading today’s newsletter. See you next time!
The Forward Future Team
🧑🚀 🧑🚀 🧑🚀 🧑🚀
Reply