🤔 FRIDAY FACTS
Who coined the term “artificial intelligence”?
Today, AI is all the rage, but the concept has actually been around for decades. One scientist even believed building intelligent machines would be quick and easy. Spoiler: it wasn’t!
Stick around for the surprising origin story below. 👇️
🗞️ YOUR DAILY ROLLUP
Top Stories of the Day
Apple Rolls Out Beta Intelligence Features
Apple’s latest developer betas for iOS, iPadOS, and macOS introduce an enhanced Siri, ChatGPT integration, Image Playground for image generation, and Visual Intelligence, with a focus on privacy opt-ins.
U.S. Warns of China AI Threats
The U.S. warns that China's use of AI poses global security risks, prompting new policies to counter misinformation and population control while offering allies safer AI alternatives.
Qualcomm, Google Partner on AI for Cars
Qualcomm and Google are partnering to bring generative AI to digital cockpits, using Snapdragon and Google Cloud technologies to power in-car voice assistants and real-time updates.
xAI Hiring for AI Agents Initiative
xAI is recruiting engineers in San Francisco and Palo Alto for its Starfleet team, which will develop advanced AI agents capable of handling complex tasks, with a focus on AI-driven autonomy.
NVIDIA CEO Admits AI Chip Design Flaw
NVIDIA has resolved a design flaw in its Blackwell AI chips that caused initial delays and stock concerns. CEO Jensen Huang confirmed full production and strong demand, with significant revenue expected.
White House Urges AI Protection from Adversaries
President Biden's memo directs U.S. agencies to ensure human oversight of AI decisions, guard against foreign espionage, and establish an AI Safety Institute, though its long-term impact remains uncertain.
Perplexity Defends AI Amid Dow Jones Lawsuit
Perplexity has responded to the lawsuit by defending its AI's transparency and source citations and criticizing media resistance to innovation. The company is seeking collaboration with publishers through revenue-sharing models.
📢 LEGAL BATTLES
Former OpenAI Staffer Criticizes Company for Copyright Violations
The Recap:
Former OpenAI researcher Suchir Balaji has accused the company of violating U.S. copyright law by using protected data in its AI models. In a personal blog post, Balaji claims OpenAI's practices harm creators and undermine the internet's economic foundations, and he calls for regulatory intervention.
Suchir Balaji, who worked on GPT-4 at OpenAI, left the company due to ethical concerns over AI's data usage.
He argues that AI-generated outputs frequently infringe upon copyright, failing to meet "fair use" standards.
OpenAI responded, asserting its reliance on publicly available data and adherence to fair use.
The New York Times, currently suing OpenAI, accuses the company of using millions of its articles without permission.
OpenAI faces legal challenges from artists, authors, newspapers, and coders, including notable figures like Sarah Silverman and George R.R. Martin.
The lawsuits highlight growing concerns over AI’s impact on creative industries and its use of copyrighted material.
Despite public confusion, criticism of AI’s business model is mounting from celebrities, legal experts, and ethicists.
Forward Future Takeaways:
Balaji’s whistleblowing adds to a growing chorus of voices calling for regulation of AI companies. As lawsuits pile up, the future of AI development may hinge on how the courts balance innovation with copyright protections. Tighter regulation could reshape how AI models access and use data, impacting both tech giants and content creators. → Read the full article here.
👾 FORWARD FUTURE ORIGINAL
Exploring the Role of LLMs: From Memory to Retrieval-Augmented Generation
This article is a Forward Future Original by guest author Ash Stuart.
So far we have explored how AI, and language models in particular, mimics the activities of the human brain: models can represent meaning by way of a semantic space, and they can generate phrases and sentences through a statistical process. Let's see how it all fits together for practical use in the real world.
A large language model (LLM), as hinted, is like the human brain. In particular, its ability to generate text comes from having been trained on vast amounts of textual data, in recent cases practically everything worthwhile on the internet (who decides what's worthwhile is another matter!).
So when such a model is released, it reflects the training data fed into it up to that point. But let's take a step back. Being a neural net, an LLM is, just like the human brain, a combination of knowledge (a type of memory) and reasoning capabilities (which put that memory to use). → Continue reading here.
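Since the article builds toward retrieval-augmented generation, here is a minimal, illustrative sketch of the basic RAG loop: score a small store of documents against a question, keep the best matches, and fold them into the prompt an LLM would receive. The keyword-overlap scoring and the document list are hypothetical simplifications (a stand-in for embedding similarity), not taken from the article.

```python
# Minimal, illustrative RAG loop (hypothetical simplification):
# 1) score stored documents against the question,
# 2) keep the top matches,
# 3) build a prompt that pairs the retrieved context with the question.

documents = [
    "LLMs are trained on vast amounts of text from the internet.",
    "Retrieval-augmented generation supplies an LLM with external documents at query time.",
    "A semantic space lets a model place words with similar meanings near each other.",
]

def score(question: str, doc: str) -> int:
    """Toy relevance score: count shared lowercase words (stand-in for embedding similarity)."""
    return len(set(question.lower().split()) & set(doc.lower().split()))

def retrieve(question: str, k: int = 2) -> list[str]:
    """Return the k documents that best match the question."""
    ranked = sorted(documents, key=lambda d: score(question, d), reverse=True)
    return ranked[:k]

def build_prompt(question: str) -> str:
    """Combine retrieved context with the question; an LLM call would consume this prompt."""
    context = "\n".join(retrieve(question))
    return f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"

if __name__ == "__main__":
    print(build_prompt("How does retrieval-augmented generation help an LLM?"))
```

The point of the sketch is the division of labor the article describes: the model's trained weights act as memory, while retrieval lets it reason over fresh, external knowledge it was never trained on.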
📽️ VIDEO
Microsoft Bets on AI Agents, IBM and Meta Push Open-Source Advancements
Recent AI developments include Microsoft's big push for autonomous agents, IBM's open-source models, and Meta's new tools for AI image segmentation. Meanwhile, Anthropic and Stability AI introduced significant updates to their models, and creative tools continue to evolve, offering exciting possibilities for users and developers. Get the full scoop in our latest video! 👇
🤔 FRIDAY FACTS
Who coined the term “artificial intelligence”?
The term "artificial intelligence" was coined by computer scientist John McCarthy in 1956. He introduced the term during the Dartmouth Conference, a summer research project at Dartmouth College, which is considered the founding event of AI as a field of study. McCarthy is often referred to as one of the "fathers of AI" for his contributions to the field.