Fei-Fei Li's $1B AI Startup, Anthropic's New AI Fund, and Meta's Controversial AI Training Practices

Fei-Fei Li launches a $1B startup advancing spatial intelligence, while Anthropic partners with Menlo Ventures on a $100M AI fund. Meta faces backlash over AI training with Instagram posts in Latin America, and tech giants are under scrutiny for using YouTube videos to train AI models without consent. Learn how the latest AI developments and controversies are shaping the industry.

Fei-Fei Li's World Labs Aims to Revolutionize AI's Grasp of 3D Environments

Fei-Fei Li, a prominent AI researcher from Stanford University, has built a billion-dollar startup named World Labs in just four months. The venture, focused on developing "spatial intelligence" so that machines can process visual data the way humans do, has secured substantial funding from investors including Andreessen Horowitz and Radical Ventures, raising about $100 million in its latest round. Li's project aims to enable machines to understand and navigate three-dimensional spaces, an advance that could transform autonomous systems and real-world interactions. World Labs rides the surge in investor interest in AI startups sparked by OpenAI's ChatGPT.

Sponsor

The HP EliteBook 1040 G11, powered by the Intel Core Ultra processor, unlocks AI experiences. Innovative design and real power for real work. Check out the HP EliteBook 1040 G11 today! https://bit.ly/4b5LkH7

  • Anthropic launches $100 million AI fund with Menlo Ventures, ramping up competition with OpenAI - Anthropic, an AI startup, and investor Menlo Ventures are launching a $100 million fund called the Anthology Fund to support early-stage startups. Similar to Apple's partnership with Kleiner Perkins through the iFund, the goal is to encourage the adoption of Anthropic's technology. Menlo Ventures will provide the financial investment, while Anthropic will offer $25,000 in credits for their large language models plus additional resources to the startups. Unlike OpenAI's approach with its own fund, Anthropic isn't seeking equity, focusing instead on establishing a symbiotic relationship where shared learning can enhance its AI offerings.

  • Meta is training its AI with public Instagram posts. Artists in Latin America can’t opt out - On June 2, Latin American artists discovered that Meta had sent a form allowing European users to opt out of their content being used to train AI models, but no such option was provided to users in Latin America. This sparked concerns among artists about the protection of their work and the lack of robust AI regulation and outdated privacy laws in Latin America. Meta's stance is that using publicly available information for AI is a common industry practice. A collective of Spanish-speaking artists demanded fair data collection policies, while Brazil enforced an opt-out option under its privacy laws. Latin American illustrators feel vulnerable and some have considered moving to alternative platforms, though this is not always feasible due to client bases. The situation highlights discrepancies in digital rights and data protection based on geographical location.

  • Apple, Nvidia, Anthropic Used Thousands of Swiped YouTube Videos to Train AI - An investigative collaboration with Proof News revealed that tech giants including Anthropic, Nvidia, Apple, and Salesforce have used data from YouTube videos to train their AI models without permission, potentially conflicting with YouTube's rules. Subtitles from 173,536 YouTube videos across more than 48,000 channels, including popular educational channels and creators like MrBeast and PewDiePie, were compiled into a dataset called YouTube Subtitles. This dataset is part of the Pile, a larger project developed by EleutherAI that also draws on sources such as European Parliament records and Enron Corporation emails. Big tech firms have leveraged the Pile, sometimes acknowledging its use in research, even though it contains biases, slurs, and profanity that raise safety concerns. Creators whose content was used describe feelings of theft and injustice and stress the need for compensation, especially as AI models could threaten their livelihoods. Debates over fair use, permission, and the ethics of training on content without consent remain unresolved.

  • Apple says its OpenELM model doesn't power Apple Intelligence amid YouTube controversy - Apple confirmed that its OpenELM model, an open-source machine learning model released in April, is not utilized in any AI or machine learning features of Apple Intelligence. OpenELM was created for research purposes and contributes to the advancement of open-source language model development. Although the model incorporated a dataset derived from YouTube subtitles, this dataset does not power Apple Intelligence features. Apple has clarified that Apple Intelligence models are trained on licensed and publicly available data, not on the OpenELM model, which is available on their Machine Learning Research website. Furthermore, Apple has no intention of developing new versions of OpenELM. The "YouTube Subtitles" dataset, used by several tech companies, is part of "The Pile" by EleutherAI.

  • Microsoft Investigated by UK Over Ex-Inflection Staff Hires - The UK's Competition and Markets Authority (CMA) has initiated an antitrust probe into Microsoft's investment in Inflection AI, focusing on the hiring of former Inflection employees. This investigation marks the CMA's continued scrutiny of Big Tech's influence in the AI sector, highlighting concerns over potential market control through strategic investments and hiring practices. Microsoft paid Inflection $650 million for AI software licensing and hired much of its staff, including key figures like Mustafa Suleyman and Karén Simonyan. Microsoft maintains that such talent acquisition promotes competition and should not be considered a merger.

  • Internal Disney Communications Leaked Online After Hack - An anonymous hacking group named Nullbulge has leaked data from Disney's internal Slack channels, exposing discussions about ad campaigns, technology, and interview candidates. The leak, motivated by the group's stance on Disney's handling of artist contracts and use of AI, includes files dating back to 2019 and information from thousands of channels. Disney is currently investigating the breach, which highlights growing tensions in the entertainment industry over AI advancements and artist rights. Nullbulge accessed the data through a compromised Disney software development manager's computer.

  • Meta Won't Offer Future Multimodal AI Models in EU - Meta has decided not to offer its upcoming multimodal AI models to European Union customers due to regulatory uncertainties, setting the stage for a potential conflict with EU regulators. The company plans to release a multimodal Llama model soon, but it will be withheld from the EU, although a larger, text-only version of Llama 3 will be available. This decision reflects broader tensions between U.S. tech giants and European regulators, as Meta faces challenges complying with the EU's GDPR while using data from European users for training AI models. Despite similar data protection laws in the UK, Meta does not face the same regulatory issues there.

  • AI Identifies Three Parkinson’s Subtypes - Researchers at Weill Cornell Medicine applied machine learning to classify Parkinson's disease into three subtypes—Inching Pace, Moderate Pace, and Rapid Pace—based on their progression rates. Each subtype is associated with unique genetic and molecular patterns, which could influence tailored treatment strategies. Through deep learning analysis of extensive clinical data, they identified potential markers, including cerebrospinal fluid ratios and brain atrophy specific to each subtype. This research emphasizes the heterogeneity of Parkinson's disease and reinforces the push towards individualized medicine, potentially revolutionizing diagnosis and treatment for patients. The study was published in npj Digital Medicine and involved collaborative efforts across multiple institutions.

  • Fujitsu and Cohere launch strategic partnership and joint development to provide generative AI for enterprises - Fujitsu announced a strategic partnership with, and investment in, Cohere Inc., an enterprise AI company focused on security and data privacy, to enhance Japanese-language AI capabilities for enterprises. The two will jointly develop a language model named Takane, based on Cohere's advanced LLM Command R+, to be offered through Fujitsu Kozuchi with a focus on industries requiring high-security solutions. Fujitsu's AI technologies, including knowledge-graph-extended RAG and generative AI auditing for compliance, will integrate with Takane to meet enterprise needs. The partnership aims to drive AI adoption and digital transformation globally while promoting sustainability and trust in society through innovation.

  • Google chief scientist Jeff Dean: AI needs ‘algorithmic breakthroughs,’ and AI is not to blame for brunt of data center emissions increase - Google has acknowledged a 13% increase in emissions from its data centers in 2023, attributed to heightened AI usage. However, Google's chief scientist, Jeff Dean, disputes that AI is the main cause of the growth. He says Google still aims to run on 100% clean energy by 2030 and that sourcing from clean providers will significantly raise its carbon-free energy percentage. Dean emphasized Google's focus on efficiency and on careful attribution of AI's role in data center energy consumption. At the Brainstorm Tech conference, Dean also discussed the evolution of AI, citing Google's advances and its caution in deploying new technologies like Project Astra, which aims to create a "universal AI agent." He added that algorithmic breakthroughs beyond scaling data and compute are needed to improve models' factuality and reasoning.

  • China Puts Power of State Behind AI—and Risks Strangling It - China is leveraging state resources to boost its AI industry, helping companies like Baidu and SenseTime compete with U.S. counterparts. However, stringent government regulations on political content are stifling innovation and imposing heavy compliance burdens on AI developers. The state-driven approach, characterized by extensive subsidies and data compilation, risks inefficiencies and biases, while U.S. export restrictions further limit access to essential technology. As China attempts to develop homegrown solutions and capitalize on its strengths in specific sectors, the balance between control and creativity remains precarious.

  • Trump allies draft AI order to launch ‘Manhattan Projects’ for defense - Allies of former President Donald Trump are drafting an AI executive order that advocates "Manhattan Projects" for military technology and a rollback of regulations deemed burdensome, in contrast with the Biden administration's emphasis on safety testing AI systems. The plan, reviewed by The Washington Post, proposes industry-led agencies to vet AI models and to strengthen cybersecurity against foreign threats. The initiative reflects a shift in political support in Silicon Valley, where some tech leaders now back a second Trump administration for its perceived support of innovation and skepticism of current regulatory frameworks. The America First Policy Institute, which involves ex-Trump officials, emphasizes that the draft is not official policy, while tech companies with defense contracts, including Anduril, Palantir, and Scale, stand to benefit from increased military AI spending. The Trump campaign, keeping its distance from the plan, stresses that official policy comes only from Trump or authorized spokespeople.

  • AI Startup Tied to Fake Biden Robocall Aims to Combat Misuse - ElevenLabs, an AI startup known for its voice cloning technology, is partnering with Reality Defender, a deepfake detection company, to address the misuse of AI during the election year. This collaboration comes after concerns that ElevenLabs' technology was used to create a deepfake audio of President Joe Biden. The partnership will allow Reality Defender to access ElevenLabs’ data and models, enhancing its detection capabilities, while ElevenLabs will use Reality Defender’s tools to strengthen its safeguards. The company has also introduced features to block voices of political figures and verify personal voices to prevent misuse.

  • On AI, New UK Gov’t to Work on ‘Appropriate’ Rules for ‘Most Powerful’ Models and Beef Up Product Safety Powers - The UK's new Labour government plans to develop "appropriate legislation" for the most powerful AI models, but has not committed to an AI bill yet. This approach reflects concerns over regulatory clarity, in contrast to the EU's established AI regulatory framework. Labour's manifesto includes commitments to binding regulations for powerful AI models and a ban on sexually explicit deepfakes. The government also aims to enhance product safety laws to address new technological risks, including AI, and to create a National Data Library and a Regulatory Innovation Office to keep pace with tech developments. Additionally, plans for a Digital Information and Smart Data bill and a Cyber Security and Resilience bill aim to modernize data protection and bolster defenses against cyberattacks on critical services.

Awesome Research Papers

  • SPREADSHEETLLM: Encoding Spreadsheets for Large Language Models - The paper introduces SPREADSHEETLLM, an innovative method designed to enhance large language models' (LLMs) capabilities in understanding and reasoning with spreadsheets. Initially, a basic serialization approach was tested but found impractical due to token constraints. To address this, the authors developed SHEETCOMPRESSOR, an advanced encoding framework with structural-anchor-based compression, inverse index translation, and data-format-aware aggregation, improving performance significantly. The fine-tuned model with SHEETCOMPRESSOR achieved a 78.9% F1 score, surpassing existing models by 12.3%, and demonstrated effectiveness in various spreadsheet tasks, including a challenging new spreadsheet QA task. A toy sketch of the inverse-index idea appears after this list.

  • NeedleBench: Can LLMs Do Retrieval and Reasoning in 1 Million Context Window? - NeedleBench, a framework designed to evaluate the long-context capabilities of large language models (LLMs), is introduced, featuring tasks of varying lengths and depths to test information retrieval and reasoning. It assesses open-source LLMs on their ability to discern and reason with key information in bilingual long texts. The Ancestral Trace Challenge (ATC) is presented to further challenge LLMs with complex reasoning in lengthy contexts. Results indicate significant room for improvement in real-world applications, with difficulties observed in complex reasoning tasks. A miniature version of the needle setup is sketched after this list.

  • Beyond Aesthetics: Cultural Competence in Text-to-Image Models - The paper presents a framework to assess the cultural competence of Text-to-Image (T2I) models by examining their cultural awareness and diversity. It introduces CUBE, a benchmark designed specifically for this purpose, featuring cultural artifacts from eight countries across categories such as cuisine, landmarks, and art. CUBE comprises two components: CUBE-1K, a set of high-quality prompts assessing cultural awareness, and CUBE-CSpace, a broader dataset for evaluating cultural diversity. The framework adopts the Vendi score to gauge cultural diversity. Initial evaluations indicate that current T2I models have significant shortcomings in cultural awareness, highlighting the need for more culturally inclusive models. A sketch of the Vendi score computation follows after this list.

  • MUSCLE: A Model Update Strategy for Compatible LLM Evolution - Large Language Models (LLMs) are frequently updated, often creating challenges for user adaptability and compatibility with previously learned model behavior. The problem extends to downstream task models that depend on LLMs and suffer instance regressions when updates occur. The research highlights the need for compatibility metrics between model versions and proposes a strategy to reduce inconsistencies during updates. A new training approach reduces incorrect predictions after updates, cutting negative flips by up to 40%. The implications span both generative and discriminative tasks, aiming for smoother transitions between model versions. A sketch of the negative-flip metric follows after this list.

  • Sibyl: Simple yet Effective Agent Framework for Complex Real-world Reasoning - The abstract introduces Sibyl, a framework for enhancing large language model (LLM) agents to better handle complex reasoning tasks. Sibyl integrates a global workspace for knowledge management, inspired by Global Workspace Theory, and employs a multi-agent debate jury for self-refinement, based on the Society of Mind Theory. It emphasizes scalable design through reentrancy and aims for seamless integration into other LLM applications. Experimental results using the GAIA benchmark with GPT-4 show that Sibyl achieves state-of-the-art performance, indicating its potential to solve real-world reasoning problems more effectively than current LLM agents.

  • Prover-Verifier Games Improve Legibility of Language Model Outputs - Researchers at OpenAI have developed Prover-Verifier Games to enhance the legibility of text generated by strong language models, making it easier for weak models and humans to verify. This approach addresses the issue where optimizing models solely for correct answers results in solutions that are harder to understand. By training advanced models to create verifiable solutions, they found that human evaluators could more accurately assess the outputs, reducing errors. This method, balancing correctness and clarity, could be crucial in making AI applications more trustworthy and effective, especially in complex tasks like solving math problems.
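
The inverse-index idea from SHEETCOMPRESSOR is easy to see in miniature: rather than serializing every cell, map each distinct value to the addresses that contain it, so repeated values are encoded once and empty cells vanish. The sketch below is a toy illustration of that idea, not the paper's actual encoding format.

```python
# Toy sketch of inverse-index translation for spreadsheet encoding:
# group cell addresses by value so repeated values are stored once.
# The output format is illustrative, not SPREADSHEETLLM's actual encoding.
from collections import defaultdict

def inverse_index_encode(cells: dict[str, str]) -> dict[str, list[str]]:
    """Map each distinct non-empty value to the cells that contain it."""
    index: dict[str, list[str]] = defaultdict(list)
    for address, value in cells.items():
        if value != "":              # empty cells are dropped entirely
            index[value].append(address)
    return dict(index)

sheet = {"A1": "Region", "B1": "Sales", "A2": "North", "B2": "100",
         "A3": "South", "B3": "100", "A4": "", "B4": ""}
print(inverse_index_encode(sheet))
# {'Region': ['A1'], 'Sales': ['B1'], 'North': ['A2'], '100': ['B2', 'B3'], 'South': ['A3']}
```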
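
NeedleBench's retrieval tasks build on the classic needle-in-a-haystack setup, which is simple to reproduce in miniature: bury a key fact at a chosen depth in filler text, then ask the model to retrieve it. The prompt format and filler below are illustrative, not NeedleBench's own.

```python
# Toy needle-in-a-haystack probe: insert a "needle" fact at a relative
# depth within long filler text and ask the model to retrieve it.
def build_needle_prompt(filler: str, needle: str, depth: float, question: str) -> str:
    """Place `needle` at a relative depth in [0, 1] of `filler`."""
    cut = int(len(filler) * depth)
    haystack = filler[:cut] + " " + needle + " " + filler[cut:]
    return f"{haystack}\n\nQuestion: {question}\nAnswer:"

filler = "The sky was clear that day. " * 2000       # stand-in for a long context
needle = "The secret code for the vault is 7421."
prompt = build_needle_prompt(filler, needle, depth=0.5,
                             question="What is the secret code for the vault?")
print(len(prompt), "characters; needle buried at 50% depth")
```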
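
The Vendi score used for the diversity evaluation has a compact definition: the exponential of the Shannon entropy of the eigenvalues of a normalized similarity matrix, so it ranges from 1 (all items identical) to n (all items fully distinct). The sketch below uses random unit vectors as stand-in embeddings; in practice they would come from an image or text encoder.

```python
# Minimal Vendi score: exp(entropy of eigenvalues of K/n), where K is an
# n x n similarity matrix with ones on the diagonal.
import numpy as np

def vendi_score(K: np.ndarray) -> float:
    n = K.shape[0]
    eigvals = np.linalg.eigvalsh(K / n)      # eigenvalues of K/n sum to 1
    eigvals = eigvals[eigvals > 1e-12]       # drop numerical zeros
    return float(np.exp(-np.sum(eigvals * np.log(eigvals))))

# Stand-in embeddings: 10 random unit vectors, cosine-similarity kernel.
rng = np.random.default_rng(0)
X = rng.normal(size=(10, 64))
X /= np.linalg.norm(X, axis=1, keepdims=True)
print(vendi_score(X @ X.T))                  # between 1 and 10
```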
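
The negative-flip metric at the heart of MUSCLE is easy to state in code: the fraction of examples the old model answered correctly that the updated model now gets wrong. The labels and predictions below are placeholders.

```python
# Negative flip rate: share of examples where the old model was right
# but the updated model is wrong. Data below is a made-up illustration.
def negative_flip_rate(labels, old_preds, new_preds) -> float:
    flips = sum(1 for y, o, n in zip(labels, old_preds, new_preds)
                if o == y and n != y)
    return flips / len(labels)

labels    = [1, 0, 1, 1, 0, 1]
old_preds = [1, 0, 1, 0, 0, 1]   # old model: 5/6 correct
new_preds = [1, 0, 0, 1, 0, 1]   # new model: 5/6 correct, but one regression
print(negative_flip_rate(labels, old_preds, new_preds))  # 1/6 ≈ 0.167
```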

Codestral Mamba - Codestral Mamba is a newly released code model from Mistral AI, the creators of the Mixtral family, built on the Mamba architecture and focused on code productivity. Unlike Transformer models, it offers linear-time inference and can theoretically handle infinitely long sequences. Free to use and modify under the Apache 2.0 license, Codestral Mamba stands out for its rapid response rate even with extensive inputs, making it ideal as a local code assistant. It demonstrates in-context retrieval capabilities on sequences up to 256k tokens and measures up to state-of-the-art Transformer models. The model can be deployed via the mistral-inference SDK or TensorRT-LLM, with future support planned in llama.cpp, and the raw weights are downloadable from HuggingFace. Codestral Mamba, with over 7 billion parameters, is also testable on la Plateforme, in contrast to the commercially or community licensed Codestral 22B.
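
For readers who want the raw weights, a minimal sketch with huggingface_hub follows; the repo id is an assumption based on Mistral's naming conventions, so verify it on the Hub before running.

```python
# Hedged sketch: fetch the Codestral Mamba weights from Hugging Face.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="mistralai/Mamba-Codestral-7B-v0.1",  # assumed repo id; check the Hub
)
print("weights downloaded to", local_dir)
```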

MathΣtral - Mathstral is a new contribution to the scientific community, designed to tackle complex mathematical problems that require multi-step logical reasoning. Developed in collaboration with Project Numina, Mathstral is a Mistral 7B derivative focused on STEM fields, scoring 56.6% on MATH and 63.47% on MMLU, an improvement over Mistral 7B on the latter. Mathstral illustrates the performance/speed tradeoff, and its results can be improved further through inference-time computation and related methodologies. It is an instructed model whose weights are available on HuggingFace, ready for use or fine-tuning with the provided tools, mistral-inference and mistral-finetune.

Mistral NeMo - Mistral NeMo, a 12B AI model developed in collaboration with NVIDIA, offers a 128k-token context window and state-of-the-art reasoning, world knowledge, and coding capabilities in its size category. Its standard architecture makes it a drop-in replacement for systems using Mistral 7B. Licensed under Apache 2.0, the model supports FP8 inference thanks to quantization-aware training. It outperforms Gemma 2 9B and Llama 3 8B and boasts strong multilingual capabilities across major languages. Mistral NeMo uses Tekken, an efficient tokenizer covering 100+ languages that is particularly effective at compressing source code and several languages.

SmolLM - blazingly fast and remarkably powerful - SmolLM is a new series of compact language models with variations at 135M, 360M, and 1.7B parameters, trained on SmolLM-Corpus—a meticulously curated high-quality training dataset. These models are capable of running on local devices, thus enhancing user privacy and reducing operational costs. SmolLM-Corpus features diverse data sources like Cosmopedia v2 (synthetic textbooks and stories), Python-Edu (educational Python code samples), and FineWeb-Edu (educational web samples). Reported evaluations show that SmolLM models outperform competitors in their size categories on benchmarks for common sense reasoning and world knowledge.
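
A minimal way to try the smallest variant locally is via transformers; the checkpoint name below is an assumption based on the release and should be verified on the Hugging Face Hub.

```python
# Hedged sketch: run the 135M SmolLM locally with transformers.
from transformers import AutoModelForCausalLM, AutoTokenizer

checkpoint = "HuggingFaceTB/SmolLM-135M"   # assumed hub id for the 135M model
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint)

inputs = tokenizer("The capital of France is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```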

GPT-4o Mini: Advancing Cost-Efficient Intelligence - OpenAI has introduced GPT-4o mini, a highly cost-efficient small model designed to broaden AI application accessibility. Scoring 82% on MMLU and outperforming GPT-3.5 Turbo, it is priced at 15 cents per million input tokens and 60 cents per million output tokens. This model supports text and vision inputs and excels in reasoning, math, coding, and multimodal tasks. GPT-4o mini's enhanced legibility and verifiability make it suitable for diverse applications, from customer support to complex data processing. Safety measures, including RLHF and an instruction hierarchy method, ensure reliable and secure outputs. Available via various APIs, GPT-4o mini aims to make AI integration more affordable and efficient for developers globally.
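
A short sketch of calling the model through the official openai Python client, with the cost arithmetic at the quoted prices (15 cents per million input tokens, 60 cents per million output tokens); the prompt is arbitrary.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Summarize RLHF in one sentence."}],
)
print(response.choices[0].message.content)

# Cost at the quoted prices: $0.15 / 1M input tokens, $0.60 / 1M output tokens.
usage = response.usage
cost = usage.prompt_tokens * 0.15 / 1e6 + usage.completion_tokens * 0.60 / 1e6
print(f"request cost: ${cost:.6f}")
```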

H2O-Danube3 - The H2O-Danube3 project introduces a family of small language models, H2O-Danube3-4B (4 billion parameters) and H2O-Danube3-500M (500 million parameters). The models, trained primarily on English-language tokens, underwent a three-stage pre-training process on high-quality web data, followed by supervised tuning for conversational contexts. They show strong performance across academic, conversational, and fine-tuning benchmarks. Remarkably, H2O-Danube3's streamlined design allows efficient operation on smartphones, offering quick on-device inference.

Introducing Eureka Labs - Eureka Labs is creating an AI-native educational platform aiming to revolutionize how individuals learn. Recognizing the scarcity of expert educators who are both knowledgeable and accessible, the company envisions a symbiotic relationship between teachers and AI Teaching Assistants to facilitate learning on a massive scale. Their inaugural offering, LLM101n, is an undergraduate course focused on training AI, mirroring the capabilities of the AI Teaching Assistant. Eureka Labs is currently developing this course which will be offered online, with plans for both virtual and in-person cohorts. Their ultimate goal is to harness AI to enhance human potential and make comprehensive education universally accessible.

Llamafile: bringing AI to the masses with fast CPU inference - Llamafile, an open-source project by Mozilla, aims to democratize AI by enabling fast CPU inference, transforming weights into executable programs that run on any operating system without installation. This approach reduces dependency on expensive GPUs and enhances performance, making AI more accessible and efficient on various hardware, from Raspberry Pi to high-end CPUs. Llamafile ensures privacy and security by running locally without network access and supports multiple operating systems through a unique embedding method. Supported by significant community contributions, Llamafile fosters widespread use and collaboration, aligning with Mozilla's mission to promote open-source AI development through initiatives like the Mozilla Builders program.
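
Because llamafile's server mode exposes an OpenAI-compatible endpoint (by default on localhost:8080), the standard openai client can point at it; the model name below is a placeholder, since the server simply serves whichever model the llamafile embeds.

```python
from openai import OpenAI

# Assumes a llamafile is already running locally in server mode.
client = OpenAI(base_url="http://localhost:8080/v1", api_key="not-needed")
response = client.chat.completions.create(
    model="local",   # placeholder; llamafile serves its embedded model
    messages=[{"role": "user", "content": "Hello from a local model!"}],
)
print(response.choices[0].message.content)
```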

Meet Einstein Service Agent: Salesforce's Autonomous AI Agent to Revolutionize Chatbot Experiences - Salesforce's Einstein Service Agent is an advanced AI agent designed to transform customer service efficiency. Unlike traditional chatbots that require explicit programming for specific scenarios, this AI agent leverages generative AI and Salesforce's trusted CRM data to autonomously address a wide array of service issues in natural language. It's built on the Einstein 1 Platform and is capable of interpreting the full context of customer inquiries to provide tailored responses and take appropriate actions. The service agent operates 24/7, offering swift issue resolutions while incorporating built-in guardrails for privacy and security.

Tinder's AI Photo Selector automatically picks the best photos for your dating profile - Tinder has launched its AI Photo Selector feature, which uses facial detection technology to help users choose the best photos for their dating profiles. Now available in the U.S. and rolling out internationally later this summer, the feature selects ten photos based on lighting, composition, and Tinder's insights into effective profile images. It aims to save users time and reduce uncertainty in photo selection. According to a Tinder survey, 68% of participants found an AI photo selection feature helpful. Tinder's CEO emphasized that the AI assists users rather than making decisions for them, in line with the company's commitment to safe and authentic connections.

Google brings AI agent platform Project Oscar open source - Google introduces Project Oscar, an open-source platform aimed at aiding software development teams in managing and monitoring issues within their programs. Announced at Google I/O Bengaluru, this initiative offers developers the ability to create AI agents that assist throughout the software lifecycle—without the need for recoding—to handle tasks like issue tracking and engagement with contributors. Currently focused on open-source projects, there is potential for future expansion to closed-source projects. A deployed example on the Go language project shows AI agents enriching reports and clarifying issues autonomously. Project Oscar represents Google's broader goal to enhance developer productivity by integrating AI into various stages of the development process.

Flow Studio - The first text-to-movie platform. Turn any idea into a 3-minute video.

exo-explore/exo: Run your own AI cluster at home with everyday devices - exo, maintained by exo labs, unifies heterogeneous devices into an AI cluster without requiring high-end GPUs. It supports common AI models such as LLaMA and uses a peer-to-peer approach to distributed model inference rather than a master-worker configuration. Devices on the same network discover each other automatically, and exo intelligently partitions the model across them based on each device's memory and the network.

SciCode: A Research Coding Benchmark Curated by Scientists - SciCode is a benchmark that tests language models on generating code to solve real scientific problems. It includes 338 subproblems across 16 subdomains spanning physics, math, materials science, biology, and chemistry, aiming to replicate a scientist's workflow of turning knowledge into computational code. Despite this comprehensive coverage, the best model, Claude 3.5 Sonnet, solved only 4.6% of the problems in a realistic setting. SciCode challenges models with numerical methods, simulation, and scientific calculations that require deep understanding and reasoning. Its evaluation uses domain-specific test cases drawn from scripts used in scientific publications, and it includes problems associated with Nobel Prize-winning research.

Ollama-Workbench - A comprehensive platform for managing, testing, and leveraging Ollama AI models with advanced features for customization, workflow automation, and collaborative development.

Check Out My Other Videos:

Claude
