🧑‍🚀 OpenAI Secures $6.6B, NVIDIA & Google Launch AI Models, Pika’s Video Effects Redefine Creativity

OpenAI raises $6.6B to enhance AI research and accessibility, NVIDIA unveils an open-source GPT-4 rival, and Pika Labs introduces groundbreaking video effects. Google intensifies competition with human-like AI, while industry leaders like Microsoft and Liquid AI push new boundaries in AI efficiency and capabilities.

Good morning, it’s Friday! OpenAI is swimming in fresh billions, NVIDIA just dropped a new open-source AI model, and Boston Dynamics' Spot is getting even more high-tech. Meanwhile, AI is being used to uncover ancient aqueducts, and cancer centers are forming an alliance to harness AI for breakthroughs. And Poolside, the AI-coding startup, just raised half a billion dollars. Let’s get into it.

Your Daily Roundup:

  1. OpenAI Secures $6.6B to Boost AI Research and Global Expansion

  2. NVIDIA Unveils Open-Source AI Model to Rival GPT-4

  3. Pika 1.5 Launches with Mind-Bending AI Video Effects

  4. Liquid AI Sets New Efficiency Standards with Cutting-Edge Models

  5. Microsoft Copilot Adds Voice and Vision Features for Enhanced AI

  6. OpenAI Co-Founder Durk Kingma Joins Rival AI Firm Anthropic

  7. Google Ups AI Rivalry with OpenAI, Develops Human-Like Reasoning

👉️ Top AI Stories

OpenAI Raises $6.6B to Accelerate AI Research and Global Accessibility

OpenAI has secured $6.6 billion in new funding, boosting its post-money valuation to $157 billion. This investment will enhance its frontier AI research, expand compute capacity, and further develop tools to make advanced intelligence widely accessible, benefiting over 250 million ChatGPT users and businesses globally. → Continue reading here.

NVIDIA Just Dropped a Bombshell: Its New AI Model Is Open, Massive, and Ready to Rival GPT-4

NVIDIA has released NVLM 1.0, an open-source, 72-billion-parameter AI model that shows exceptional performance on both visual and language tasks, even improving on its text-only performance after multimodal training. The model's release challenges the AI industry's traditional proprietary approach, potentially democratizing access to advanced AI technologies and spurring innovation, while also raising questions about ethical use and future business models for AI. → Continue reading here.

Pika 1.5 Launches with Physics-Defying AI Special Effects

Pika Labs has launched Pika 1.5, an upgraded AI video generation model, introducing advanced features such as "Pikaffects" for creating dynamic, physics-defying video clips from text or images, and enhanced motion control capabilities to produce professional-level cinematic shots. Despite a competitive landscape with rivals focusing on realism, Pika 1.5 offers unique creative tools for both free and paid users, along with community challenges to encourage user engagement and feedback for continuous improvement. → Continue reading here.

Liquid AI Unveils Liquid Foundation Models, Sets New Standards in AI Efficiency

Liquid AI introduced its Liquid Foundation Models (LFMs), a new class of large-scale neural networks, including models with 1B, 3B, and 40B parameters. These models outperform previous transformer-based architectures, with the LFM-1B setting a new state-of-the-art benchmark for its size, the LFM-3B surpassing larger models up to 13B parameters, and the LFM-40B offering efficient deployment with advanced memory and long-context capabilities. → Continue reading here.

Microsoft Enhances Copilot App with OpenAI-Powered Voice and Vision Features

Microsoft has upgraded its AI-powered Copilot app with new features, including a "voice" mode allowing users to interact with the chatbot via speech and a "vision" mode that enables the bot to analyze what's on the user’s screen. These enhancements let Copilot answer visual-based questions, such as providing cooking times based on meal photos, further improving its utility as a ChatGPT competitor. → Continue reading here.

OpenAI Co-Founder Durk Kingma Joins Anthropic

Durk Kingma, a key figure in the development of generative AI models like DALL-E and ChatGPT at OpenAI, has transitioned to a role at Anthropic, an AI company supported by tech giants such as Amazon and Google. Kingma, who holds a Ph.D. in machine learning and whose background includes research at Google Brain as well as work as an angel investor and advisor, joins a growing list of high-profile hires at Anthropic, aligning with its mission to responsibly advance AI technology. → Continue reading here.

Google Deepens Rivalry with OpenAI, Develops AI with Human-Like Reasoning

Google is advancing AI software that mimics human reasoning, positioning itself against OpenAI’s similar efforts. Multiple teams at Google have been developing AI systems capable of solving complex, multistep problems in areas like math and computer programming, intensifying the competition between the two tech giants in the AI race. → Continue reading here.

☝️ Sponsor: Mammouth AI


Get access to the best LLMs (o1, Claude 3.5, Llama 3.1, GPT-4o, Gemini Pro, Mistral) and the best AI-generated images (Flux.1 Pro, Midjourney, SD3, DALL-E) in one place for just $10 per month. Enjoy on mammouth.ai

👾 Forward Future Original

Welcome to the second part of this series. In the first part, we defined the basics of AGI (Artificial General Intelligence) and explained why a common understanding is crucial for the development process, even though, as many researchers emphasize, there is still no uniform agreement in the scientific community ("This paper endeavors to summarize the minimal consensus of the community, consequently providing a justifiable definition of AGI. It is made clear what is known and what is controversial and remains for research, so as to minimize the ambiguous usages as much as possible in future discussions and debates.").

In this section, we take a look back at the path to AGI by tracing developments to date and at the same time providing an outlook on future progress. The review helps to better understand key milestones, while the outlook shows what challenges still lie ahead.

Let's remember the definitional criteria for AGI that I have developed:

"General expert level (1), self-learning (2), and multimodality (3), in conjunction with autonomous action (agentic), seem to have emerged in the broader discussion as the essential criteria for AGI. That is, the ability of a model to independently solve problems at a human expert level, without necessarily having been trained on them beforehand, and to draw conclusions that guide its further actions."

AGI differs from specialized artificial intelligence in that it is capable of solving a variety of tasks that are not explicitly pre-programmed.  

In this section, we take a historical look at the beginnings of modern AI and highlight the key breakthroughs that have brought us closer to this goal. We then analyze the current state of the technology and discuss what conditions must be met in order to realize AGI. I will draw on my personal assessment as well as empirical data and findings to illustrate current progress.

We start with a look at the origins of machine learning and the famous Turing test, which has long been considered one of the most important benchmarks for artificial intelligence. We then look at current data to evaluate technological progress and identify where further development is needed. → Continue reading here.

🚀 Launches + Funding


✌️ Sponsor: Langtrace AI

Monitor, Evaluate & Improve Your LLM Apps

Open-source LLM application observability, built on OpenTelemetry standards for seamless integration with tools like Grafana, Datadog, and more. Now featuring Agentic Tracing, DSPy-Specific Tracing, and Prompt Debugging Modes, Langtrace helps you manage the lifecycle of your LLM-powered applications: it delivers detailed insights into AI agent workflows, helps you evaluate LLM outputs, and traces agentic frameworks with precision. Star Langtrace on GitHub!
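If you're wondering what OpenTelemetry-style LLM observability looks like in practice, here is a minimal, purely illustrative Python sketch that wraps a placeholder model call in a trace span using the generic OpenTelemetry SDK. This is not Langtrace's own API; the span and attribute names (for example "llm.completion" and "llm.prompt_chars") are invented for the example.

```python
# Illustrative sketch only: generic OpenTelemetry tracing around an LLM call,
# not Langtrace's actual SDK. Span/attribute names here are made up.
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import ConsoleSpanExporter, SimpleSpanProcessor

# Print spans to the console; a real setup would export them to a backend
# such as Grafana or Datadog via an OTLP exporter instead.
provider = TracerProvider()
provider.add_span_processor(SimpleSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)
tracer = trace.get_tracer("my-llm-app")

def answer(prompt: str) -> str:
    # Wrap the model call in a span so timing and metadata get recorded.
    with tracer.start_as_current_span("llm.completion") as span:
        span.set_attribute("llm.prompt_chars", len(prompt))
        response = "(model output would go here)"  # placeholder, no real LLM call
        span.set_attribute("llm.response_chars", len(response))
        return response

if __name__ == "__main__":
    print(answer("Summarize today's AI news in one sentence."))
```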

✍️ Editor Picks

Robotics

Boston Dynamics Enhances Spot's Capabilities with Acoustic Vibration Sensing and Reality Capture

Boston Dynamics has introduced new features for Spot, including acoustic vibration inspection, which detects early signs of bearing failures, and laser scanning for creating digital twins of industrial facilities. These updates improve predictive maintenance and support autonomous inspections over larger areas with multi-docking capabilities. → Watch it here.

Discoveries

AI and Cold War Spy Satellites Uncover Ancient Underground Aqueducts

Archaeologists are using AI to analyze Cold War-era US spy satellite images, helping them locate ancient qanats, underground aqueducts up to 3,000 years old that provided water in arid regions across North Africa and the Middle East. This AI-based method, tested in Afghanistan, Iran, and Morocco, has achieved an 88% success rate in identifying qanat locations, offering a promising tool for rediscovering these vital, ancient water systems. → Continue reading here.

Healthcare

Cancer Centers Launch Cancer AI Alliance to Unlock Discoveries, Transform Care Using Cancer Data and Applied AI

Four leading National Cancer Institute-designated cancer centers have formed the Cancer AI Alliance (CAIA) with backing from tech giants, leveraging artificial intelligence to advance cancer research and care while ensuring data security. CAIA, set to be operational by the end of 2024, aims to foster collaboration among researchers and institutions, improve insights into rare cancers through shared data without compromising privacy, and amass $1 billion in resources to support cancer innovation. → Continue reading here.

📽️ New Video:

🧰 AI Toolbox

  • AI Fashion Assistant: OutfitAnyone is an AI tool on Hugging Face that provides personalized fashion suggestions based on user preferences, offering diverse styles for different occasions.

  • AI Tool for Managing Action Items: Spinach consolidates action items into one platform, integrating with product management tools to turn tasks into tickets, and gives users the necessary context for follow-ups.

  • AI Tools Empowering Artists: Playform offers a suite of AI-driven tools designed to assist artists in their creative process without requiring technical expertise, from sketching to face mixing and NFT creation.

🛰️ Houston, We Have More Headlines!

Help Us Improve

Are you enjoying Forward Future’s newsletter?


Reply to this email if you have specific feedback to share. We’d love to hear from you.

Stay Connected

Looking for more AI news, tips, and insights? Follow us on X for quick daily updates and bite-sized content.

For in-depth technical analysis, subscribe to the Forward Future YouTube channel. We dive deep into new models, test their performance, explore the latest tools, and share our impressions of AI innovations and developments.

Prefer using an RSS feed? Add Forward Future to your feed here: RSS Link

Thanks for reading this week’s newsletter. See you next time!

🧑‍🚀 Forward Future Team
