Microsoft Labels OpenAI as Competitor, Apple Uses Google Chips for AI, and Amazon Unveils New AI Chip

Microsoft identifies OpenAI as a rival in AI and search, while Apple taps Google’s custom chips for its AI models. Amazon introduces a powerful AI chip to reduce dependency on Nvidia, and Meta's AI spending boosts advertising growth. Stay updated on these key AI industry shifts and their impact on tech giants.

Good morning, it’s Friday! We’re diving into this week’s biggest AI stories in the latest edition of Forward Future. From Microsoft acknowledging OpenAI as a competitor, to Apple quietly training on Google’s chips 🤫, to Amazon’s latest AI chip promising up to 50% better performance, we've got it all. Plus, Canva makes another acquisition.

Grab your iced coffee with 30 pumps of hazelnut syrup and extra cream, and read on!

Other AI stories you need to know:

  • Nvidia and AMD: Nvidia's stock rises following AMD's strong earnings report and record sales of AI chips.

  • Google’s AI Healthcare Push: Google leverages AI to tackle previous healthcare flops with new innovations.

  • Global Chip War: Control over AI-capable data centers is becoming a strategic priority for governments worldwide.

  • Perplexity's Ad Integration: Perplexity, an AI search engine, will start selling ads and compensating publishers for content.

  • Microsoft’s Deepfake Regulation Push: Microsoft urges US lawmakers to enact regulations against deepfakes to protect elections and privacy.

Sponsor

CodiumAI is a quality-first generative AI coding platform, offering developers tools for writing and refactoring, as well as testing and reviewing. Generate confidence, not just code. Try for free: https://bit.ly/3WAvqAq 

  • Microsoft says OpenAI is now a competitor in AI and search - Microsoft has acknowledged OpenAI, its partner and AI model provider, as a competitor in its latest annual report. While Microsoft has invested $13 billion in OpenAI and uses its technology across various products, it now lists OpenAI alongside major rivals like Amazon and Google, particularly in AI and search advertising. The change follows OpenAI's unveiling of a search engine prototype, SearchGPT. Despite the competitive framing, both companies say their collaborative relationship remains unchanged and that they had always anticipated some competition. Microsoft has also weathered drama around OpenAI's board changes and has established a new in-house AI unit led by Mustafa Suleyman, co-founder of DeepMind. CEO Satya Nadella continues to maintain a close working relationship with OpenAI's CEO, Sam Altman.

  • Apple says its AI models were trained on Google's custom chips - Apple used Google's Tensor Processing Units (TPUs) to pretrain its AI system, Apple Intelligence, rather than Nvidia's dominant GPUs. The move shows how tech giants are exploring alternative chips for AI training amid the scarcity of Nvidia hardware. Although Apple did not name Google or Nvidia explicitly, its technical paper discloses that the Apple Foundation Model (AFM) was trained on Cloud TPU clusters. The release also highlights enhanced Siri capabilities and other new AI-powered features. Google, meanwhile, uses both its own TPUs and Nvidia's GPUs within its ecosystem and offers cloud access to both. Despite Google's status as a leading Nvidia customer, Apple's choice reflects a deliberate strategy for powering its AI infrastructure, and it fits a broader industry trend of heavy AI investment driven by the risk of falling behind. Apple's quarterly results are expected soon, following the recent preview of its evolving AI initiatives.

  • Amazon Unveiled the Latest AI Chip, Performance up by 50% - Engineers at Amazon's chip lab in Austin have tested new servers featuring in-house AI chips designed to rival NVIDIA's offerings. The chips aim to reduce Amazon's dependence on NVIDIA for powering AWS AI cloud services while offering customers cost-effective, powerful compute. Although new to the AI chip scene, Amazon already leads in non-AI processing with its fourth-generation Graviton chips. David Brown of AWS cites up to 50% better performance at half NVIDIA's cost. AWS, a major revenue generator for Amazon, holds roughly a third of the cloud market. During Prime Day, Amazon deployed 250,000 Graviton chips and 80,000 custom AI chips to meet increased demand.

  • Canva acquires Leonardo.ai to boost its generative AI efforts - Canva has acquired Leonardo.ai, a generative AI startup, to enhance its AI technology stack. Financial terms were not disclosed, but the deal includes a mix of cash and stock, and all 120 employees of Leonardo.ai will join Canva. Leonardo.ai will continue to operate independently, focusing on rapid innovation while leveraging Canva's resources. The acquisition aims to integrate Leonardo's technology into Canva's Magic Studio suite, enhancing generative AI capabilities for Canva's 180 million monthly users. This move is part of Canva's broader strategy to strengthen its AI-powered workflow and prepare for a potential IPO.

  • Why Is Nvidia Stock Up? AI Chip Shares Jump as Competitor AMD Reports Stellar Earnings Revenue - Nvidia's stock has surged due to the strong earnings report from competitor AMD, which highlighted a robust demand for AI chips. AMD's Instinct MI300 chip achieved record sales, contributing to a 115% year-over-year increase in data center revenue, signaling its growing presence in the AI chip market. This positive performance has bolstered investor confidence in the AI chip sector, lifting stocks of other chipmakers, including Nvidia and TSMC. Additionally, ASML Holding's stock also rose on news that it might receive an export reprieve from the Biden Administration, allowing it to continue selling chip-making equipment to China.

  • Google Taps AI to Revamp Costly Health-Care Push Marred by Flops - Google is leveraging AI to rejuvenate its health-care initiatives after previous attempts, such as glucose-sensing contact lenses and Google Glass for surgery, failed to take off. Despite AI's potential to transform health care, current applications like the nurse handoff tool and transcription app face significant challenges, including critical omissions and reliability issues. While AI aims to streamline administrative tasks and support clinical decisions, health-care providers and patients remain skeptical. Google emphasizes the importance of human oversight to ensure accuracy and safety, striving to gain trust in an industry wary of technological disruptions.

Sponsor

AI Hub by Qualcomm - Run, download, and deploy your optimized models on Snapdragon® and Qualcomm® devices.  Learn more about AI Hub by Qualcomm at https://aihub.qualcomm.com/

  • The global chip war could turn into a cloud war - As AI technology becomes increasingly integral to global economies, control over AI-capable data centers is becoming a strategic priority for governments worldwide. These data centers, filled with high-end chips, are essential for both civilian and military applications. Countries like Saudi Arabia, the UAE, Kazakhstan, and Malaysia are investing heavily in building their own AI infrastructure. US cloud companies see opportunities in these markets but face challenges, such as maintaining control over AI technology while avoiding compromises in international deals, exemplified by Microsoft's partnership with UAE-based G42. The competition in cloud infrastructure is becoming a critical aspect of the tech rivalry between the US and China, with both nations seeking to dominate this strategic resource.

  • Perplexity Will Soon Start Selling Ads Within AI Search - Perplexity, an AI search engine, will begin selling ads within its search results and compensating publishers whose content is used to form answers. Starting this quarter, brands can buy "related follow-up questions" that appear below user queries, labeled as sponsored. Publishers will earn a double-digit percentage of the ad revenue when their content is used, and they'll receive free access to Perplexity’s large language models and Pro service tier for a year. This move comes after criticism for scraping publisher content without permission, leading Perplexity to modify its processes based on feedback from partners like Time and Der Spiegel.

  • Microsoft Pushes US Lawmakers to Crack Down on Deepfakes - Microsoft President Brad Smith has urged Congress to pass laws to address the misuse of AI-generated content, particularly deepfakes, which pose risks to elections, personal security, and privacy. Smith advocates for a "deepfake fraud statute" to prevent cybercriminals from exploiting this technology and for labeling AI-generated content as synthetic. Additionally, he calls for federal and state laws to penalize the creation and distribution of sexually exploitative deepfakes. This push for regulation comes amidst controversies, such as Elon Musk sharing a manipulated video of Vice President Kamala Harris. Congress is considering several bills to regulate deepfake distribution.

  • Ridley Scott Wants to 'Embrace' AI for Post-Production - Nearly a quarter-century after the original "Gladiator," director Ridley Scott's vision for a rhino fight scene is set to become reality in "Gladiator 2." Advances in filmmaking technology, including the AI-assisted post-production tools Scott says he wants to embrace, have made the ambitious sequence feasible. Previously unattainable, the scene promises to be a cinematic milestone, showcasing how far special effects and CGI have come since the first film's release.

  • Meta's advertising growth is proof that hefty AI spending is already paying off - In a recent earnings call following Meta's Q2 report, CEO Mark Zuckerberg highlighted artificial intelligence's central role in Meta's 22% revenue growth to $39.07 billion, with advertising as the primary revenue source. Meta's ad growth outperformed Google, Pinterest, and Spotify, with AI enhancements cited as a key factor in improving ad performance and user engagement. The gains were driven by online commerce, gaming, and entertainment, especially in Asia-Pacific. Despite heavy investment in AI and metaverse initiatives, Meta has shown an immediate revenue return on AI, while generative AI remains a longer-term growth bet. Meta predicts substantial CapEx growth in 2025 to support AI and product development, raising its 2024 CapEx forecast to between $37 billion and $40 billion. Analysts express a positive outlook on Meta's AI integration and revenue trajectory, despite ongoing metaverse losses.

  • Reddit CEO says Microsoft needs to pay to search the site - Reddit CEO Steve Huffman has publicly called on Microsoft and other companies to negotiate payment for scraping Reddit data, following precedent agreements with Google and OpenAI. Huffman highlighted unauthorized data use by Microsoft for AI training and content summarization, criticizing companies like Microsoft, Anthropic, and Perplexity for treating internet data as 'freeware.' In response to such activities, Reddit updated its robots.txt to block non-compliant web crawlers, which led to Bing search results excluding Reddit. Huffman asserts that the traditional dynamics of value exchange between search engines and content providers have shifted, as traffic generation no longer suffices as fair compensation. Anthropic affirmed their compliance with Reddit's crawling restrictions, while Microsoft and Perplexity have yet to provide substantial comments on the matter.

  • What Leaders Need To Know About The EU’s AI Act Starting Today - The European Union’s Artificial Intelligence Act (AI Act) takes effect on August 1, 2024, introducing a groundbreaking framework to regulate AI systems within the EU, focused on ethical use, transparency, and fundamental rights. The Act establishes a risk-based classification for AI applications from minimal to unacceptable risk, with corresponding regulatory obligations. Minimal-risk applications like AI in video games have fewer restrictions, while high-risk applications in sectors such as healthcare or law enforcement face stringent requirements, including transparency and security criteria. Unacceptable risk applications are prohibited. Organizations must audit AI systems, categorize them by risk level, and adhere to AI Act standards to avoid penalties. New bodies will oversee Act implementation, offering guidance and ensuring compliance through national competent authorities. The AI Act also opens opportunities for trust and innovation by prioritizing transparency and robust security measures.

  • OpenAI vows to provide the US government early access to its next AI model - OpenAI announced a collaboration with the US AI Safety Institute, giving the institute early access to its upcoming model to advance the science of AI evaluation. Despite recent controversies over safety prioritization and internal conflicts that led to the disbanding of the Superalignment team and key resignations, OpenAI has reasserted its commitment to safety. It has pledged 20% of its computing resources toward safety initiatives and formed a new safety group, though skepticism remains about the group's effectiveness, as it is led by board members including Sam Altman. The AI Safety Institute was established by NIST following the UK AI Safety Summit's push for global AI safety standards.

  • TikTok is one of Microsoft’s biggest AI cloud computing customers - TikTok reportedly pays Microsoft nearly $20 million per month for access to OpenAI's models, accounting for a significant share of Microsoft's cloud AI revenue, which is on track to hit $1 billion annually. However, TikTok's parent company, ByteDance, may reduce its reliance on OpenAI if it successfully develops its own large language model (LLM). ByteDance's use of OpenAI's technology to build that LLM was deemed a violation of OpenAI's terms and led to a suspension while OpenAI investigated. Microsoft is OpenAI's exclusive cloud provider and has invested heavily in a supercomputer for ChatGPT. In its Q4 2024 earnings, Microsoft reported 29 percent growth in Azure revenue, slightly below projections, with 28-29 percent growth expected for Q1 2025.

Awesome Research Papers

  • Apple Intelligence Foundation Language Models - Apple has developed foundation language models for its personal intelligence system, Apple Intelligence, integrated into iOS 18, iPadOS 18, and macOS Sequoia. These models include a 3 billion parameter model optimized for on-device use and a larger server-based model. The report details the architecture, data used, training process, optimization techniques, and evaluation results, emphasizing Apple's commitment to Responsible AI. Key features include advanced text handling, image creation, and in-app actions, with a strong focus on user privacy and avoiding systemic biases.

  • Generative AI in Real-World Workplaces - This report details the latest findings from Microsoft's research on the productivity benefits of LLM-powered tools like Microsoft Copilot. It compiles results from over a dozen studies, including the largest randomized controlled trial on generative AI in workplace environments. The research indicates that generative AI significantly boosts worker productivity, though its impact varies by role, function, and organization, depending on how well these tools are adopted and utilized. The report highlights the potential for AI to further enhance productivity as organizations adapt their work practices to maximize the value of AI tools.

  • Fine-gained Zero-shot Video Sampling - This paper presents a novel algorithm designed to generate high-quality video clips from pre-existing image synthesis models like Stable Diffusion, without further training or optimization. The method overcomes issues of computational demands, large dataset requirements, and loss of image expertise seen in traditional approaches that infuse a temporal dimension into image diffusion models for video generation. The algorithm incorporates a dependency noise model and temporal momentum attention to maintain content consistency and animation coherence, resulting in state-of-the-art performance in zero-shot video generation tasks.

  • Mixture of Nested Experts: Adaptive Processing of Visual Tokens - The paper presents "Mixture of Nested Experts (MoNE)," a framework designed to optimize processing efficiency for visual media by exploiting its inherent redundancies. Unlike Vision Transformer (ViT) based models, which spend the same compute on every token, and Mixture of Experts (MoE) networks, which scale well but carry a large parameter footprint, MoNE introduces a nested expert structure that dynamically prioritizes token processing to fit a given computational budget. This approach reduces inference-time compute by more than two-fold without sacrificing performance. MoNE's effectiveness is demonstrated on image and video datasets, including ImageNet-21K, Kinetics400, and Something-Something-v2, and a single trained model adapts well across variable compute budgets.

Introducing SAM 2: The next generation of Meta Segment Anything Model for videos and images - Meta has introduced SAM 2, an advanced model for real-time object segmentation in images and videos and a significant upgrade over its predecessor. The open-source release under the Apache 2.0 license includes SAM 2's code and model weights, in line with open science practices. SAM 2 exhibits zero-shot generalization, working effectively on unseen objects and visual domains, which enables a wide array of applications without task-specific adaptation. Meta has also shared the extensive SA-V dataset and a web-based demo for interactive object segmentation.

Google releases Gemma 2 2B, ShieldGemma and Gemma Scope - Google has expanded its Gemma 2 family with three new releases: Gemma 2 2B, ShieldGemma, and Gemma Scope. Gemma 2 2B is a 2.6B-parameter model designed for on-device use, ShieldGemma is a set of content-safety classifiers available in multiple sizes (2B, 9B, 27B), and Gemma Scope is a suite of sparse autoencoders (SAEs) for model interpretability. The release integrates cleanly with the Transformers library, and ShieldGemma has shown strong content-moderation performance when benchmarked against various datasets. Gemma Scope aims to provide deeper insight into model activations, acting as a "microscope" for AI models.
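
For readers who want to try the small model locally, here is a minimal sketch using the Transformers pipeline API. It assumes a recent transformers release with Gemma 2 support and that the instruction-tuned checkpoint is published on the Hugging Face Hub as google/gemma-2-2b-it (accepting Google's Gemma license on the Hub may be required).

```python
# Minimal sketch: run Gemma 2 2B locally via the Transformers pipeline API.
# Assumptions: a recent `transformers` release with Gemma 2 support, and that the
# instruction-tuned checkpoint is published as "google/gemma-2-2b-it" on the Hub.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="google/gemma-2-2b-it",
    device_map="auto",  # requires `accelerate`; omit to run on CPU
)

prompt = "Explain in two sentences why small on-device language models are useful."
result = generator(prompt, max_new_tokens=128)
print(result[0]["generated_text"])
```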

Introducing GitHub Models: A new generation of AI Engineers building on GitHub - GitHub Models has launched to help GitHub's more than 100 million developers become AI engineers by working with models like Llama 3.1, GPT-4o, and Mistral Large 2. An integrated playground lets developers freely test and compare these models before transitioning to Codespaces for further development and to Azure for production deployment. GitHub says it prioritizes privacy and security, ensuring no data is shared with model providers. Known for transforming how software is built, GitHub aims to democratize AI in the same way, even integrating these tools into Harvard's CS50 course. The release is meant to strengthen the intersection of open source and AI, building on the growth of generative AI projects on GitHub (including those built with GitHub Copilot), as the company works toward its goal of one billion developers. A limited public beta is now open, inviting developers to explore AI within their coding environment.
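
As a rough illustration of the "playground to code" step, the sketch below calls a GitHub Models endpoint through the OpenAI-compatible Python client. The endpoint URL, the authentication via a GitHub personal access token, and the model name are assumptions; check the "Get started" snippet shown on each model page for the exact values.

```python
# Rough sketch: calling a model from GitHub Models with the OpenAI-compatible client.
# The endpoint URL, the GITHUB_TOKEN-based auth, and the model name below are
# assumptions -- confirm them against the "Get started" snippet on the model page.
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://models.inference.ai.azure.com",  # assumed GitHub Models endpoint
    api_key=os.environ["GITHUB_TOKEN"],                # a GitHub personal access token
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # any model listed in the GitHub Models catalog
    messages=[{"role": "user", "content": "What can I build with GitHub Models?"}],
)
print(response.choices[0].message.content)
```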

Hugging Face Offers Developers Inference-as-a-Service Powered by NVIDIA NIM - Hugging Face, a leading AI community with 4 million developers, has teamed up with NVIDIA to provide inference-as-a-service, giving developers easy access to NVIDIA-accelerated inference for popular AI models like Llama 3 and Mistral AI's models. Announced at SIGGRAPH, the integration lets developers quickly prototype with and deploy open-source AI models from the Hugging Face Hub. The service is built on NVIDIA NIM microservices and runs on NVIDIA DGX Cloud, offering serverless inference, reduced infrastructure burden, and optimized performance. It complements Hugging Face's existing AI training service and can be enabled with just a few clicks from supported model cards. NVIDIA NIM enables more efficient token processing and accelerates AI applications.
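
For a sense of how lightweight the developer experience is, here is a minimal sketch using the huggingface_hub client for serverless inference. The model id is an assumption for illustration, and whether a given model is routed through NVIDIA NIM on DGX Cloud depends on what its model card offers; an access token with inference permissions is required.

```python
# Minimal sketch: serverless inference through the Hugging Face Hub client.
# The model id is illustrative; whether it is served via NVIDIA NIM depends on
# the model card. Requires an HF access token (HF_TOKEN env var or token=...).
from huggingface_hub import InferenceClient

client = InferenceClient(model="meta-llama/Meta-Llama-3-8B-Instruct")

response = client.chat_completion(
    messages=[{"role": "user", "content": "In one sentence, what is NVIDIA NIM?"}],
    max_tokens=80,
)
print(response.choices[0].message.content)
```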

Introducing New AI Experiences Across Our Family of Apps and Devices - AI Studio is being introduced in the US, leveraging Llama 3.1 technology, to enable people to create and interact with AI characters. Accessible through ai.meta.com/ai-studio and Instagram, users can build AIs from templates or from scratch for various purposes, such as learning to cook or generating memes. These AIs can be personal or shared publicly across platforms like Instagram and Messenger. Instagram creators can use AI Studio to develop AIs that handle routine interactions with followers, enhancing engagement. Clear labeling ensures transparency in AI communication. Notable creators have already engaged, with safeguards in place to ensure responsible AI usage. AI Studio represents an initial step towards broader creative AI utilization.

How to join the waitlist for Apple Intelligence in iOS 18.1 - Apple has released the first developer beta versions of iOS 18.1, iPadOS 18.1, and macOS Sequoia 15.1, which are compatible with devices supporting Apple Intelligence. Upon installing the update, users are required to join a waitlist to access the AI features—a new strategy adopted by Apple. To join the waitlist, users navigate to the Apple Intelligence & Siri menu within device settings. Early beta installers reported short wait times for access, but waits may increase as more users join. Not all features are included in this beta, with items like ChatGPT integration, Image Playground, and Genmoji notably absent. Access is granted across all devices once Apple Intelligence is enabled on one.

GPT-4o Long Output Alpha Program - OpenAI has introduced an experimental version of GPT-4o, capable of producing up to 64,000 output tokens per request, aimed at unlocking new use cases that benefit from longer completions. Participants in the alpha program can access this feature using the model name gpt-4o-64k-output-alpha. Due to the higher inference costs associated with long completions, the per-token pricing for this model is increased accordingly.
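
For developers admitted to the alpha, a request looks like any other chat completion; only the model name and the output-token ceiling change. A minimal sketch, assuming your API key has been enrolled in the program:

```python
# Minimal sketch: requesting a long completion from the alpha model.
# Assumes the account has been admitted to OpenAI's alpha program; the model
# name comes from the announcement, and long completions are billed at a
# higher per-token rate than standard GPT-4o output.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-64k-output-alpha",
    messages=[{"role": "user", "content": "Write an exhaustive outline for a 40-chapter novel."}],
    max_tokens=64000,  # up to 64,000 output tokens per request in the alpha
)
print(response.choices[0].message.content)
```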

Midjourney Announces V6.1 - Midjourney has released version 6.1, introducing several enhancements for generating images. Key improvements include more coherent images, better overall image quality, and more precise small features. The update also features new upscalers, faster processing times, and improved text accuracy within images. Additionally, a new personalization model adds nuance and accuracy, with versioning support for personalization codes from previous jobs. The `--q 2` mode has been introduced for increased texture at the cost of coherence. Overall, images generated should appear significantly more aesthetically pleasing.

Shutterstock Releases Generative 3D, Getty Images Upgrades Service Powered by NVIDIA - Shutterstock and Getty Images have integrated generative AI into their services to enhance creative workflows, allowing rapid prototyping of 3D assets and customized image generation. Shutterstock's Generative 3D, in commercial beta, produces 3D objects and HDRi backgrounds from text or image prompts, featuring fast previewing and a variety of file formats suitable for editing. Getty Images has upgraded its Generative AI service for faster, higher-quality image creation with advanced controls for image composition, using a new Edify AI model that can be fine-tuned with brand-specific data. Both services utilize NVIDIA's AI technology, including the Edify architecture and NIM inference microservices, scaled with NVIDIA DGX Cloud.

OpenAI releases ChatGPT's hyper-realistic voice to some paying users - OpenAI has begun rolling out ChatGPT’s Advanced Voice Mode, offering hyperrealistic audio responses to a select group of ChatGPT Plus users. This new feature, built on the GPT-4o model, provides quicker, more lifelike conversations by integrating voice recognition, processing, and response generation into one model. Initially unveiled in May 2024, the Advanced Voice Mode had been delayed to improve safety measures and will gradually become available to all Plus users by fall 2024. The rollout features voices created by paid actors, avoiding issues of impersonation and copyright infringement.

Friend's $99 necklace uses AI to help combat loneliness - Avi Schiffmann has introduced Friend, an AI-powered neck-worn device designed to provide companionship and combat loneliness. Priced at $99 and available for preorder, Friend connects to your phone via Bluetooth, constantly listens, and responds with supportive messages. Unlike other AI wearables focused on productivity, Friend aims to offer emotional support, sending proactive messages and providing an AI companion to talk to. Schiffmann, who previously developed a COVID-19 tracking website, pivoted from an initial concept called Tab to create this device, emphasizing user privacy by not storing recordings and allowing deletion of texts.

Amuse 2.0 beta released for easy on-device AI image generation on modern AMD hardware - AMD has introduced Amuse 2.0.0 Beta, an AI-powered image generation program for modern AMD-based PCs. It requires either an AMD Ryzen AI 300-series processor with at least 24GB of RAM or a Radeon RX 7000-series system with at least 32GB of memory. Users can generate high-quality AI images, digitize art, and create custom AI filters without an internet connection. A standout feature is AMD XDNA Super Resolution, which upscales images and improves generation speed. Users should remain mindful of potential copyright issues, since the underlying models may have been trained on protected content, and as beta software Amuse may still exhibit unexpected functionality issues.

Alibaba Unveils the World's First AI-Powered Conversational Sourcing Engine - Alibaba International is set to launch a revolutionary AI-powered conversational sourcing engine in September 2024, designed specifically for B2B e-commerce and SMEs. This engine utilizes natural language processing to efficiently match buyers with suppliers based on a database of over a billion product listings. It brings a predictive and intuitive sourcing experience, enabling complex queries and facilitating side-by-side supplier comparisons. This innovation stems from the company's ongoing AI efforts, as seen with the adoption of their "Aidge" AI toolkit by half a million merchants.

Introducing Stable Fast 3D: Rapid 3D Asset Generation From Single Images - Stability AI has introduced Stable Fast 3D, an advanced 3D asset generation model that converts a single input image into a detailed 3D asset in just 0.5 seconds. It outputs UV-unwrapped meshes, material parameters, and albedo colors, with optional remeshing that preserves quality while keeping processing time low. Aimed at industries such as gaming, virtual reality, architecture, and retail, Stable Fast 3D enables rapid prototyping and asset creation across applications. The model marks a notable improvement over the previous SV3D model, cutting generation time from 10 minutes to 0.5 seconds, and it can also produce material parameters and normal maps. It is accessible via the Stability AI API and the Stable Assistant chatbot.
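
A minimal sketch of calling the model through the Stability AI REST API is below. The endpoint path and the response handling are assumptions based on Stability's v2beta API layout, so verify both against the official API reference before relying on them.

```python
# Rough sketch: image -> 3D asset via the Stability AI REST API.
# The endpoint path and the assumption that the response body is a binary glTF
# (.glb) file are unverified here -- check Stability AI's API reference.
import os
import requests

api_key = os.environ["STABILITY_API_KEY"]

with open("product_photo.png", "rb") as image_file:
    response = requests.post(
        "https://api.stability.ai/v2beta/3d/stable-fast-3d",  # assumed endpoint
        headers={"Authorization": f"Bearer {api_key}"},
        files={"image": image_file},
    )

response.raise_for_status()
with open("product_model.glb", "wb") as out_file:
    out_file.write(response.content)
```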

Simulate Elastic Objects in Any Representation with NVIDIA Kaolin Library - NVIDIA's Kaolin Library introduces Simplicits, a state-of-the-art technique for physics simulation across various 3D model representations. Traditionally, simulation required well-structured meshes, but Simplicits handles irregular geometries, point clouds, Gaussian Splats, and NeRFs. Kaolin provides two API levels: an advanced one for experts and a simplified one for AI developers. Demonstrations include real-time interaction with a chair model in a Jupyter notebook, realistic simulation of objects like an apple, and complex muscle movements. Simplicits also handles material variations, as shown in the muscle simulations.

LangChain Introduces RAG Me Up Framework - LangChain has unveiled "RAG Me Up," a versatile framework designed to facilitate Retrieval-Augmented Generation (RAG) on personal datasets. The framework includes a lightweight server and multiple UI options to interact with the server, allowing users to implement and manage RAG processes with ease.
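
The announcement doesn't document RAG Me Up's own API, so the sketch below illustrates the underlying retrieval-augmented generation pattern in plain Python rather than the framework itself: retrieve the passages most relevant to a question, then ground the model's prompt in them. The word-overlap retriever and the document list are toy placeholders.

```python
# Toy sketch of the RAG pattern (not the RAG Me Up API): retrieve relevant
# passages from a personal dataset, then build a grounded prompt for an LLM.
documents = [
    "RAG pipelines retrieve relevant passages before asking the model to answer.",
    "A lightweight server can expose retrieval and generation behind one endpoint.",
    "Indexing personal datasets locally lets answers cite your own documents.",
]

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Rank documents by naive word overlap with the query (a real system
    would use embeddings and a vector store)."""
    query_words = set(query.lower().split())
    scored = sorted(docs, key=lambda d: len(query_words & set(d.lower().split())), reverse=True)
    return scored[:k]

def build_prompt(query: str, passages: list[str]) -> str:
    """Ground the question in the retrieved passages."""
    context = "\n".join(f"- {p}" for p in passages)
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

question = "How does a RAG server answer questions about my documents?"
prompt = build_prompt(question, retrieve(question, documents))
print(prompt)  # this prompt would then be sent to whichever LLM backs the server
```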
