SoftBank’s $10 Billion AI Strategy, Apple-OpenAI Collaboration, and Nintendo’s AI Stand

SoftBank invests $10 billion in AI chips and power projects, Apple strengthens ties with OpenAI, and Nintendo resists generative AI in gaming. Explore the latest moves in the AI race, including Meta’s metaverse plans and Google’s AI-fueled energy challenges.

SoftBank’s Evolving AI Strategy

SoftBank CEO Masayoshi Son plans to invest over $10 billion in energy projects and AI chips to bolster the company's position in the AI industry. SoftBank is negotiating loans to fund energy initiatives, which are crucial for powering AI data centers, and is exploring ways to secure Nvidia GPUs through a special purpose company that would keep the debt off its balance sheet. Son has avoided investing in generative AI startups themselves (blocking an investment in Mistral AI) to preserve relationships with key partners like OpenAI; instead, SoftBank is focusing on the infrastructure critical for AI development. This strategy includes building a large AI data center in Osaka and leveraging its UK-based chip designer, Arm, to meet growing demand for AI chips.

  • Apple Poised to Get OpenAI Board Observer Role as Part of AI Pact - Apple Inc. will secure an observer role on OpenAI's board, with Phil Schiller, Apple's former marketing chief, chosen for the position. This move strengthens the partnership between Apple and OpenAI, following Apple's announcement to integrate ChatGPT into its devices. The board observer role, also held by Microsoft, will allow Apple to gain insights into OpenAI's decision-making processes without having voting rights. This partnership aligns with Apple's broader AI strategy, which includes enhancing its in-house AI capabilities and integrating popular AI technologies into its products.

  • Nintendo Has No Plans to Use Generative AI in Its Games, Company President Says - Nintendo's President, Shuntaro Furukawa, discussed the company's stance on generative AI and its application in gaming. While acknowledging AI's long-standing role in game development, particularly for controlling non-player character behavior, he conveyed Nintendo's reservations about using generative AI in its titles, citing intellectual property concerns. Furukawa emphasized Nintendo's commitment to providing unique experiences rooted in its extensive know-how, separate from what technology alone can offer. Meanwhile, the industry at large remains cautious, with some movement from major players like Microsoft, who are exploring AI for dialogue and narrative tools, and testing AI for Xbox customer support.

  • Anthropic looks to fund a new, more comprehensive generation of AI benchmarks - Anthropic is launching a program to fund the development of new benchmarks to evaluate AI models, including its own Claude, with a focus on AI security and societal impact. The program aims to create evaluations that test advanced AI capabilities, such as cyberattacks and misinformation, and support research in areas like multilingual conversations and bias mitigation. Anthropic intends to provide funding and infrastructure for these evaluations, despite potential skepticism about the company's commercial motivations. The initiative aspires to set industry standards for comprehensive AI evaluation.

  • Meta is against California’s AI bill. Here’s why - California lawmakers are considering a bill to regulate large AI systems by requiring testing and safety measures, which aims to prevent potential future risks like grid manipulation or chemical weapon creation. The bill, which would apply to AI systems costing over $100 million to train, faces strong opposition from tech giants like Meta and Google, who argue it could stifle innovation and discourage open-source development. Proponents, including Democratic state Sen. Scott Wiener, argue the bill is necessary to prevent catastrophic harms, while critics suggest waiting for federal guidelines and warn the bill could drive companies out of state.

  • Brazil suspends Meta from using Instagram posts to train AI - Brazil has halted Meta's use of local Instagram and Facebook data for AI training, weeks after Meta paused similar practices in the UK and EU due to regulatory scrutiny. The Brazilian data protection agency ANPD cited potential harm to fundamental rights, challenging Meta's privacy policy and threatening a daily fine if the policy isn't revised within five days. Meta, facing the loss of a substantial user base (102 million users on Facebook and 113 million on Instagram), is "disappointed," citing innovation setbacks. The discrepancy in data protection for Brazilian users, particularly for minors, compared to European standards drew criticism, with Brazilian users facing more complex opt-out processes. The company's response to these specific criticisms is pending.

  • Pixel 9 to ship with 'Google AI' powering 'Studio' and Recall-like screenshot analysis - The upcoming Pixel 9 series looks to expand its AI capabilities with a new "Google AI" suite, incorporating next-gen "LLMs" (likely large language models) and "Gemini" functions. The devices are rumored to feature an exclusive AI assistant, "Pixie," that leverages Google's ecosystem for personalized interactions. Highlights include a "Google AI" branded set of tools, a Microsoft Recall-like feature for memory assistance, an "Add me" camera function for inclusive group photos, and a creative module called "Studio" that appears to leverage generative AI for content creation. Another innovative feature, "Pixel Screenshots," will enable AI-based searches and summarization of screenshots while ensuring privacy with on-device processing and opt-in settings. The reveal of these features is anticipated at a Google event on August 13.

  • Figma disables its AI design feature that appeared to be ripping off Apple's Weather app - Figma has temporarily disabled its "Make Design" AI feature after it was found to be reproducing designs similar to Apple's Weather app. The issue, first identified by Andy Allen of NotBoring Software, led to accusations that Figma's AI was heavily trained on existing app designs, a claim CEO Dylan Field denies. The feature, which generates UI layouts from text prompts, will be disabled until Figma completes a thorough quality assurance process. Field acknowledged the oversight and emphasized the need for better QA to avoid such issues in the future.

  • A.I. ‘Friend’ for Public School Students Falls Flat - The Los Angeles public schools intended to integrate an A.I. platform named Ed, designed as an "educational friend" for 500,000 students, which would aid in academic guidance and emotional support. The platform, also intended to inform parents about attendance and test scores, was celebrated by the district's superintendent, Alberto Carvalho, as a game-changer for democratizing education. However, within two months of its promotion, financial troubles led to the furlough of most of the staff at AllHere, the start-up behind Ed. The incident highlights the precariousness of investing in AI technologies, which face challenges such as student privacy, data accuracy, and the pressure to reduce screen time. Educational technology experts advocate a cautious and critical approach towards adopting such AI tools in schools.

  • OpenAI’s ChatGPT Mac app was storing conversations in plain text - The ChatGPT app for macOS had a security flaw that allowed users' conversations to be easily accessed in plain text. Pedro José Pereira Vieito demonstrated the vulnerability, which could be exploited by malicious actors. OpenAI was informed of the issue, and in response, they released an update that encrypts the stored conversations, mitigating the risk. The discovery of the issue was linked to the app's lack of sandboxing protections, which are not required since the app is distributed through OpenAI's own website rather than the Mac App Store. OpenAI occasionally reviews chats for safety and training purposes, but the security risk did not extend to the broader exposure of user data.

  • Meta plans to bring generative AI to metaverse games - Meta is integrating generative AI into VR, AR, and mixed reality games to enhance its metaverse strategy, focusing on Horizon and potentially expanding to other platforms. The company aims to develop games with non-deterministic paths and tools to improve game development efficiency. This initiative follows Meta's struggles to popularize its Horizon platform and its pivot to allow third-party licensing of Quest software features. Despite significant investments in generative AI, including the Builder Bot prototype and new AI tools, Meta CEO Mark Zuckerberg cautions that monetizing these advancements will take years.

  • How Big Tech is swallowing the AI industry - In the evolving tech industry, Big Tech companies are finding new ways to integrate AI startups into their operations without triggering antitrust scrutiny. This trend was highlighted by Microsoft's strategic hire of Inflection's team and licensing of its AI technology—an approach Reid Hoffman believes will set a precedent. Recently, Amazon mirrored this by hiring roughly two-thirds of Adept’s personnel and securing a deal to license its AI tech, revealing a pattern of 'reverse acquihires' where actual acquisitions are masked by employment and licensing agreements. This strategy allows giants like Amazon and Microsoft to bolster their AI capabilities while circumventing regulatory roadblocks, pushing smaller competitors like Adept—struggling financially despite significant funding—towards industry consolidation.

  • Google falling short of important climate target, cites electricity needs of AI - Google's emissions rose by 13% in 2023, marking a 48% increase since its 2019 baseline, despite its commitment to achieve net zero emissions by 2030. The surge is attributed to its energy-intensive data centers, necessary for AI demands. These centers rely on electricity that is often generated from fossil fuels, contributing to greenhouse gas emissions. While Google has improved its renewable energy usage to 64% for its centers and offices, the company grapples with balancing AI's benefits, such as predicting floods and optimizing traffic, against its environmental impact. Experts urge responsible AI usage and investment in clean energy sources to counteract the rising demand for data center power, which is expected to double by 2026. Google's sustainability efforts are regarded as ambitious and transparent, yet there is a call for more proactive measures to expedite the shift to clean energy.

  • What happened to the artificial-intelligence revolution? - Silicon Valley companies are investing heavily in AI, with a combined estimated budget of $400bn for capital expenditures and R&D. This staggering investment has led to a $2trn increase in market value for these companies, yet the real-world revenue from AI products, like generative AI, is still far from the projected figures. Outside the West Coast tech bubble, AI's impact on productivity and economics appears to be minimal, casting a shadow over the optimistic projections associated with the technology.

  • Runway, an AI Video Startup, in Talks With General Atlantic for $4 Billion–Valuation Fundraise - Runway, a leading AI video startup, is negotiating with investors, including General Atlantic, to raise $450 million at a valuation of approximately $4 billion. This funding round would significantly enhance Runway's financial lead over competitors in the AI-generated video sector. The company, which previously raised over $230 million and was valued at $1.5 billion in June 2023, aims to capitalize on growing investor interest in AI's potential to transform the entertainment industry. Runway's subscription model has driven its annual recurring revenue to about $25 million by the end of last year, although it faces stiff competition from companies like OpenAI and Google.

  • The Underground Network Sneaking Nvidia Chips Into China - Despite U.S. export controls restricting Nvidia’s advanced AI chips from being sold to China, an underground network of buyers, sellers, and couriers has emerged to circumvent these regulations. This network includes Chinese students transporting chips in their luggage and distributors advertising Nvidia chips online. These chips are crucial for AI development in China, which seeks to stay competitive in the tech race against the U.S. The scale of this informal market is relatively small but significant for AI startups and research institutions. Nvidia, Dell, and Super Micro comply with U.S. export controls, but tracking these transactions remains challenging due to supply chain complexities and varying international regulations.

  • UN adopts Chinese resolution with US support on closing the gap in access to artificial intelligence - The U.N. General Assembly adopted two significant, though non-binding, resolutions concerning artificial intelligence (AI). One, a Chinese-sponsored resolution, backed by the U.S., calls for wealthy nations to help reduce the disparity between developed and developing countries in accessing and benefiting from AI. Another, initiated by the U.S. and supported by China among other countries, emphasizes the need for AI to be safe, trustworthy, and universally beneficial. Both resolutions highlight consensus on international cooperation in AI governance, despite Sino-American rivalry in other domains.

Awesome Research Papers

  • Meta 3D Gen - Meta 3D Gen (3DGen) is a new text-to-3D pipeline that converts text prompts into 3D assets quickly, with high fidelity to the prompt and high-quality geometry and textures. It supports physically based rendering, so generated assets can be relit in real-world applications, and it can also generatively retexture existing 3D models. 3DGen combines two components, Meta 3D AssetGen for text-to-3D generation and Meta 3D TextureGen for text-to-texture generation, producing consistent representations across view, volumetric, and texture (UV) space. In evaluations against industry baselines, 3DGen registers a 68% win rate over single-stage models and excels in speed, prompt fidelity, and visual quality on complex prompts.

  • AI Agents That Matter - Research on AI agents has underscored the limitations in current benchmarking and evaluation methods. Critiques include an overemphasis on accuracy over other performance metrics, leading to unnecessarily complex agents. The study suggests the importance of optimizing both cost and accuracy. Additionally, the conflation of needs between model and end-user developers complicates the selection of suitable agents for specific applications. Furthermore, many benchmarks suffer from inadequate hold-out sets, resulting in agents that overfit and lack robustness. The absence of standardized evaluation practices exacerbates issues of reproducibility. To combat these issues, the researchers advocate for a principled framework that emphasizes the development of agents effective in practical scenarios, not just on benchmarks.
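
To make the paper's cost-versus-accuracy argument concrete, here is a minimal sketch, not taken from the paper, of reporting agents on a joint cost-accuracy Pareto frontier rather than ranking by accuracy alone; the agent names and numbers are invented for illustration.

```python
# Hypothetical illustration of joint cost-accuracy reporting for AI agents,
# in the spirit of "AI Agents That Matter" (not code from the paper).

# Each agent: (name, average dollar cost per task, benchmark accuracy). Invented numbers.
agents = [
    ("simple-baseline", 0.02, 0.61),
    ("retry-wrapper",   0.90, 0.63),
    ("complex-agent",   0.85, 0.64),
]

def pareto_frontier(entries):
    """Keep agents for which no other agent is at least as cheap and at least as accurate."""
    frontier = []
    for i, (name, cost, acc) in enumerate(entries):
        others = entries[:i] + entries[i + 1:]
        dominated = any(c <= cost and a >= acc for _, c, a in others)
        if not dominated:
            frontier.append((name, cost, acc))
    return frontier

for name, cost, acc in pareto_frontier(agents):
    print(f"{name}: ${cost:.2f}/task at {acc:.0%} accuracy")
```

Ranking by accuracy alone would favor the most expensive agent; the joint view shows the cheap baseline still sits on the frontier, which is the paper's point about over-complex agents.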

Massively Multimodal Masked Modeling (4M) - Live Demo - The 4M model enables flexible multimodal generation: any modality can be generated from any subset of the others, without the loss balancing or architectural changes usually required in multitask learning. This any-to-any capability produces consistent, interlinked modalities even from partial inputs, supporting multifaceted generation and editing tasks. The demos on the site show 4M handling tasks such as generating RGB images from captions and bounding boxes, and adapting creatively to unusual inputs like an out-of-place bounding box. It also allows granular control over the generative process, including compositional generation where the influence of each condition on the output can be precisely adjusted. The site recommends desktop viewing for its interactive demos.

Moshi - Moshi is a conversational AI created by Kyutai and designed for short, engaging interactions, with conversations capped at five minutes. It processes and responds simultaneously, so exchanges feel continuous and fluid rather than strictly turn-based.

A new initiative for developing third-party model evaluations - Anthropic announces an initiative to fund the development of third-party AI evaluations, focusing on assessing AI safety levels, advanced capabilities, and risks. Priority areas include cybersecurity; chemical, biological, radiological, and nuclear (CBRN) risks; model autonomy; and broader national security concerns. Advanced metrics for scientific progress and societal impacts, along with infrastructure such as no-code evaluation platforms, are also of interest. The initiative stresses that evaluations should be sufficiently difficult, absent from training data, efficient to run at high volume, diverse in format, accompanied by expert baselines, and reflective of real-world risk scenarios.

Cloudflare Launches a Tool to Combat AI Bots - Cloudflare has introduced a free tool aimed at preventing AI bots from scraping data from websites hosted on its platform. This tool addresses the issue of AI vendors, like Google, OpenAI, and Apple, who may not always respect the robots.txt file that restricts bot access. Cloudflare's new tool uses advanced bot detection models that identify bots trying to evade detection by mimicking human web activity. The company also encourages website hosts to report suspected AI bots and will manually blacklist these bots over time. This initiative comes in response to the growing demand for training data fueled by the generative AI boom and the need for website owners to protect their content from unauthorized scraping.

Perplexity upgrades its Pro Search - Perplexity has launched an upgraded version of its Pro Search AI research assistant, featuring advanced capabilities for in-depth research and more interactive user experiences. The enhanced tool uses the latest AI models, including Claude 3, to provide more accurate and comprehensive answers. The platform supports file uploads in various formats, enabling users to receive detailed analyses and summaries.

ElevenLabs launches Iconic Voices - ElevenLabs' Reader app now offers "Iconic Voices," letting users listen to books and articles read by AI recreations of the voices of Judy Garland, James Dean, Burt Reynolds, and Sir Laurence Olivier.

ElevenLabs launches Voice Isolator and Background Noise Remover - ElevenLabs offers the AI Voice Isolator, which allows users to upload audio files and effectively removes disturbances such as street noise and microphone feedback. This tool is ideal for improving audio post-production in films, podcasts, and interviews. Additionally, the company ensures compliance with GDPR and C2PA standards, emphasizing their commitment to privacy and integrity in digital content authentication.

Suno - iOS App - Make and Explore Music - Suno is a music-creation platform catering to users of varying musical abilities, eliminating the need for instruments and emphasizing creativity.

Runway Gen-3 Alpha - Text-to-video generation is now available to everyone.

Announcing Mosaic AI Agent Framework and Agent Evaluation - Databricks unveiled Mosaic AI Agent Framework & Agent Evaluation, two new tools aimed at assisting developers in crafting and deploying quality Generative AI and Retrieval Augmented Generation (RAG) applications. Despite the ease of creating proof of concepts, developers face difficulties in achieving the level of accuracy, safety, and governance required for customer-facing applications. These challenges include selecting evaluation metrics, collecting human feedback, pinpointing and fixing quality issues, and refining the application before production release. The Agent Framework and Agent Evaluation address those challenges by enabling quick human feedback, providing a set of quality metrics, integrating feedback into quality assessments, and facilitating application tuning.
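
To illustrate the kind of workflow these tools target, here is a generic, heavily simplified evaluation loop; it is a sketch only, does not use the actual Mosaic AI Agent Framework or Agent Evaluation APIs, and all names, questions, and metrics in it are hypothetical.

```python
# Generic sketch of an agent-evaluation loop (not the Databricks Mosaic AI API):
# score RAG answers against reference facts and fold in human feedback, the kind
# of workflow Agent Evaluation is meant to streamline. Everything here is toy data.

eval_set = [
    {"question": "What is the refund window?", "reference": "30 days",
     "answer": "Refunds are accepted within 30 days of purchase."},
    {"question": "Do we ship internationally?", "reference": "yes",
     "answer": "We currently ship only within the US."},
]

# Feedback collected from human reviewers, keyed by question.
human_feedback = {"Do we ship internationally?": "thumbs_down"}

def contains_reference(answer: str, reference: str) -> bool:
    """Toy automatic quality metric: does the answer mention the reference fact?"""
    return reference.lower() in answer.lower()

for row in eval_set:
    auto = "pass" if contains_reference(row["answer"], row["reference"]) else "fail"
    human = human_feedback.get(row["question"], "no_feedback")
    print(f"{row['question']!r}: auto={auto}, human={human}")
```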

GraphRAG: New tool for complex data discovery now on GitHub - GraphRAG is a new retrieval-augmented generation framework designed for question-answering over private or unseen datasets, accessible on GitHub and complemented by an Azure-hosted API solution. It leverages a large language model to build a detailed knowledge graph from text documents, identifying thematic "communities" within the data to create hierarchical summaries useful for understanding datasets without pre-formulated queries. GraphRAG demonstrates superior performance over naive RAG approaches, particularly in comprehensiveness and diversity when answering global questions that consider whole datasets. Microsoft Research has been working to reduce the indexing costs and recently introduced methods to fine-tune LLM extraction prompts, aiming for a balanced trade-off between system complexity and response quality.
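
As a rough, conceptual illustration of that pipeline (not Microsoft's implementation), the sketch below builds a tiny entity graph, detects communities, and produces the kind of per-community summaries a global question would be answered from; the example relations, the placeholder summarize helper, and the choice of networkx's greedy modularity communities are all assumptions for illustration.

```python
# Conceptual sketch of the GraphRAG idea (not Microsoft's implementation):
# 1) extract entities/relations into a graph, 2) find thematic communities,
# 3) summarize each community so global questions can be answered from summaries.
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# Toy "extracted" relations; a real pipeline would get these from an LLM pass over documents.
relations = [
    ("SoftBank", "Arm"), ("SoftBank", "Nvidia"), ("Arm", "Nvidia"),
    ("Apple", "OpenAI"), ("Microsoft", "OpenAI"), ("Apple", "Microsoft"),
]

graph = nx.Graph()
graph.add_edges_from(relations)

# Community detection stands in for GraphRAG's hierarchical community building.
communities = greedy_modularity_communities(graph)

def summarize(entities):
    """Placeholder for an LLM-written community report."""
    return "Community about: " + ", ".join(sorted(entities))

community_reports = [summarize(c) for c in communities]
for report in community_reports:
    print(report)
# A "global" question would then be answered by an LLM reading these reports
# rather than retrieving individual text chunks, as naive RAG does.
```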

Multimodal Canvas with Gemini API - Multimodal Canvas is an experimental test console designed for developers to quickly test multimodal prompts using the Gemini API, specifically leveraging Gemini 1.5 Flash. This platform allows developers to integrate and experiment with various input modes such as drawing, camera, and images, enhancing the development and testing of multimodal applications. The tool aims to facilitate rapid prototyping and iteration, providing a versatile environment for exploring the capabilities of multimodal interactions.
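
For developers who would rather script a similar multimodal prompt than use the console, a minimal sketch with the google-generativeai Python SDK and Gemini 1.5 Flash might look like the following; the image path, prompt text, and environment variable name are placeholders, and a valid API key is required.

```python
# Minimal sketch of a multimodal prompt against Gemini 1.5 Flash using the
# google-generativeai Python SDK; the image path, prompt text, and environment
# variable name are placeholders.
import os
import google.generativeai as genai
from PIL import Image

genai.configure(api_key=os.environ["GOOGLE_API_KEY"])
model = genai.GenerativeModel("gemini-1.5-flash")

# Mix an image (e.g. a drawing or camera capture, as in Multimodal Canvas) with text.
drawing = Image.open("sketch.png")
response = model.generate_content([drawing, "Describe what this sketch depicts."])
print(response.text)
```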

