Forward Future AI

Google's $23B Wiz Acquisition, SoftBank's Graphcore Deal, and AI's Role in Cybersecurity

Google nears a $23B acquisition of cybersecurity startup Wiz, while SoftBank buys UK AI chipmaker Graphcore. Discover how these deals enhance AI in cloud computing, cybersecurity, and chip development amid rising privacy concerns and global AI regulatory shifts like the EU's AI Act. Stay updated on the latest AI advancements and tech acquisitions.

Deal Would Mark Google’s Largest Acquisition Ever

Google parent Alphabet is in advanced talks to acquire cybersecurity startup Wiz for approximately $23 billion, which would be its largest acquisition ever. Founded in 2020, Wiz has rapidly grown, achieving $350 million in annual recurring revenue by 2023 and raising funds at a $12 billion valuation earlier this year. The acquisition would enhance Google's cloud computing capabilities, a sector where it lags behind competitors Amazon and Microsoft. This acquisition comes at a time when AI development and deployment are accelerating, increasing the need for robust cybersecurity measures in cloud and AI applications. The move underscores the growing importance of integrating advanced security solutions with AI and cloud technologies, as well as Google's strategy to strengthen its position in these critical and rapidly evolving markets.

Sponsor

The HP EliteBook 1040 G11, powered by the Intel Core Ultra processor, unlocks AI experiences. Innovative design and real power for real work. Check out the HP EliteBook 1040 G11 today! https://bit.ly/4b5LkH7

  • Pioneering UK microchip maker bought by Japanese conglomerate - British AI chip company Graphcore, once seen as a potential competitor to Nvidia, has been acquired by Japanese conglomerate SoftBank for an undisclosed amount, speculated to be substantially lower than Graphcore's 2020 valuation of £2bn. Graphcore chief executive Nigel Toon remains positive, viewing the acquisition as an endorsement and an opportunity for growth within the UK. The move comes amid concerns about the UK's ability to nurture firms capable of challenging major tech companies. Despite challenges and office closures in several countries, Graphcore plans to hire new staff in the UK, operating as a SoftBank subsidiary while retaining its Bristol headquarters. The sale reflects the volatility of tech-firm valuations: major tech investor Sequoia Capital had already written off the value of its Graphcore stake, indicating a decline since its peak.

  • EU’s AI Act gets published in bloc’s Official Journal, starting clock on legal deadlines - The European Union's AI Act, a comprehensive regulation for artificial intelligence applications, has been published in the bloc's Official Journal, with enforcement set to begin on August 1. The law employs a phased implementation approach, with different deadlines for various AI use cases, including high-risk applications like biometric AI and AI in law enforcement, which face strict obligations. Transparency requirements for general-purpose AI models like OpenAI's GPT will also take effect over time, with the full provisions becoming applicable by mid-2026. Concerns have been raised about potential industry influence on the development of guidelines for compliance.

  • Exclusive: OpenAI working on new reasoning technology under code name ‘Strawberry’ - OpenAI is developing a novel reasoning technology, codenamed "Strawberry," aimed at enhancing its AI models' ability to perform advanced reasoning tasks. Details from internal documents and sources reveal that Strawberry focuses on enabling AI to autonomously navigate the internet and conduct deep research, a capability that has eluded current AI models. The project involves a specialized post-training process to improve AI performance, potentially allowing it to plan ahead and solve complex problems. This development aligns with broader industry efforts to improve AI reasoning, seen as crucial for achieving higher intelligence and practical applications.

  • Whistleblowers accuse OpenAI of ‘illegally restrictive’ NDAs - Whistleblowers have accused OpenAI of imposing illegal restrictions on employees' communication with government regulators, according to a letter to SEC Chair Gary Gensler. The letter alleges that OpenAI's agreements discouraged employees from reporting securities violations and waived their rights to whistleblower incentives and compensation. Evidence provided to the SEC suggests these NDAs violated the law by mandating restrictive contracts for employment and severance. While OpenAI asserts its whistleblower policy protects employee rights, Senator Chuck Grassley emphasized the need for changes to these agreements to ensure whistleblower protection and uphold national security.

  • Why older workers are critical to AI adoption in the office - A recent report indicates that 30% of senior employees worry about being fired for lacking AI skills. Steve Preston of Goodwill Industries notes that while some senior workers facing a tech skills gap might opt for retirement, retaining them is crucial to preserving institutional knowledge. Contrary to stereotypes, Preston suggests older workers may leverage AI especially effectively because of their deeper business understanding. Jeetu Patel of Cisco views AI as an augmentation tool rather than a replacement in the short term. A TalentLMS report anticipates rising demand for soft skills, crucial for senior management in a workplace being reshaped by AI. Nikhil Arora of Epignosis advocates reverse mentoring, in which senior leaders learn from younger, more tech-savvy employees, to keep pace with disruptive technologies. While nearly half of employees don't currently use AI skills, organizations are pushing toward structured AI training. Generative AI could automate a significant portion of work hours in the U.S., prompting discussion of its impact on the workforce, especially among executives and senior managers. The emphasis is on harnessing AI's potential to support, rather than sideline, older workers.

  • The AI-focused COPIED Act would make removing digital watermarks illegal - The bipartisan COPIED Act proposes setting up standards and guidelines for authenticating and detecting AI-generated content to protect originators' rights. It would task NIST with developing ways to prove content origins, including watermarking, and to secure content against tampering. The bill also mandates that AI tools for creative or journalistic outputs must enable origin tracing, with penalties for unauthorized use or altering provenance data. Content owners could litigate against misuse, with enforcement by state attorneys general and the FTC. The bill, backed by key Senate figures and committees, has received support from various publishing and artists' associations.

  • AI can make you more creative—but it has limits - Tuhin Chakrabarty, an AI and creativity researcher, observes that creative individuals may not need AI's assistance, which can lead to homogenous, less distinctive output. He criticizes AI-generated writing for its lack of creativity, as it relies heavily on stereotypes and tells rather than shows. Chakrabarty highlights the limitations of AI models, which can only use their training data, resulting in less unique stories. Oliver Hauser emphasizes the importance of understanding both the capabilities and limitations of AI as we consider its implications for society and the economy, cautioning against assuming technology will automatically lead to transformation.

  • Tiny Japanese Startup Is Turning ‘Her’ AI Dating Into Reality - The Loverse app, developed by the Japanese startup Samansa Co., allows users to form romantic relationships with AI characters, addressing the loneliness crisis in Japan. Users find companionship in these bots, which offer low-effort interaction compared to real-life relationships. Despite some users feeling the AI lacks human spontaneity, Loverse aims to complement human interaction and potentially improve communication skills. The app's creator, Goki Kusunoki, envisions it as a bridge to real-world love rather than a complete substitute.

  • Universities Don’t Want AI Research to Leave Them Behind - Universities are striving to remain relevant in AI research as they compete with well-funded private companies. Institutions like Columbia University and Cornell are focusing on AI areas requiring less computing power and forming consortia like New York's Empire AI to share resources. Partnerships with industry and national laboratories help, but the scarcity of top-tier GPUs and the trend of talent moving to the private sector pose challenges. To stay competitive, universities are shifting focus to applications of large language models (LLMs) and leveraging synthetic data for training purposes.

  • This HR company tried to treat AI bots like people — it didn't go over well - Lattice's CEO, Sarah Franklin, announced a plan to pioneer the inclusion of digital workers as official employee records in the company's system, complete with onboarding and performance metrics. The initiative met immediate criticism and confusion, with reactions on LinkedIn questioning its rationale and implications. Within days, Lattice rescinded the decision amid backlash from HR professionals and AI industry figures who found the approach ill-conceived, suggesting the company had overlooked key considerations in human-AI integration. After initially defending the concept, Lattice has remained silent and abandoned the proposed system.

  • Gemini AI platform accused of scanning Google Drive files without user permission - Google's Gemini AI, integrated into Google Drive, has been implicated in unsolicited scanning and summarizing of PDF documents, raising substantial privacy concerns. AI governance advisor Kevin Bankston criticized the unauthorized analysis of a confidential tax return, and Gemini's activity persisted even after his attempts to locate and deactivate the feature. Despite Google's assurances that user data isn't used for AI training or targeted advertising, the incident highlights potential issues with user consent and control. Bankston's efforts, including interactions with a Google AI chatbot, revealed how difficult the relevant privacy settings are to find and manage. Google responded that use of Gemini involves explicit user choice and privacy-preserving measures, but the case underlines the urgency of clarity and user autonomy as AI capabilities infiltrate daily-use technology.

Awesome Research Papers

  • Beyond Euclid: An Illustrated Guide to Modern Machine Learning with Geometric, Topological, and Algebraic Structures - Classical machine learning largely assumes Euclidean geometry and struggles with non-Euclidean data that carry complex geometric, topological, and algebraic structure. Extracting knowledge from such data requires a broader mathematical toolkit, and a growing body of research is redefining machine learning to accommodate these data types, adapting classical techniques to their geometric, topological, and algebraic properties. This review offers an accessible introduction to the field, proposing a graphical taxonomy that unifies recent advances into a coherent structure, and concludes by identifying pressing challenges and opportunities for future work.

  • NVIDIA MambaVision: A Hybrid Mamba-Transformer Vision Backbone - The paper introduces MambaVision, an innovative hybrid Mamba-Transformer backbone designed for visual applications. The paper details enhancements to the original Mamba structure for better visual feature modeling and explores the integration of Vision Transformers (ViT). By incorporating self-attention blocks in the final layers, the team observed significant improvements in the model's ability to recognize long-range spatial dependencies. A series of MambaVision models with hierarchical architectures are developed, leading to new State-of-the-Art performance in image classification on the ImageNet-1K dataset and surpassing other models in object detection and segmentation tasks on MS COCO and ADE20K datasets.

  • Microsoft Arena Learning: Build Data Flywheel for LLMs Post-training via Simulated Chatbot Arena - Recent advancements in large language models (LLMs) highlight the success of post-training on instruction-following data and the emergence of the human-judged Chatbot Arena as a benchmark for model evaluation. However, selecting high-quality training sets and obtaining human annotations remain challenging due to their reliance on intuition and high cost. To address this, the paper proposes Arena Learning, which simulates iterative battles among state-of-the-art models and uses AI-annotated results to enhance model performance. The accompanying WizardArena evaluation method aligns closely with Chatbot Arena rankings and shows a 40x efficiency improvement in the LLM post-training data flywheel.
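The ranking mechanics behind such a simulated arena can be illustrated with a minimal Elo-style sketch. This is a hedged approximation, not the paper's method: the function names are invented, and the paper's AI judge (an LLM annotator) is stood in for by any callable that scores one model against another on a prompt.

```python
import random

def elo_update(r_a, r_b, score_a, k=16):
    """Update two Elo ratings after one battle; score_a is 1 (A wins), 0.5 (tie), or 0."""
    expected_a = 1 / (1 + 10 ** ((r_b - r_a) / 400))
    delta = k * (score_a - expected_a)
    return r_a + delta, r_b - delta

def simulate_arena(models, judge, prompts, rounds=1000, seed=0):
    """Run random pairwise battles; `judge(a, b, prompt)` returns A's score.

    In Arena Learning the judge is an AI annotator; here it is any callable.
    """
    rng = random.Random(seed)
    ratings = {m: 1000.0 for m in models}
    for _ in range(rounds):
        a, b = rng.sample(models, 2)          # pick two distinct contestants
        prompt = rng.choice(prompts)
        score_a = judge(a, b, prompt)          # AI-annotated outcome stands in here
        ratings[a], ratings[b] = elo_update(ratings[a], ratings[b], score_a)
    return ratings
```

Battles whose loser was rated higher move more rating points, so the leaderboard converges toward the judge's preferences without any human votes; the battle transcripts themselves then become post-training data, closing the flywheel.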

  • PaliGemma: A versatile 3B VLM for transfer - PaliGemma is a versatile open Vision-Language Model (VLM) combining the SigLIP-So400m vision encoder with the Gemma-2B language model to offer effective transfer capabilities across a broad knowledge base. It demonstrates strong performance across nearly 40 varied tasks, not only in standard VLM benchmarks but also in specialized areas like remote-sensing and segmentation.

Meta Platforms Plans Release of Largest Llama 3 Model - Meta Platforms is set to release the largest version of its open-source Llama 3 model on July 23, featuring 405 billion parameters. This new model will be multimodal, capable of understanding and generating both images and text. Earlier this year, Meta released smaller Llama 3 models with 8 billion and 70 billion parameters, which have been well-received by developers. The release of the largest Llama 3 model follows a year after Llama 2's launch, continuing to build on Meta's advancements in AI technology.

Three New AI Models Unveiled in LMSYS - OpenAI has reportedly introduced "gpt-mini" on the LMSYS Chatbot Arena, potentially confirming long-standing rumors of a small model in development. Two other models, "column-r" and "column-u," have also appeared, showcasing impressive logical capabilities. A recent LMSYS paper highlights the efficiency of using routers to direct queries to the appropriately sized model, achieving 95% of full-model quality at significantly reduced inference cost. This development underscores the potential of small language models (SLMs) to overcome computational bottlenecks, since not all questions require a large model's response. These advances suggest exciting prospects for AI efficiency and performance.
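The routing idea can be sketched in a few lines. Note the hedge: the actual LMSYS router is a learned classifier trained on preference data, whereas `toy_difficulty` below is an invented word-count heuristic and the model names are placeholders.

```python
def route_query(query, difficulty_score, threshold=0.5):
    """Send a query to the small or large model based on a difficulty estimate.

    `difficulty_score` stands in for the learned router in the LMSYS paper;
    here it can be any callable returning a value in [0, 1].
    """
    return "small-model" if difficulty_score(query) < threshold else "large-model"

def toy_difficulty(query):
    """Hypothetical proxy: long, question-dense prompts look harder."""
    return min(1.0, len(query.split()) / 50 + 0.3 * query.count("?"))

# Short factual questions stay on the cheap model; long analytical
# prompts are escalated, which is where the cost savings come from.
print(route_query("What is 2 + 2?", toy_difficulty))
```

Because the large model is only paid for on the minority of hard queries, average inference cost drops sharply while quality on easy queries is unaffected, which is the effect the LMSYS paper quantifies.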

LMSYS Arena features Google's Eureka AI model before release - Google's "Eureka" is set to make waves in the AI sector, having showcased remarkable text-generation capabilities on the LMSYS Arena platform. With strengths in natural-language output and in following complex instructions, Eureka stands out for handling diverse AI tasks efficiently. Speculation points to Google's involvement, fueled by a tweet from Logan Kilpatrick and a pattern of discreet pre-announcement releases reminiscent of OpenAI's tactics. Anticipation builds as key dates approach: a potential announcement on July 15, 2024, followed by a likely official release on July 18, 2024. There is also conjecture that Eureka might integrate with Google Workspace, enhancing educational applications.

Introducing the Next Generation of AutoGPT - The new generation of AutoGPT, currently in pre-alpha and open-source, aims to simplify building, running, and sharing AI agents while enhancing reliability. Accessible via GitHub, it features modular "Blocks" for creating custom agent behaviors, such as posting on Reddit and fetching Wikipedia summaries. The project comprises two main components: AutoGPT Server (Backend) and AutoGPT Builder (Frontend). Users are encouraged to experiment, create new blocks, and provide feedback to help shape the project's future developments.

Amazon Brings Rufus AI Shopping Assistant to All US Customers - Amazon has launched its Rufus AI shopping assistant to all US customers after a five-month testing period. Accessible through Amazon's smartphone apps, Rufus can handle various customer queries, from recommending durable outdoor speakers to providing order updates. Trained on Amazon’s catalog and web data, Rufus also offers information beyond shopping, such as celebrity biographies and travel suggestions. This AI assistant is part of Amazon's broader initiative to integrate generative AI into its services, enhancing functionalities like product review summaries and developer tools, and improving the Alexa voice assistant.

YouTube Shorts adds TikTok-style artificial voiceovers - YouTube has added new capabilities to YouTube Shorts. One key addition is a text-to-speech feature that lets users overlay videos with artificial voice narration, similar to TikTok's; YouTube currently offers four synthetic voices. It has also integrated auto-generated captions whose font and color can be edited in the app, plus Minecraft-inspired effects: a green-screen background and a game called Minecraft Rush. These updates reflect the ongoing convergence of features among competing video platforms, with YouTube moving ever closer to TikTok's functionality.

YouTube Music sound search rolling out, AI 'conversational radio' in testing - YouTube Music has unveiled a new feature named 'sound search', allowing users to search for songs in its expansive catalog of over 100 million tracks by humming, singing, or playing a melody. Additionally, the platform is in the process of trialing an "AI-generated conversational radio" for US Premium subscribers, enabling them to create custom radio stations by verbally describing their music preferences. This experimental service features a chat interface for inputting musical requests and may soon be expanded to more YouTube Music users. The rollouts for sound search and the conversational radio feature showcase YouTube Music's continued investment in enhancing user experience through innovative technology.

Deezer chases Spotify and Amazon Music with its own AI playlist generator - Deezer has launched an AI-powered playlist generator for select paid users, allowing them to create custom playlists using text prompts that describe moods, genres, and activities. This new "Playlist with AI" feature, powered by Google’s Gemini 1.5 AI model, ensures prompts are free from hate speech and explicit content. The feature follows similar launches by Spotify and Amazon Music, and Deezer plans to eventually expand its availability. Deezer has previously integrated AI in features like Flow and Song Catcher to enhance user recommendations and experiences.

SCALE by Spectral Compute - SCALE is a programming toolkit that enables CUDA applications to be compiled for AMD GPUs without any modifications to the program or its build system. It features key innovations such as accepting CUDA programs as-is, functioning as a drop-in replacement for nvcc, and blending seamlessly with existing build tools. SCALE supports AMD GPUs like gfx1030 and gfx1100, with plans to extend to others like gfx900. Unlike other cross-platform GPGPU solutions, SCALE preserves the use of CUDA, aiming for full compatibility and facilitating the maintenance of a single codebase for multiple GPU vendors. SCALE is continually evolving and encourages feedback and participation from users to enhance its offerings.

Patronus AI open-sources Lynx, a real-time LLM-based judge of AI hallucinations - Patronus AI Inc., a startup focusing on AI reliability, has launched 'Lynx,' a tool to detect "hallucinations" in chatbots—incidents where AI generates plausible but incorrect or nonsensical responses. Lynx is touted to improve the detection of these inaccuracies without needing manual annotations. The need for such a tool is emphasized by AI mishaps, such as producing dangerous or false advice. Patronus AI's approach includes adversarial prompts aiming to induce hallucinations and assess AI model robustness. Its HaluBench benchmark, based on real-world domains like healthcare and finance, is used to evaluate Lynx, which reportedly outperforms other models, including OpenAI's GPT-4, in accuracy.

Introducing Claude Engineer 2.0, with Agents! - The latest update to Claude Engineer 2.0 introduces a code editor, code execution agents, and dynamic editing, significantly enhancing its capabilities. The software now utilizes coding agents to manage file edits in smartly selected batches based on file complexity, and a code execution agent to run and check code for issues, including managing live processes. Additionally, users can save chats as markdown files, view input and output tokens and costs, and rest assured that all code runs safely in virtual environments with dependency management. This update aims to empower users to realize their creative projects.
