Apple Intelligence Is Different Than Other AI

How Apple Did AI The "Apple Way"

Apple’s AI Is Better Than We Thought

Apple announced “Apple Intelligence” at WWDC 2024, bringing advanced AI capabilities to iPhones, Macs, and other devices. Key features include a more conversational Siri, AI-generated "Genmoji," and integration with OpenAI's GPT-4o. The enhanced Siri will handle in-app actions, scheduled messaging, and more sophisticated on-device processing of user requests. Privacy remains central: most operations run on device, heavier requests go to a "Private Cloud Compute" system built on Apple Silicon servers, and Apple says data sent to those servers will not be stored or made accessible to the company. Apple is also adding AI-driven writing assistance and image generation tools across its ecosystem. The ChatGPT integration lets Siri hand requests off to the chatbot when needed, with OpenAI preserving user anonymity and not storing requests. The features are set to launch in the fall with iOS 18, support for additional AI models is planned over time, and ChatGPT can be used without an external account, while paid subscribers can connect their accounts to access paid features.

  • Apple is putting ChatGPT in Siri for free later this year - At WWDC 2024, Apple announced a collaboration with OpenAI to integrate ChatGPT into Siri, a significant update coming to iOS 18 and macOS Sequoia. The integration will also enhance the native writing tools across Apple's systems, and ChatGPT paid subscribers will be able to connect their accounts to use advanced features. Notably, user queries will not be logged, preserving privacy. OpenAI CEO Sam Altman highlighted the companies' shared values of safety and innovation. The partnership is built on OpenAI's newly launched GPT-4o model, which will power the integration, in line with the aim of making advanced AI broadly accessible.

  • Buzzy AI Search Engine Perplexity Is Directly Ripping Off Content From News Outlets - The startup Perplexity is under scrutiny for its content curation feature, Perplexity Pages, which appears to closely replicate journalistic work from major publications without proper attribution. Critics and evidence suggest the platform has reposted content originally from Forbes, CNBC, and Bloomberg with minimal, obscure citations, and has even used modified versions of the original publishers' graphics. Despite CEO Aravind Srinivas's statement that Perplexity prioritizes proper source citation, concerns remain over its ethical practices and respect for intellectual property; the AI search engine itself acknowledges that such practices are unethical. Meanwhile the company, which has secured substantial venture capital, continues to grow and to promote user-generated content sharing, and Srinivas has promised to make sourcing more visible on the platform.

  • Hugging Face and Pollen Robotics show off first project: an open source robot that does chores - Hugging Face, an AI company, has launched "LeRobot," an open-source robotics program, and its first humanoid robot, Reachy2, developed in collaboration with Pollen Robotics. Reachy2 was trained with a novel method in which a tele-operated system demonstrated household tasks and a machine learning algorithm learned from 50 recorded episodes, mastering the tasks over thousands of training steps. Hugging Face has made the training dataset and methodology openly available, signaling a push toward accessible, advanced robotics AI. Pollen Robotics, Hugging Face's partner, has a history of building open-source robots like the Reachy series, emphasizing affordability, ethical practices, and real-world applications, eschewing military funding, and supporting environmentally focused open-source projects. Reachy2 promises enhanced capabilities with bio-inspired 7-DoF arms. VentureBeat is seeking further details on the new robot and the alliance between the two companies.

  • Silicon Valley in uproar over Californian AI safety bill - Silicon Valley is protesting a California bill that would require AI companies to implement a "kill switch" for powerful AI models and to ensure they do not develop models with hazardous capabilities. Critics argue the bill stifles innovation, imposes excessive compliance costs, and could drive AI start-ups out of the state. Supporters, including the bill’s co-sponsor, the Center for AI Safety, emphasize the need for basic safety evaluations to mitigate significant risks. Amendments to the bill aim to clarify its scope and reduce the impact on open-source models and smaller start-ups.

  • Microsoft Will Switch Off Recall by Default After Security Backlash - Microsoft has scaled back the launch of Recall, its Windows AI feature that captures frequent screenshots of a user's activity for AI analysis, after security concerns raised by the privacy community. Originally enabled by default, the feature will now be opt-in. Experts warned that even a brief system breach could expose a user's sensitive data, and some characterized the feature as potential spyware. Microsoft responded by adding encryption, tighter access controls through Windows Hello, and the opt-in requirement. Despite the changes, critics remain wary, citing privacy risks and potential legal implications for users. The move follows other recent Microsoft security issues and aligns with CEO Satya Nadella's promise to prioritize security in company operations.

  • U.S. to open broad antitrust probe into AI giants - U.S. regulators are initiating antitrust investigations into Microsoft, OpenAI, and Nvidia to address concerns over their dominance in the AI sector. The Justice Department will examine Nvidia's practices, while the FTC will investigate Microsoft's investments in AI, including its substantial funding of OpenAI. This probe reflects heightened regulatory scrutiny on AI technologies, similar to past antitrust cases against tech giants like Google and Amazon. The outcome of these investigations could significantly impact the involved companies amid their recent stock market gains.

  • AI Tools Are Secretly Training on Real Images of Children - Human Rights Watch has reported that the AI training dataset LAION-5B improperly includes over 170 images of Brazilian children along with personal details. The data, scraped from personal online content like mommy blogs and YouTube videos, has been used to train AI without consent, raising privacy concerns. LAION, which built the dataset from Common Crawl, has removed identified offending content in response to earlier reports of illegal material, such as child sexual abuse imagery. The implications of misusing such photos are severe, with fears that models could reproduce sensitive personal information or generate explicit content. The issue highlights the ongoing debate over internet privacy rights and regulation as AI technology advances.

  • The AI Arms Race to Combat Fake Images Is Even—For Now - A study by Italian researchers examines how effectively AI models can distinguish real images from AI-generated fakes. It finds that while current detection methods work well, there is an ongoing technological duel: as AI image generators improve, detection tools must improve with them. The study used 13 AI models to identify fake images and their origins, demonstrating high accuracy but struggling with novel, unseen artifacts. Verdoliva, one of the researchers, underscores the importance of human discretion in confronting AI-generated misinformation, suggesting a layered defense of diverse detection models and critical evaluation of sources.

Awesome Research Papers

  • OmniH2O: Universal and Dexterous Human-to-Humanoid Whole-Body Teleoperation and Learning - OmniH2O is a system for whole-body humanoid teleoperation and autonomy. It uses kinematic pose as a universal control interface, supporting multiple control modes such as VR headsets, voice, and visual input. The system also supports autonomous operation by learning from teleoperation data and by integrating advanced AI models like GPT-4. OmniH2O handles diverse real-world tasks ranging from sports to object manipulation. It relies on a reinforcement learning (RL) strategy that transfers training from simulation to the real world, combining large-scale human motion data retargeting, policy learning under limited sensory information, and rewards designed for policy robustness. The team also introduces the OmniH2O-6 dataset, enabling the study of humanoid skills learned from teleoperated task demonstrations.

  • What is Interpretability? - AI interpretability is the study of understanding AI models from the inside out, likened to examining a novel organism. Mechanistic interpretability focuses on deciphering small units within neural networks to understand larger mechanisms, addressing the complex challenge of dense, overlapping circuits. Neural networks are compared to evolved organisms, growing complex circuits during training. Understanding these internal workings is vital for AI safety and reliability, distinguishing genuine learning from mere surface success, and the field is seen as being in an exciting phase with significant discoveries on the horizon.

  • Claude’s Character - The developers of Claude 3, an AI language model, have introduced "character training" in addition to standard alignment fine-tuning. The process aims to endow the AI with nuanced, rich personality traits such as curiosity, open-mindedness, and thoughtfulness, creating a well-rounded, adaptive, and ethical assistant. A balanced approach is pursued, avoiding extremes such as pandering, adopting rigid views, or feigning neutrality; instead, the AI is trained to acknowledge its dispositions and engage openly with various perspectives. The character traits are instilled through a synthetic, self-generated data process without direct human interaction. This character development is a work in progress, with implications for personalization and ethical considerations in AI development. The effort aims not only to make AI interactions more engaging but also to ensure that AI behavior is ethical and aligned with a wide spectrum of human values.

  • Alice in Wonderland: Simple Tasks Showing Complete Reasoning Breakdown in State-Of-the-Art Large Language Models - The paper demonstrates a significant reasoning failure in state-of-the-art large language models (LLMs) on simple, common-sense tasks, for example questions of the form "Alice has N brothers and M sisters; how many sisters does Alice's brother have?". Despite high performance on standardized benchmarks, these models exhibit overconfidence in incorrect answers and produce nonsensical reasoning to justify their errors. Standard interventions like enhanced prompting and multi-step reevaluation fail to correct these issues. The authors call for a reassessment of LLM capabilities and the development of new benchmarks that better detect fundamental reasoning deficits.

  • Mixture-of-Agents Enhances Large Language Model Capabilities - This paper proposes a Mixture-of-Agents (MoA) method that combines multiple LLMs in a layered architecture: each "agent" LLM in a given layer takes the outputs of the agents in the previous layer as auxiliary input when generating its own response. MoA models built this way surpass GPT-4 Omni in several evaluations, notably leading AlpacaEval 2.0 by a significant margin (65.1% versus GPT-4 Omni's 57.5%) even when using only open-source LLMs. A minimal sketch of the idea appears below.
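
The layered setup is easy to picture in code. Below is a minimal sketch of the MoA idea, assuming a generic chat-completion interface: the `call_model` stub, the prompt wording, and the model names are placeholders, not the paper's actual implementation.

```python
# Minimal sketch of the Mixture-of-Agents (MoA) idea: agents in each layer see
# the previous layer's answers, and a final aggregator synthesizes the result.
from typing import Callable, List

def call_model(model: str, prompt: str) -> str:
    """Stand-in for an LLM API call; replace with a real chat-completion client."""
    return f"[{model}] draft answer to: {prompt[:40]}..."

def aggregate_prompt(user_prompt: str, prior_responses: List[str]) -> str:
    """Ask an agent to synthesize prior-layer answers into a better one."""
    refs = "\n\n".join(f"Response {i + 1}:\n{r}" for i, r in enumerate(prior_responses))
    return (
        "You are given several candidate responses to the user's query. "
        "Synthesize them into a single, higher-quality answer.\n\n"
        f"{refs}\n\nUser query: {user_prompt}"
    )

def mixture_of_agents(
    user_prompt: str,
    layers: List[List[str]],                     # proposer models, layer by layer
    aggregator: str,                             # model that writes the final answer
    call: Callable[[str, str], str] = call_model,
) -> str:
    responses: List[str] = []
    for layer in layers:
        # First layer sees the raw query; later layers also see prior answers.
        prompt = user_prompt if not responses else aggregate_prompt(user_prompt, responses)
        responses = [call(model, prompt) for model in layer]
    return call(aggregator, aggregate_prompt(user_prompt, responses))

print(mixture_of_agents(
    "Explain mixture-of-agents in one sentence.",
    layers=[["open-model-a", "open-model-b"], ["open-model-a", "open-model-c"]],
    aggregator="open-model-a",
))
```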

Hello Qwen2 - Introducing Qwen2, a next-generation AI language model series evolving from Qwen1.5. Qwen2 ships in five model sizes with pretrained and instruction-tuned variants and adds support for 27 additional languages. It posts top-tier benchmark results, with strong coding and mathematical skills and context lengths of up to 128K tokens for certain models. All Qwen2 models use grouped-query attention (GQA) for efficient inference (a minimal sketch follows below), and their multilingual ability is reflected in fewer code-switching issues. Evaluations show Qwen2-72B outperforming its predecessor (Qwen1.5) and competitive models across domains, and the Qwen models emphasize safety against harmful content in multiple languages. The models have been released on platforms like Hugging Face and ModelScope, with third-party framework support for a variety of tasks.
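
For readers unfamiliar with grouped-query attention, here is a minimal PyTorch sketch of the mechanism. The head counts and dimensions are illustrative assumptions and do not reflect Qwen2's actual configuration.

```python
# Grouped-query attention (GQA): several query heads share each key/value head,
# which shrinks the KV cache and speeds up inference.
import torch
import torch.nn.functional as F

def grouped_query_attention(q, k, v, num_kv_heads):
    """
    q: (batch, num_q_heads, seq, head_dim)
    k, v: (batch, num_kv_heads, seq, head_dim)
    """
    b, num_q_heads, s, d = q.shape
    group = num_q_heads // num_kv_heads
    # Repeat each KV head so it is shared by `group` query heads.
    k = k.repeat_interleave(group, dim=1)
    v = v.repeat_interleave(group, dim=1)
    scores = (q @ k.transpose(-2, -1)) / d ** 0.5
    return F.softmax(scores, dim=-1) @ v

q = torch.randn(1, 8, 16, 64)   # 8 query heads
k = torch.randn(1, 2, 16, 64)   # only 2 KV heads need to be cached
v = torch.randn(1, 2, 16, 64)
out = grouped_query_attention(q, k, v, num_kv_heads=2)
print(out.shape)  # torch.Size([1, 8, 16, 64])
```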

Awesome New Launches

Windows Is Getting Its Own AI Upscaler That Promises Better Frames in Any Game on Copilot+ - Microsoft introduces Automatic Super Resolution (Auto SR), an AI-driven upscaling technology, to Windows 11 for its new Copilot+ PCs, promising enhanced gaming performance on machines with Qualcomm's ARM-based CPUs. Aimed at delivering higher resolutions and framerates, Auto SR can improve the visual quality of games like Borderlands 3 without the need for advanced Nvidia or AMD GPUs. The technology is designed to function automatically, altering desktop resolution before upscaling in full-screen or borderless-window modes. However, compatibility limitations exist due to its focus on ARM architecture, potentially introducing latency and restricting the range of playable games. Microsoft's Auto SR also ties into the DirectSR API to allow for a range of AI upscaling options, though its performance against established technologies like Nvidia's DLSS remains to be seen.

Flash Diffusion - Introducing Flash Diffusion, a training method for efficiently distilling a 'teacher' diffusion model into a 'student' that reproduces the teacher's predictions on corrupted inputs in a single denoising step. The approach uses a dynamic time-stepping mechanism and an adversarial training component in which a discriminator pushes the student's generated outputs to closely resemble real data. It also employs a Distribution Matching Distillation loss to keep the student's outputs aligned with the data distribution the teacher has learned, improving the quality of the generated samples. A schematic sketch of the loss composition follows below.
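
To make the loss composition concrete, here is a schematic PyTorch sketch. The toy MLPs, the simple corruption process, the loss weights, and the crude distribution-matching stand-in are all illustrative assumptions rather than the authors' implementation, and the discriminator's own training step is omitted for brevity.

```python
# Schematic sketch of the student/teacher distillation loop described above:
# distillation term + adversarial term + a distribution-matching stand-in.
import torch
import torch.nn as nn

dim = 8
teacher = nn.Sequential(nn.Linear(dim + 1, 64), nn.ReLU(), nn.Linear(64, dim))
student = nn.Sequential(nn.Linear(dim + 1, 64), nn.ReLU(), nn.Linear(64, dim))
disc = nn.Sequential(nn.Linear(dim, 64), nn.ReLU(), nn.Linear(64, 1))
opt = torch.optim.Adam(student.parameters(), lr=1e-4)
bce = nn.BCEWithLogitsLoss()

def denoise(net, x_t, t):
    """Predict the clean sample from a noisy sample x_t at timestep t."""
    return net(torch.cat([x_t, t], dim=-1))

for step in range(100):
    x0 = torch.randn(32, dim)                        # stand-in for real data
    t = torch.rand(32, 1)                            # sampled timesteps
    noise = torch.randn_like(x0)
    x_t = (1 - t) * x0 + t * noise                   # toy corruption process

    with torch.no_grad():
        target = denoise(teacher, x_t, t)            # teacher's prediction
    pred = denoise(student, x_t, t)                  # student's single-step output

    distill = (pred - target).pow(2).mean()          # match the teacher
    adv = bce(disc(pred), torch.ones(32, 1))         # fool the (fixed) discriminator
    dmd = (pred.mean(0) - x0.mean(0)).pow(2).mean()  # crude distribution-matching stand-in

    loss = distill + 0.1 * adv + 0.1 * dmd
    opt.zero_grad()
    loss.backward()
    opt.step()
```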
