The End of Programming?

Devin Makes a Massive Splash

Devin, the AI software engineer developed by Cognition Labs, represents a significant leap forward in autonomous coding technology. Devin autonomously tackles complex engineering tasks, learns over time, and collaborates with human engineers, and it set a new high on the SWE-bench coding benchmark by correctly resolving 13.86% of real-world GitHub issues unassisted. That comparison with other “models” isn’t entirely apples-to-apples, though: Devin is an agent that can iterate on a task over multiple attempts, while the models it is measured against answer in a single pass.

However, the most impressive part was the launch itself, which garnered millions of views across social media platforms. Beyond a polished interface that puts the console, browser, and chat side by side in one place, Devin itself wasn’t extraordinarily unique.

Devin can independently use developer tools within a controlled environment, contribute to production repositories, and handle end-to-end app development and deployment.
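
Cognition has not published Devin's internals, but agents of this kind are typically described as a plan-act-observe loop over a sandboxed shell, editor, and browser. The sketch below is a minimal, hypothetical illustration of such a loop; `call_llm`, `run_shell`, and `AgentState` are placeholders of my own, not anything from Devin itself.

```python
# Hypothetical sketch of a plan-act-observe agent loop, in the spirit of tools
# like Devin. Every name here (call_llm, run_shell, AgentState) is an
# illustrative assumption, not Cognition's implementation.
import subprocess
from dataclasses import dataclass, field


@dataclass
class AgentState:
    goal: str                                     # e.g. "fix the failing test in repo X"
    history: list = field(default_factory=list)   # (action, observation) pairs


def call_llm(prompt: str) -> str:
    """Placeholder for a call to any LLM; should return the next shell
    command to run, or 'DONE' when the model believes the goal is met."""
    raise NotImplementedError("wire this to your model provider of choice")


def run_shell(command: str, timeout: int = 120) -> str:
    """Execute a command in a sandboxed working directory and capture output."""
    result = subprocess.run(
        command, shell=True, capture_output=True, text=True, timeout=timeout
    )
    return (result.stdout + result.stderr)[-4000:]  # truncate long logs


def agent_loop(goal: str, max_steps: int = 25) -> AgentState:
    state = AgentState(goal=goal)
    for _ in range(max_steps):
        prompt = f"Goal: {goal}\nHistory: {state.history}\nNext shell command or DONE:"
        action = call_llm(prompt).strip()          # plan
        if action == "DONE":
            break
        observation = run_shell(action)            # act
        state.history.append((action, observation))  # observe, then re-plan
    return state
```

In practice such a loop would also need file-editing and browsing tools, persistent memory across attempts, and guardrails on what the sandbox is allowed to execute.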

EU AI Law

The European Parliament has adopted the EU AI Act, ground-breaking legislation to regulate artificial intelligence, in a vote that showed strong support.

The law categorizes AI applications by risk level and will ban those deemed "unacceptable." Thierry Breton and Roberta Metsola highlight the EU's role in setting global standards and ensuring innovation aligns with fundamental rights. While the framework is set to enter into force by May, concerns linger about implementation challenges and potential hindrances to European competitiveness against Chinese and American tech firms.

The Digital Markets Act, which took effect last week, targets anti-competitive practices by tech "gatekeepers," and Google's restrictions on its Gemini chatbot's features exemplify proactive responses to disinformation risks. The AI Act aims to keep humans in control of AI and to foster societal benefits. Yet legal experts warn that the pace of AI evolution could outdate the law, stressing the need for agile revision after it takes effect.

  • OpenAI CTO Mira Murati doesn't know what data Sora was trained on - Mira Murati, the CTO of OpenAI, could not say what specific data was used to train OpenAI's new video model Sora, stating only that it comprises public and licensed data. The ambiguity comes amid ongoing lawsuits alleging OpenAI's improper use of copyrighted material, with the company asserting fair use. Murati acknowledged the high costs of video-generation AI, anticipating that Sora's running costs will be in line with DALL-E 3's by the time it is released. The timeline for Sora's launch remains tentative, possibly later in the year, with safety guidelines expected to mirror DALL-E 3's, including restrictions on depicting public figures.

  • Apple Buys Canadian AI Startup DarwinAI - Apple has acquired DarwinAI, a Canadian AI startup focused on visual inspection of components and on optimizing AI systems. The acquisition, which brought numerous DarwinAI employees into Apple's AI team, is seen as a move to improve manufacturing efficiency within Apple's supply chain. Despite this strategic addition, Apple is perceived as needing to accelerate its generative AI efforts to keep pace with competitors like OpenAI, Google, and Microsoft. CEO Tim Cook has hinted at upcoming AI initiatives, with internal generative AI development and testing for future iOS updates pointing to further AI integration across Apple's products and services.

  • OpenAI Says Sora Will Launch in 2024 and Nude Videos Aren’t Off the Table - OpenAI CTO Mira Murati said that Sora, the company's advanced AI video generator, is set to launch in 2024. Whether Sora will permit nudity is still under discussion, with OpenAI weighing room for artistic expression against the risk of misuse: AI has already been abused for deepfake pornography, raising serious ethical concerns. At the same time, the potential to transform the $97 billion porn industry, and possibly reduce sex trafficking and abuse, keeps OpenAI from dismissing the idea outright. Sora, which lacks sound in its initial version, was trained on Shutterstock images and possibly public social media content. Early demonstrations hint at its proficiency in creating highly realistic videos, showcasing significant advances in AI-generated imagery.

Awesome Research Papers

VLOGGER is an innovative audio-driven video generation method that turns a single image of a person into high-quality, controllable videos without person-specific training. It leverages a two-part diffusion approach, allowing the creation of videos of varying length while maintaining accurate representations of human movement and expression. Unlike prior models, VLOGGER handles full-body generation without face cropping and supports a wider range of scenarios and identities. It is trained on the extensive MENTOR dataset, featuring 3D pose and expression data across 800,000 identities. VLOGGER outperforms existing methods in image fidelity, identity preservation, and temporal smoothness while also providing diversity in the generated content, and it has practical applications in video editing and customization, showcasing its potential for fair and large-scale model training.

Cool Projects

The Data Interpreter is a new approach to problem-solving in data science tasks that integrates three techniques: dynamic planning with hierarchical graph structures for adaptive, real-time data handling; dynamic tool integration to improve code proficiency; and logical-inconsistency identification with efficiency gains from experience recording. The system outperformed open-source baselines, achieving higher scores on machine learning tasks and significant gains on the MATH dataset and open-ended tasks.
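
As a rough intuition for the "dynamic planning with hierarchical graph structures" idea, the sketch below executes subtasks in dependency order and re-plans when one fails. It is not the paper's implementation; `Task`, `execute_task`, and `replan` are hypothetical placeholders under that assumption.

```python
# Minimal sketch of dependency-ordered task execution with re-planning on
# failure, loosely inspired by the Data Interpreter's hierarchical task graph.
# Task, execute_task, and replan are illustrative placeholders.
from dataclasses import dataclass
from graphlib import TopologicalSorter


@dataclass
class Task:
    name: str
    instruction: str          # natural-language description of the subtask
    depends_on: tuple = ()    # names of prerequisite tasks


def execute_task(task: Task, context: dict) -> dict:
    """Placeholder: generate and run code for one subtask, return its outputs."""
    raise NotImplementedError


def run_plan(tasks: dict[str, Task], replan) -> dict:
    """Execute tasks in dependency order; on failure, ask `replan` for a
    revised task graph and start over with the results gathered so far."""
    context: dict = {}
    order = TopologicalSorter({t.name: set(t.depends_on) for t in tasks.values()})
    for name in order.static_order():            # predecessors come first
        try:
            context[name] = execute_task(tasks[name], context)
        except Exception as err:                 # logical inconsistency or bug
            tasks = replan(tasks, failed=name, error=err, context=context)
            return run_plan(tasks, replan)       # dynamic re-planning
    return context
```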

Google DeepMind presents new research on a Scalable Instructable Multiworld Agent (SIMA) that can follow natural-language instructions to carry out tasks in a variety of video game settings.

  • Claude 3 Haiku - Claude 3 Haiku is Anthropic's latest model, boasting unmatched speed and affordability for its intelligence class. Designed for enterprise use, it processes data at a rate of 21K tokens per second, enabling rapid analysis of large datasets for customer support and other time-sensitive tasks. It is cost-effective, at half the price of similar-tier models, and is priced with a 1:5 input-to-output token ratio, making analysis of long texts and images economical. Security is a priority, with measures such as continuous monitoring, secure coding practices, robust encryption, and regular penetration testing. Haiku is accessible via the Claude API or a Claude Pro subscription, with Amazon Bedrock and Google Cloud Vertex AI availability forthcoming (a minimal API example follows this list).

  • Cerebras Chip - Cerebras Systems has launched the Wafer Scale Engine 3 (WSE-3), a 5nm AI chip with 4 trillion transistors and 900,000 AI cores that delivers 125 petaflops of AI compute and can train models of up to 24 trillion parameters. The WSE-3 powers the CS-3 AI supercomputer, which supports up to 1.2 petabytes of memory and can be clustered across 2,048 nodes for a combined 256 exaFLOPS. The CS-3 simplifies large-model training, requiring significantly less code than comparable GPU setups, doubles its predecessor's performance at the same power draw, and is compatible with PyTorch 2.0 and advanced AI techniques. Customers and partners, including Argonne National Laboratory and G42, have recognized the CS-3's advances, contributing to a sizable order backlog and the ongoing construction of large-scale AI supercomputers such as Condor Galaxy 3.

  • Truffle-1 - Truffle-1 is a low-power, 60-watt AI inference engine built for home-server use, bringing personal AI capabilities fully on-device. It supports BLE, WiFi, and USB-C connectivity and runs a variety of models efficiently: Mixtral 8x7B at over 20 tokens per second and Mistral at over 50 tokens per second. Truffle-1 is compatible with a dozen different architectures and can handle models of up to 100 billion parameters, with broader support on the way.
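
For the Claude 3 Haiku item above, here is a minimal sketch of calling the model through Anthropic's Messages API, assuming the `anthropic` Python SDK and the `claude-3-haiku-20240307` model ID that shipped with the release; adjust both to whatever your account exposes.

```python
# Minimal example of calling Claude 3 Haiku via the Anthropic Messages API.
# Assumes the `anthropic` Python SDK is installed and ANTHROPIC_API_KEY is set.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

message = client.messages.create(
    model="claude-3-haiku-20240307",   # Claude 3 Haiku model ID at launch
    max_tokens=512,
    messages=[
        {"role": "user", "content": "Summarize this support ticket in two sentences: ..."}
    ],
)

print(message.content[0].text)  # the model's reply as plain text
```

The same call pattern works for the other Claude 3 models; only the model string changes.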
