Anon Leaks NEW Details About Q*

Q* is believed by some to be a precursor to AGI (Artificial General Intelligence), going beyond what OpenAI’s InstructGPT and GPT-4 can do.

Sam Altman has confirmed Q*’s existence but provided no details. He hinted at a major breakthrough around the time Q* was first rumored. Early speculation was that Q* excelled at math and long-term planning, overcoming limitations of current language models.

A new leak suggests Q* could enhance large language models’ abilities in two main areas: solving complex mathematics and carrying out broader, more sophisticated planning. These enhancements are seen as steps toward AGI, allowing models to perform tasks and solve problems in a more human-like manner. Some believe Q* may just be a sophisticated prompting technique for getting language models to reason step by step, rather than a fundamentally new architecture.

There is uncertainty around the legitimacy of the Q* leaks and whether Q* truly represents an AGI breakthrough or merely an evolution of prompting techniques for language models.
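
If Q* really is just a prompting technique, the underlying idea is not exotic. Below is a minimal, purely illustrative sketch of step-by-step ("chain-of-thought" style) prompting using the standard OpenAI chat API; the prompt wording and model choice are assumptions, and none of this reflects leaked Q* details.

```python
# Minimal sketch of step-by-step prompting -- an illustration of the kind of
# technique speculated about above, NOT the actual Q* method.
# Requires the `openai` package and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

question = "A train travels 120 km in 1.5 hours. What is its average speed?"

response = client.chat.completions.create(
    model="gpt-4",  # any capable chat model works for this illustration
    messages=[
        {"role": "system",
         "content": "Reason step by step, then give the final answer on its own line."},
        {"role": "user", "content": question},
    ],
)

print(response.choices[0].message.content)
```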

OpenAI is expected to release a 'materially better' GPT-5 for its chatbot mid-year, sources say

OpenAI, under Sam Altman, is gearing up to release the next iteration of its generative AI model, GPT-5, with expectations set around mid-year. Despite the absence of an official release date, some enterprise customers have already seen demonstrations of the improvements coming to ChatGPT. The new model promises enhanced capabilities, such as the potential to autonomously perform tasks via AI agents. GPT-5 is still in the training phase and must then undergo rigorous safety testing and "red teaming" to ensure reliability, a process without a defined completion timeline. Revenue from enterprise customers, who rely on the advanced versions of ChatGPT and updates such as the quicker and more accurate GPT-4 and GPT-4 Turbo, is fundamental to OpenAI's growth. Concerns persist over quality degradation in models trained on web-scraped data, and OpenAI hopes GPT-5 will address the reliability issues and user dissatisfaction voiced in online forums. OpenAI is also party to the ongoing debate over the legal frameworks governing large language models' access to copyrighted training data.

  • Biden-Harris Administration Announces Preliminary Terms with Intel to Support Investment in U.S. Semiconductor Technology Leadership and Create Tens of Thousands of Jobs - The U.S. Department of Commerce plans to provide Intel Corporation with up to $8.5 billion under the CHIPS and Science Act as part of President Biden's Investing in America Agenda. This funding aims to bolster the U.S. semiconductor supply chain and advance domestic chip production in Arizona, New Mexico, Ohio, and Oregon. The initiative aligns with efforts to enhance the nation's economic and national security through self-reliance in semiconductor manufacturing. Intel anticipates investing over $100 billion in the U.S., creating approximately 10,000 manufacturing jobs and nearly 20,000 construction jobs over five years. The proposed funding includes a $50 million focus on workforce development. The collaboration represents one of the largest commitments to U.S. semiconductor manufacturing, potentially securing America's technological leadership while generating significant employment and fostering innovation, especially in AI and critical military capabilities.

  • Key Stable Diffusion Researchers Leave Stability AI As Company Flounders - Stability AI, an AI startup valued at $1 billion in 2022, is in turmoil, with key resignations including core researchers behind its flagship Stable Diffusion project. Originally an academic effort at German universities, the Stable Diffusion model became popular for its text-to-image capabilities. The founding team's split, amid accusations of exaggerated contributions to the company, and other high-profile exits are part of a larger crisis. The company, once flush with cash from a $100 million seed round, is facing a cash crunch with high monthly expenses and significant losses; investment firm Coatue has departed the board and Lightspeed's board observer has resigned. Stability AI recently secured a $50 million lifeline from Intel, sold off its acquisition Clipdrop to Jasper, and introduced a paid tier for its AI tools. It is also entangled in copyright infringement lawsuits, adding to the challenges facing the company's future.

  • Samsung Creates Lab to Research Chips for AI’s Next Phase - Samsung Electronics Co. has established a research lab focused on designing a new type of semiconductor specifically for artificial general intelligence (AGI), a long-standing goal in AI development. The lab will initially concentrate on developing chips for large language models, with an emphasis on inference, aiming to create chip designs that offer improved performance and support for increasingly larger models at a lower power and cost. The move comes amidst discussions among Silicon Valley leaders about the potential and risks of AGI, with both OpenAI CEO Sam Altman and Meta Platforms Inc.'s Mark Zuckerberg visiting Seoul recently to discuss AI cooperation with Samsung and other Korean firms. Samsung is striving to catch up to SK Hynix Inc. in providing chips for AI, after the latter gained an early advantage in a new type of advanced memory semiconductor designed for use with Nvidia Corp. chips.

  • OpenAI's chatbot store is filling up with spam - OpenAI's CEO, Sam Altman, unveiled custom chatbots called GPTs with a wide range of capabilities, from coding help to fitness advice. Despite a review system combining automated and human assessment for GPTs listed in OpenAI's GPT Store, TechCrunch exposes a discrepancy between this system and the actual content: numerous GPTs potentially infringe copyrights, with some mimicking Disney or Marvel styles, and others suggesting they can bypass AI detection tools like Turnitin. OpenAI’s own terms prohibit building GPTs promoting academic dishonesty, yet the marketplace hosts tools that claim to rephrase content to avoid AI detectors. Additionally, some GPTs impersonate public figures or authorities, breaching OpenAI's policies against impersonation without consent. The marketplace experiences growing pains, including spam, dubious legality, and attempts to jailbreak AI models, indicating issues with OpenAI's policing of its content despite the potential financial allure akin to Apple's App Store model.

  • U.S. Sues Apple, Accusing It of Maintaining an iPhone Monopoly - The U.S. Justice Department, joined by 16 states and the District of Columbia, has filed an antitrust lawsuit against tech behemoth Apple. The 88-page lawsuit alleges that Apple has engaged in illegal practices to maintain consumer dependence on iPhones and hinder competition, particularly by limiting third-party access to certain iPhone features and prioritizing its own services and products. This lawsuit represents a significant governmental challenge to Apple's practices, which have been instrumental in its growth to one of the most valuable publicly traded companies. The legal action specifically targets Apple's control over the iPhone ecosystem, suggesting it has created a skewed competitive landscape by restricting rival access to iPhone's core functionalities.

  • Google fined €250 Million in French Clash with News Outlets - The French competition watchdog fined Google €250 million for failing to negotiate fair deals with media outlets for publishing their content and using press articles to train its AI technology without informing them. Google had previously been fined €500 million for similar abuses, and the regulator stated that Google had not respected commitments to negotiate deals in good faith. Google, however, believes the fine is disproportionate and does not adequately consider their efforts to address the concerns raised.

  • Here’s Proof You Can Train an AI Model Without Slurping Copyrighted Content - OpenAI once stated to the UK parliament that training high-performance AI without copyrighted content was "impossible." Yet recent developments challenge this narrative. A French-backed group has released a massive, public domain-only AI dataset, and the nonprofit Fairly Trained has certified the first large language model created without violating copyright laws. This certified model, KL3M by 273 Ventures, is specialized for legal and financial contexts and is based on a proprietary corpus of thoroughly vetted data. Additionally, the Common Corpus project has released a huge public domain text corpus aimed at fostering AI development without legal concerns. Despite historical reliance on broad web scraping, these initiatives indicate a shift toward ethical AI training practices that respect intellectual property rights.

Awesome Research Papers

RankPrompt: Step-by-Step Comparisons Make Language Models Better Reasoners - The abstract details RankPrompt, a novel method designed to improve the reasoning accuracy of large language models (LLMs) like ChatGPT by enabling them to rank their own responses. Standard LLMs can make logical errors, and existing solutions are either resource-intensive or unreliable. RankPrompt addresses this by having the LLM generate several candidate responses and compare them against each other, using those comparisons as context for better decision-making. Tests on arithmetic and commonsense reasoning tasks showed up to a 13% performance boost, along with 74% agreement with human preferences in open-ended generation evaluations. The method is also robust to inconsistent and reordered candidate responses.
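
Based only on the high-level description above, a rough sketch of the self-ranking loop might look like the following; the prompts, model name, and the naive answer-parsing are assumptions, not the paper's implementation.

```python
# Sketch of RankPrompt's core idea: sample several candidate answers, then
# ask the same model to compare them and pick the best. Illustrative only.
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-3.5-turbo"  # placeholder model name

def sample_candidates(question: str, n: int = 3) -> list[str]:
    """Generate n diverse candidate answers."""
    resp = client.chat.completions.create(
        model=MODEL, n=n, temperature=0.8,
        messages=[{"role": "user", "content": f"{question}\nThink step by step."}],
    )
    return [choice.message.content for choice in resp.choices]

def pick_best(question: str, candidates: list[str]) -> str:
    """Ask the model to compare the candidates step by step and choose one."""
    numbered = "\n\n".join(f"Answer {i + 1}:\n{c}" for i, c in enumerate(candidates))
    resp = client.chat.completions.create(
        model=MODEL, temperature=0,
        messages=[{"role": "user", "content": (
            f"Question: {question}\n\n{numbered}\n\n"
            "Compare the answers step by step, then reply with only the number "
            "of the best answer."
        )}],
    )
    # Naive parse: take the first digit in the reply.
    digits = [ch for ch in resp.choices[0].message.content if ch.isdigit()]
    return candidates[int(digits[0]) - 1] if digits else candidates[0]

question = "What is 17 * 24?"
print(pick_best(question, sample_candidates(question)))
```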

Evolving New Foundation Models: Unleashing the Power of Automating Model Development - Sakana AI released a report detailing its Evolutionary Model Merge approach, which applies evolutionary techniques to combine diverse open-source models into new ones with user-specified abilities. Initial successes include state-of-the-art Japanese language models adept at mathematical reasoning and vision-language tasks, developed with far less compute than traditional training. The method challenges the costly paradigm of training models from scratch and demonstrates how much capability can be drawn out of existing open-source models through a cost-effective, evolutionary design process, with the potential to broaden and accelerate AI development.
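
As a conceptual illustration of parameter-space merging driven by an evolutionary search (only one part of what the report describes, with made-up hyperparameters and a user-supplied fitness function), a toy version could look like this:

```python
# Toy evolutionary merge: evolve per-model mixing coefficients, score each
# merged model with a fitness function (e.g. dev-set accuracy), keep the best.
# Sakana AI's actual method is far richer (it also searches data-flow space);
# this sketch only shows the basic idea.
import copy
import torch

def merge_state_dicts(models, coeffs):
    """Weighted average of several models' floating-point parameters."""
    merged = copy.deepcopy(models[0].state_dict())
    for key, value in merged.items():
        if value.is_floating_point():
            merged[key] = sum(c * m.state_dict()[key] for c, m in zip(coeffs, models))
    return merged

def evolve_merge(models, fitness_fn, generations=20, pop_size=8):
    """Simple evolutionary search over normalized mixing coefficients."""
    best_coeffs, best_score = None, float("-inf")
    population = [torch.rand(len(models)) for _ in range(pop_size)]
    for _ in range(generations):
        scored = []
        for coeffs in population:
            coeffs = coeffs / coeffs.sum()          # normalize to sum to 1
            candidate = copy.deepcopy(models[0])
            candidate.load_state_dict(merge_state_dicts(models, coeffs))
            score = fitness_fn(candidate)           # user-supplied evaluation
            scored.append((score, coeffs))
            if score > best_score:
                best_score, best_coeffs = score, coeffs
        # keep the top half and mutate it to form the next generation
        scored.sort(key=lambda pair: pair[0], reverse=True)
        parents = [c for _, c in scored[: pop_size // 2]]
        children = [p + 0.1 * torch.randn_like(p) for p in parents]
        population = [p.clamp(min=1e-3) for p in parents + children]
    return best_coeffs, best_score
```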

Are ChatGPT and GPT-4 General-Purpose Solvers for Financial Text Analytics? - The abstract discusses an empirical study exploring the effectiveness of recent large language models (LLMs) like ChatGPT and GPT-4 within the financial domain, a niche that significantly affects numerous analytical tasks. The research covers diverse financial text analysis problems using eight benchmark datasets across five task categories. The findings detail both the capabilities and limitations of current models by benchmarking them against specialized, fine-tuned approaches as well as domain-specific pretrained models. The authors aim to provide insight into how current models perform in finance, possibly paving the way for future enhancements. Wharton professor Ethan Mollick had an interesting comment on the paper: “This remains one of the most consequential experiments in AI: Bloomberg trained a GPT-3.5 class AI on their own financial data last year… only to find that GPT-4 8k, without specialized finance training, beat it on almost all finance tasks. Hard to beat the frontier models.”

Awesome New Tools

InducedAI - opening early access to the first public and free autonomous web agent API. The platform lets businesses automate complex, browser-native workflows using plain-English instructions, without relying on site-specific APIs.

GPT Prompt Engineer - an agent that creates optimal GPT prompts. The system generates many candidate prompts, tests them against each other in a ranked tournament, and returns the best one.
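
A condensed sketch of that workflow follows; it mirrors the described idea, not the project's actual source code, and the model name, prompts, and Elo-style scoring are assumptions.

```python
# Sketch of the generate -> tournament -> select workflow described above.
# Candidate prompts are pitted against each other on test cases, with the
# model itself acting as judge and simple Elo ratings tracking the results.
from itertools import combinations
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4"  # placeholder

def generate_prompts(task: str, n: int = 5) -> list[str]:
    """Ask the model for several candidate system prompts for the task."""
    resp = client.chat.completions.create(
        model=MODEL, n=n, temperature=0.9,
        messages=[{"role": "user",
                   "content": f"Write one system prompt for this task: {task}"}],
    )
    return [choice.message.content for choice in resp.choices]

def run_prompt(prompt: str, test_case: str) -> str:
    """Apply a candidate prompt to a test case and return the output."""
    resp = client.chat.completions.create(
        model=MODEL,
        messages=[{"role": "system", "content": prompt},
                  {"role": "user", "content": test_case}],
    )
    return resp.choices[0].message.content

def judge(task: str, output_a: str, output_b: str) -> str:
    """Return 'A' or 'B' for whichever output handles the task better."""
    resp = client.chat.completions.create(
        model=MODEL, temperature=0,
        messages=[{"role": "user", "content": (
            f"Task: {task}\n\nOutput A:\n{output_a}\n\nOutput B:\n{output_b}\n\n"
            "Which output is better? Reply with exactly 'A' or 'B'."
        )}],
    )
    return resp.choices[0].message.content.strip()[:1].upper()

def best_prompt(task: str, test_cases: list[str]) -> str:
    prompts = generate_prompts(task)
    ratings = {p: 1200.0 for p in prompts}          # Elo-style ratings
    for a, b in combinations(prompts, 2):
        for case in test_cases:
            verdict = judge(task, run_prompt(a, case), run_prompt(b, case))
            winner, loser = (a, b) if verdict == "A" else (b, a)
            expected = 1 / (1 + 10 ** ((ratings[loser] - ratings[winner]) / 400))
            ratings[winner] += 32 * (1 - expected)
            ratings[loser] -= 32 * (1 - expected)
    return max(ratings, key=ratings.get)
```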

TacticAI: an AI assistant for football tactics - Google DeepMind has introduced TacticAI, an AI assistant developed in collaboration with Liverpool Football Club and published in Nature Communications. The system uses a geometric deep learning framework to provide tactical insights on corner kicks, helping predict outcomes and suggest tactical adjustments. The collaboration aims to advance AI for sports analytics and explore its wider implications in other fields.
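
For readers curious what a model over corner-kick scenes might look like structurally, here is a minimal, hypothetical graph-neural-network sketch (players as nodes in a fully connected graph, with an invented feature set and prediction head). The published TacticAI system uses group-equivariant geometric deep learning and is considerably more elaborate.

```python
# Hypothetical sketch: a small GNN over a corner-kick scene using PyTorch Geometric.
import torch
from torch_geometric.nn import GCNConv, global_mean_pool

class CornerKickGNN(torch.nn.Module):
    """Predict, e.g., the probability that a corner leads to a shot."""
    def __init__(self, in_dim: int = 6, hidden: int = 64):
        super().__init__()
        self.conv1 = GCNConv(in_dim, hidden)
        self.conv2 = GCNConv(hidden, hidden)
        self.head = torch.nn.Linear(hidden, 1)

    def forward(self, x, edge_index, batch):
        h = self.conv1(x, edge_index).relu()
        h = self.conv2(h, edge_index).relu()
        h = global_mean_pool(h, batch)          # one vector per corner-kick scene
        return torch.sigmoid(self.head(h))

# 22 players, each with invented features (x, y, vx, vy, team flag, height),
# connected as a fully connected directed graph.
x = torch.randn(22, 6)
pairs = torch.combinations(torch.arange(22), r=2).t()
edge_index = torch.cat([pairs, pairs.flip(0)], dim=1)
batch = torch.zeros(22, dtype=torch.long)

print(CornerKickGNN()(x, edge_index, batch))    # shape [1, 1] probability
```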

Awesome New Launches

API Support for Gemini 1.5 Pro for Developers - Onboarded developers can try out Gemini 1.5 Pro, with its 1M-token context window, via the API or in the AI Studio UI.
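
For onboarded developers, a minimal call through the Python SDK looks roughly like this; the exact model identifier and availability depend on your access tier, so treat the name below as an assumption.

```python
# Minimal Gemini 1.5 Pro call via the google-generativeai Python SDK.
# Requires: pip install google-generativeai, plus an API key from AI Studio.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # or read the key from an environment variable

# Model identifier may differ depending on rollout/access tier.
model = genai.GenerativeModel("gemini-1.5-pro-latest")

# The 1M-token context window allows very long inputs, e.g. a book-length
# transcript pasted into a single prompt.
response = model.generate_content("Summarize the key points of this transcript: ...")
print(response.text)
```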

Open Interpreter Launches the 01 Developer Preview - The 01 Light is a portable voice interface that controls your home computer: it can see your screen, use your apps, and learn new skills, giving users a hands-free, voice-driven way to interact with their machines.
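
The 01 builds on the open-source Open Interpreter project; driving the underlying Python library directly looks like this (the 01 device's voice workflow itself is not shown, and the example instruction is hypothetical):

```python
# Driving the underlying Open Interpreter library (not the 01 hardware).
# Requires: pip install open-interpreter
from interpreter import interpreter

# The agent writes and runs code locally to carry out natural-language requests.
interpreter.chat("Create a folder named 'reports' on my desktop and confirm it exists.")
```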
