
Meta's 405B Parameter AI Release, Nvidia's China-Market Chip, and OpenAI's Financial Struggles

Meta democratizes AI with Llama 3.1's 405B parameter open-source model, challenging closed systems. Nvidia clears Samsung's HBM3 memory for its China-market H20 GPU, and OpenAI faces a potential $5 billion loss in 2024 amid rising costs. Explore these developments shaping the AI landscape and future tech innovations.

Meta challenges closed AI models with its 405B parameter release, betting on democratization and ecosystem growth to reshape the AI landscape

Meta recently unveiled Llama 3.1, a 405 billion parameter open-source AI model that rivals top closed-source alternatives. In an interview, Mark Zuckerberg emphasized Meta's strategy of democratizing AI through open-source access, likening it to Microsoft's "scorched earth" approach and comparing its potential impact to Linux in the software industry. This release allows for extensive customization, fostering innovation and reducing reliance on competitors. Zuckerberg highlighted the importance of balancing openness with security and ethical considerations, and pointed to the potential for diverse, tailored AI applications. By promoting community scrutiny and an inclusive AI ecosystem, Meta aims to drive widespread innovation and deliver significant societal and economic benefits in the future.

  • Opinion: Who will control the future of AI? - Sam Altman, CEO of OpenAI, highlights the pivotal moment humanity faces in deciding the direction of artificial intelligence (AI) development. He frames it as a strategic choice between a democratic and an authoritarian future, driven by who controls AI. Pointing to the risks of authoritarian governments' influence on AI, he outlines a U.S.-led strategy to maintain democratic leadership in AI, including robust security measures, expansive infrastructure development, coherent commercial diplomacy policies, and established norms for AI development and deployment. Altman suggests collaborative models, such as an international AI safety agency, and stresses the importance of a democratic vision in guiding the future of AI to benefit society at large.

  • Open Source AI Is the Path Forward - High-performance computing evolved from closed Unix systems to open source Linux, revolutionizing modern cloud and mobile OS tech. This shift parallels the rise of Llama, an open source AI model, with Llama 3.1 405B challenging top closed AI models through openness, cost efficiency, and easy fine-tuning. Supported by major companies like Amazon and Nvidia, Llama fosters a vibrant developer ecosystem, with Meta advocating open source AI for its strategic business benefits and global security advantages. Mark Zuckerberg emphasizes that open source AI's transparency and collective oversight enhance safety and stimulate decentralized innovation and economic growth amidst geopolitical challenges.

  • What’s New Across Our AI Experiences - Meta AI is now available in 22 countries, supporting languages such as French, German, Hindi, Italian, Portuguese, and Spanish, and integrated across WhatsApp, Instagram, Messenger, and Facebook. New features include "Imagine me" prompts for personalized image generation, currently in US beta, and creative editing tools for image modifications. The advanced Llama 405B model enhances answers for complex queries in math and coding. Additionally, Meta AI will soon offer hands-free control and real-time updates on Meta Quest, enhancing interaction with physical environments.

  • The Backlash Against AI Scraping Is Real and Measurable - In the past year, there has been a notable surge in websites implementing measures to restrict access to OpenAI and similar AI scraper bots. This growing trend indicates a proactive effort by website administrators to address concerns about automated data extraction and the implications for user privacy, content integrity, and intellectual property rights.

  • How to access Chinese LLM chatbots across the world - Qwen, an AI model from Alibaba, has reached the top spot on Hugging Face's Open LLM Leaderboard, surpassing models from other tech giants. Tencent introduced Hunyuan-DiT on the same platform for text-based image generation. Additionally, Yi, from Kai-Fu Lee's 01.AI, has showcased a chatbot and image-analysis AI but currently faces downtime issues. ModelScope, Alibaba's answer to Hugging Face, fosters a space for the Chinese AI community and gives local and international users access to Chinese LLMs, including Baichuan's models. OpenCompass's LLM Arena on ModelScope enables head-to-head comparison of AI models' text responses but lacks support for image generation or analysis, even though some models have multimodal capabilities. The Arena also features prominent Chinese AIs alongside some Western models for comparison.

  • Google Is the Only Search Engine That Works on Reddit Now Thanks to AI Deal - DuckDuckGo, Bing, Mojeek, and similar search engines can no longer display full search results from Reddit, limiting the visibility and accessibility of the platform's content through these alternative search providers.

  • The AI job interviewer will see you now - AI interviews are rising in prevalence, especially for high-volume roles, with platforms built on models like OpenAI's enabling more dynamic, responsive conversations. A U.S. survey found that 10% of companies already use AI in hiring, a figure set to grow. The approach, which aims to reduce human bias and improve efficiency, is spreading among corporations in China and India. Despite benefits such as time savings of up to 80% and reduced prejudice, concerns about underlying algorithmic biases persist. Experts emphasize the need for transparency and thorough bias-mitigation strategies, while some candidates seek to exploit the AI's capabilities to their advantage during interviews.

  • Errol Morris on whether you should be afraid of generative AI in documentaries - Acclaimed documentarian Errol Morris reflects on the nuances of representing reality in film amid debates over the ethics of reenactments and generative AI in documentaries. Morris, known for his groundbreaking work on "The Thin Blue Line," which challenged cinematic norms and helped exonerate Randall Dale Adams, remains skeptical about the ability of rigid filming standards to convey truth. Recently, discussions have surfaced around AI's role in potentially deceiving audiences, as seen in controversies like the Netflix series "What Jennifer Did." Morris comments on the complexity and responsibility of filmmakers to discern truth amidst falsehood in their storytelling, while expressing a preference for traditional methods over AI to maintain a focus on the examination of reality. Despite the changing landscape, Morris upholds the necessity to stay critical in distinguishing film from reality and to aim for an authentic connection to the real world through cinematic expression.

  • The love letter generator that foretold ChatGPT - In the early 1950s, Alan Turing and Christopher Strachey, two pioneering gay computer scientists from the University of Manchester, developed one of the world's first examples of computer-generated writing—computer-crafted love letters. The Manchester University Computer (MUC) was utilized to create these gender-neutral letters, which formed a template-based program with randomized elements. The work was groundbreaking in exploring the capability of machine intelligence to generate original content and highlighted the potential of artificial intelligence, years before it became mainstream. This innovation also served as a discreet outlet for expressing queer desire during a time when homosexuality was criminalized in England. Turing and Strachey exchanged ideas and collaborated on various AI experiments, including the programming of the Mark 1 computer to create music and play games. Their work was not only a technological triumph but also a personal act of defiance against the oppressive laws of the time.

  • The Uneven Distribution of AI’s Environmental Impacts - The environmental cost of AI development, particularly for large language models (LLMs), includes significant electricity use and substantial carbon emissions, as well as the depletion of freshwater resources due to data center cooling processes. As AI's prevalence grows, these effects are expected to increase, exacerbating environmental inequalities across different regions. However, strategies such as the distribution of AI computing across diverse data centers present opportunities to mitigate these impacts and promote environmental justice. Amidst AI's potential to address global challenges, the escalating resource demands of complex neural networks are drawing attention to the sustainability of AI's expansion.

  • Bing’s AI redesign shoves the usual list of search results to the side - Microsoft is revamping Bing's search experience, integrating AI-generated answers prominently while relegating classic search results to a smaller column on the right. The update, in limited release, features comprehensive AI summaries and additional content such as videos and charts relevant to queries. For example, a search for "What is a spaghetti western?" pulls up an informative blurb with bullet points and media. Critics are concerned about truncating traditional search snippets and the potential for inaccuracies, noting that Google's similar AI Overviews have faced the same issues. Microsoft claims this approach better satisfies user intent and maintains website traffic while listing sources for transparency.

  • GE HealthCare taps Amazon Web Services to build generative AI for medical use - GE HealthCare has partnered with Amazon Web Services (AWS) to advance the use of generative artificial intelligence in analyzing abundant medical data, which often remains underutilized due to disparate storage formats and systems. Leveraging AWS's technical infrastructure, such as Amazon Bedrock and Amazon SageMaker, GE HealthCare aims to create AI models to improve efficacy in medical screenings, diagnoses, and operational workflows. This collaboration is expected to accelerate web-based medical imaging app development, enhancing data accessibility for medical professionals. The initiative also includes exploring internal productivity enhancements using tools like Amazon Q Developer for code generation. While initial access to these AI solutions will be exclusive to GE HealthCare staff and clients, there are plans for broader future distribution, with a commitment to rigorous testing and privacy standards, ensuring no training on customer data.

  • ‘Tandem drift’ team achieves autonomous milestone - The video demonstrates an innovative project undertaken by Chris Gerdes, a professor emeritus of mechanical engineering at Stanford and an expert on autonomous vehicles, in collaboration with the Toyota Research Institute. Gerdes and his team have created the world’s first autonomous Tandem Drift team, a complex driving maneuver where two cars drift in sync, normally requiring intense driver skill. In this scenario, both the lead and the chase cars operate without drivers, utilizing artificial intelligence to execute and follow an intricate set of movements with precision. The cars communicate via Wi-Fi and employ GPS for navigation, achieving distances as close as 10 inches apart while reaching speeds of up to 35 mph. The goal of this experiment is to leverage the insights gained from the extreme control required for tandem drifting to enhance the safety and performance of autonomous vehicles on public roads.

  • Google is going all-in on self-driving vehicles - Alphabet is substantially investing in Waymo, its self-driving car unit, allocating up to $5 billion for continued development and expansion. Waymo has experienced significant achievements, providing over 2 million trips and accumulating over 20 million autonomous miles on public roadways. Recent expansions include testing fully autonomous vehicles in parts of California and offering driverless taxi services in Los Angeles, Austin, and Phoenix. Despite their progress, Waymo has faced regulatory scrutiny and local opposition in California, with investigations into numerous incidents involving Waymo vehicles, including crashes and potential traffic violations. Co-CEOs of Waymo have expressed gratitude for the investment and confidence in their progress and technology.

  • Colin Kaepernick Launches New AI Startup - Colin Kaepernick is launching Lumi Story AI, an artificial intelligence company aimed at helping aspiring creators overcome industry barriers. Backed by Alexis Ohanian's venture capital firm Seven Seven Six, Lumi provides tools like AI-generated graphics and text, addressing creators' skill gaps and high production costs. Launched in beta for comic books, Lumi plans to expand to other creative domains. Kaepernick emphasizes that the platform empowers creators by making AI an ally rather than a threat, reflecting his broader commitment to democratizing storytelling and supporting creative independence.

  • FTC to Examine if Companies Raise Prices Using Consumer Surveillance - The Federal Trade Commission (FTC) is investigating how companies may use AI and consumer data to personalize and vary prices. Targeting firms like Mastercard and JPMorgan Chase, the FTC's study seeks to understand the opaque algorithms driving these pricing strategies. Chair Lina Khan emphasizes the need for transparency in how consumer data is used for pricing. The investigation highlights concerns over "surveillance pricing" and aims to uncover potential privacy and competition risks posed by these practices, echoing past FTC studies on similar market dynamics.

  • Condé Nast Sends Cease-and-Desist to Perplexity AI Over Data Scraping - Condé Nast has issued a cease-and-desist letter to Perplexity AI, accusing the startup of using content from its publications, including The New Yorker, Vogue, and Wired, without permission. This action follows similar accusations from Forbes and an investigation by Amazon over alleged data-scraping violations. Condé Nast CEO Roger Lynch has been vocal about the threat AI poses to media companies, highlighting the potential for significant business disruptions before legal resolutions. These issues reflect a broader trend of companies accusing AI firms of unauthorized data usage for training models.

  • Senators Demand OpenAI Detail Efforts to Make Its AI Safe - Following a Washington Post report, five U.S. senators, led by Brian Schatz, have demanded that OpenAI provide information on its safety measures for AI development. Concerns were raised about rushing safety tests for GPT-4 Omni and potentially penalizing employees who reported risks. The senators seek details on OpenAI’s commitments to safe AI, including dedicating resources to AI safety research and allowing independent safety assessments. The inquiry highlights fears that AI firms prioritize profit over safety and calls for stronger oversight to ensure AI technologies do not cause harm.

  • Nvidia Clears Samsung's HBM3 Chips for Use in China-Market Processor - Nvidia has approved Samsung Electronics' fourth-generation HBM3 chips for use in its H20 graphics processing unit (GPU), designed for the Chinese market to comply with U.S. export controls. However, Samsung's HBM3 chips will not yet be used in Nvidia's other AI processors, and further tests are needed for Samsung's HBM3E chips. The approval comes amid high demand for GPUs driven by the AI boom, with Nvidia seeking to diversify its supplier base. Samsung's HBM3 chips may be supplied for the H20 processor as early as August, despite initial struggles with heat and power consumption issues.

  • OpenAI Faces Potential $5 Billion Loss in 2024, Risk of Running Out of Cash - OpenAI could lose up to $5 billion in 2024 due to high expenses, including $7 billion on AI training and $1.5 billion on staffing, far surpassing rivals like Anthropic. This financial strain might necessitate another funding round within 12 months. Despite raising over $11 billion in seven funding rounds, OpenAI's high burn rate poses a risk. The company, which launched ChatGPT in 2022, recently introduced the GPT-4o Mini and is developing a new model named "Strawberry." Regulatory challenges and safety concerns have also surfaced, with U.S. lawmakers questioning OpenAI's transparency and employment practices.

  • Elon Musk Sets 2026 Optimus Sale Date. Here’s Where Other Humanoid Robots Stand - Tesla CEO Elon Musk has announced that the Optimus humanoid robot will be available for sale in 2026, with low production starting in 2025 for Tesla's internal use. This announcement follows significant investments in the project, reflecting Musk's vision of high demand for humanoid robots. Other companies in the field include 1X, Agility Robotics, Apptronik, Boston Dynamics, Figure, and Sanctuary AI, each advancing in different aspects of humanoid robotics. These companies are at various stages of development and commercialization, with some already engaging in pilot projects across diverse industries.

  • Microsoft’s AI Assistants Will Revolutionize the Office — One Day - Microsoft is betting on AI assistants, known as Copilots, to transform workplaces, automating tasks and generating text and images. However, early adopters note significant challenges in deployment, including the need to clean up corporate data and extensive employee training. Copilots excel at distilling information but struggle with contextual understanding and handling multiple apps. Despite these issues, companies like Ernst & Young and Lumen Technologies are investing in the technology. While widespread adoption may take time, analysts believe Copilot will eventually bring significant recurring revenue for Microsoft.

  • Meta’s Oversight Board Spotlights the Social Network’s Problem with Explicit AI Deepfakes - Meta's Oversight Board criticized the company's inadequate response to AI-generated explicit images of female public figures, urging policy updates for clearer guidelines. The Board found Meta's removal of such content slow and inconsistent, highlighting the need for more explicit rules to address non-consensual AI-manipulated images. Recommendations include modifying language in policies and ensuring rapid review processes to mitigate harm. The Board emphasized the severe consequences of deepfakes, particularly in conservative societies.

  • Amazon Racing to Develop AI Chips Cheaper, Faster Than Nvidia's, Executives Say - Amazon is intensifying efforts to develop its own AI processors to reduce dependency on Nvidia's costly chips. Inside its Austin, Texas lab, engineers are testing a new server design equipped with these homegrown AI chips. Amazon's goal is to offer more cost-effective computing solutions for complex calculations and data processing through its cloud business, Amazon Web Services (AWS). Competing with Microsoft and Alphabet, Amazon aims to meet growing customer demand for cheaper AI computing alternatives.

Awesome Research Papers

  • Stable Audio Open - Stable Audio Open is a text-to-audio synthesis model comprising an autoencoder for waveform compression, T5-based text embedding for text conditioning, and a transformer-based diffusion model for audio generation in latent space. It creates high-quality, variable-length stereo audio up to 47 seconds at 44.1kHz. This open variant, differing from Stable Audio 2.0 primarily through dataset and text conditioning techniques, was trained on nearly half a million Creative Commons recordings from Freesound and the Free Music Archive. The model supports a range of sound design applications such as sound effects and Foley for media and entertainment and can be fine-tuned to specific project needs with local training on A6000 GPUs. A toy sketch of this generation pipeline appears after this list.

  • The Llama 3 Herd of Models - The paper introduces Llama 3, a new series of advanced AI foundation models with capabilities in multi-language support, coding, reasoning, and tool usage. Its most expansive variant boasts 405 billion parameters and can process up to 128,000 tokens, ranking on par with established models like GPT-4 across numerous tasks. Llama 3, both pre-trained and post-trained, has been made publicly available along with Llama Guard 3, enhancing safety for input/output interactions. Additionally, the paper details experimental integration of image, video, and speech processing, yielding competitive results, though these multimodal models are still under refinement and not broadly released. A minimal loading sketch appears after this list.

  • ChatQA 2: Bridging the Gap to Proprietary LLMs in Long Context and RAG Capabilities - ChatQA 2 is a model built on Llama 3, designed to close the gap between open-access LLMs and leading proprietary models such as GPT-4-Turbo in long-context understanding and retrieval-augmented generation (RAG). Through a detailed training process, ChatQA 2's context window was expanded from 8K to 128K tokens, and its instruction-following, RAG, and long-context comprehension were enhanced. The model rivals the accuracy of GPT-4-Turbo on various tasks and even exceeds it on the RAG benchmark, thanks to an advanced long-context retriever that mitigates context fragmentation. The paper also presents extensive comparisons between RAG and long-context solutions across top LLMs. A toy RAG sketch appears after this list.

  • Scalify: scale propagation for efficient low-precision LLM training - Low-precision formats such as float8 are arriving in machine-learning hardware to boost computational efficiency for large language model training and inference, but their adoption has been hindered by the complex techniques needed to match the accuracy of higher-precision training. This work introduces "Scalify," a scale-propagation method for computational graphs that extends and formalizes existing tensor-scaling techniques. Scalify enables direct float8 matrix multiplication and gradient representation, and supports float16 optimizer state storage. A toy scaling sketch appears after this list.

  • The Prompt Report: A Systematic Survey of Prompting Techniques - This paper provides a structured understanding of prompts in generative AI. It establishes a taxonomy of prompting techniques, covering 58 text-only techniques and 40 techniques for other modalities. The study offers a comprehensive vocabulary of 33 terms and conducts a meta-analysis of the natural-language prefix-prompting literature, aiming to clarify terminology and improve the ontological understanding of prompts in AI development and usage. A prefix-prompting example appears after this list.
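
The Stable Audio Open entry above describes a three-stage pipeline: T5-based text conditioning, transformer diffusion in latent space, and autoencoder decoding to 44.1 kHz stereo. The sketch below is a toy, runnable outline of that flow; every function, shape, and constant besides the sample rate and clip length is an illustrative placeholder, not the released model or its API.

```python
# Toy outline of the Stable Audio Open pipeline: text conditioning ->
# latent diffusion -> waveform decoding. All functions and shapes here are
# illustrative placeholders, not the released model or its real API.
import numpy as np

SAMPLE_RATE = 44_100           # 44.1 kHz stereo output
MAX_SECONDS = 47               # maximum clip length the model supports
LATENT_DIM, LATENT_STEPS = 64, 1024   # made-up latent shape

def embed_prompt(prompt: str) -> np.ndarray:
    """Stand-in for the T5-based text encoder."""
    rng = np.random.default_rng(abs(hash(prompt)) % 2**32)
    return rng.normal(size=(77, 768))

def diffuse_latents(text_emb: np.ndarray, steps: int = 50) -> np.ndarray:
    """Stand-in for the transformer diffusion model working in latent space."""
    latents = np.random.default_rng(0).normal(size=(LATENT_DIM, LATENT_STEPS))
    for _ in range(steps):             # iterative denoising, purely schematic
        latents *= 0.99
    return latents

def decode_audio(latents: np.ndarray, seconds: float) -> np.ndarray:
    """Stand-in for the autoencoder decoder back to a stereo waveform."""
    n_samples = int(min(seconds, MAX_SECONDS) * SAMPLE_RATE)
    return np.zeros((2, n_samples))    # (channels, samples)

audio = decode_audio(diffuse_latents(embed_prompt("rain on a tin roof")), seconds=10)
print(audio.shape)  # (2, 441000)
```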
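
For the Llama 3 herd, the released weights are distributed on Hugging Face behind Meta's license. A minimal loading sketch follows, assuming the gated repo has been accepted, that torch and accelerate are installed, and that the repo id below matches the published name; the 8B sibling is used because the 405B variant is impractical to run locally.

```python
# Minimal sketch: text generation with a Llama 3.1 checkpoint via Hugging
# Face transformers. Assumes the gated license has been accepted and that
# torch/accelerate are installed; the repo id is our assumption.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="meta-llama/Meta-Llama-3.1-8B-Instruct",  # smaller sibling of the 405B model
    device_map="auto",
)

out = generator("Explain retrieval-augmented generation in one sentence.", max_new_tokens=64)
print(out[0]["generated_text"])
```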
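
The RAG-versus-long-context comparison in ChatQA 2 can be illustrated with a toy retrieval loop. The bag-of-words "retriever" below is only a stand-in for the long-context retriever the paper describes; the assembled prompt is what would then be handed to an LLM.

```python
# Toy retrieval-augmented generation (RAG) loop: embed documents, retrieve
# the closest one, and stuff it into the prompt. The bag-of-words retriever
# is a stand-in, not the retriever used by ChatQA 2.
import numpy as np
from collections import Counter

docs = [
    "Llama 3 extends the context window to 128K tokens.",
    "Retrieval-augmented generation fetches passages before answering.",
    "Float8 training needs careful tensor scaling.",
]

def embed(text: str, vocab: list[str]) -> np.ndarray:
    counts = Counter(text.lower().split())
    return np.array([counts[w] for w in vocab], dtype=float)

vocab = sorted({w for d in docs for w in d.lower().split()})
doc_vecs = np.stack([embed(d, vocab) for d in docs])

def retrieve(query: str, k: int = 1) -> list[str]:
    q = embed(query, vocab)
    sims = doc_vecs @ q / (np.linalg.norm(doc_vecs, axis=1) * (np.linalg.norm(q) + 1e-9) + 1e-9)
    return [docs[i] for i in np.argsort(-sims)[:k]]

question = "How large is the Llama 3 context window?"
context = "\n".join(retrieve(question, k=1))
prompt = f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"
print(prompt)  # this prompt would then be sent to the LLM
```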
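
The core idea behind Scalify, keeping an explicit scale next to a narrow-format tensor and propagating it through operations, can be emulated in a few lines. Float16 stands in for hardware float8 here, and the scheme is a simplification of what the paper formalizes over full computational graphs.

```python
# Schematic per-tensor scaling: store low-precision data plus a scale, and
# propagate the scale through a matmul. float16 emulates float8; this is a
# simplification of Scalify, not the paper's implementation.
from dataclasses import dataclass
import numpy as np

@dataclass
class ScaledTensor:
    data: np.ndarray   # low-precision payload (emulated with float16)
    scale: float       # single scale factor kept in high precision

def to_scaled(x: np.ndarray) -> ScaledTensor:
    scale = float(np.max(np.abs(x))) or 1.0
    return ScaledTensor((x / scale).astype(np.float16), scale)

def scaled_matmul(a: ScaledTensor, b: ScaledTensor) -> ScaledTensor:
    # Multiply the narrow-format payloads; combine the scales outside the matmul.
    data = a.data.astype(np.float32) @ b.data.astype(np.float32)
    return to_scaled(data * (a.scale * b.scale))

x = np.random.randn(4, 8).astype(np.float32) * 50.0   # large dynamic range
w = np.random.randn(8, 2).astype(np.float32) * 0.01   # small dynamic range
y = scaled_matmul(to_scaled(x), to_scaled(w))
err = np.max(np.abs(y.data * y.scale - x @ w)) / np.max(np.abs(x @ w))
print(f"max relative error: {err:.4f}")  # stays small despite the narrow storage format
```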
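
As a concrete instance of one technique in the Prompt Report's taxonomy, here is a few-shot prefix prompt assembled in plain Python; the reviews and labels are invented for illustration.

```python
# Few-shot prefix prompting: worked examples are prepended to the query
# before it is sent to a model. Examples and labels are made up.
examples = [
    ("The battery dies within an hour.", "negative"),
    ("Setup took thirty seconds and it just works.", "positive"),
]

def few_shot_prompt(query: str) -> str:
    shots = "\n".join(f"Review: {text}\nSentiment: {label}" for text, label in examples)
    return f"{shots}\nReview: {query}\nSentiment:"

print(few_shot_prompt("The screen scratches far too easily."))
```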

Introducing Llama 3.1: Our most capable models to date - Meta's Llama 3.1 405B model, the largest open-source AI model, rivals top closed-source models with its upgraded architecture trained on 15 trillion tokens, optimized for stability and scalability. It boasts a 128K context length and supports eight languages, enhancing capabilities in knowledge, translation, and more. Alongside updates to the 8B and 70B models, Meta provides tools like Llama Guard 3 and Prompt Guard for safer development, emphasizing open-source principles by making model weights downloadable and customizable. With over 25 partners, Meta's platform is primed for developer engagement and innovation in AI applications.

Large Enough - Mistral Large 2 is an advanced AI model offering better cost efficiency and faster performance, with support for a wide range of natural and coding languages. It has 123 billion parameters, is optimized for long-context, single-node inference, and achieves 84.0% accuracy on the MMLU benchmark. Improvements include better reasoning, reduced "hallucination" in outputs, and more concise, accurate instruction following, along with strong performance on code-generation and problem-solving benchmarks. The model is released under the Mistral Research License, is integrated with cloud services such as Google Cloud Platform, and is available for research and non-commercial use through la Plateforme.
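
A hedged sketch of querying Mistral Large 2 through la Plateforme's chat-completions API follows; the endpoint URL, model alias, and response shape are assumptions about the public API, and an API key from la Plateforme is required.

```python
# Hedged sketch of calling Mistral Large 2 via la Plateforme. Endpoint,
# model alias, and response shape below are assumptions, not confirmed API
# details; set MISTRAL_API_KEY before running.
import os
import requests

resp = requests.post(
    "https://api.mistral.ai/v1/chat/completions",        # assumed endpoint
    headers={"Authorization": f"Bearer {os.environ['MISTRAL_API_KEY']}"},
    json={
        "model": "mistral-large-latest",                  # assumed alias for Large 2
        "messages": [{"role": "user", "content": "Write a haiku about long context windows."}],
        "max_tokens": 64,
    },
    timeout=60,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```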

Sakana AI Drops Image Models to Generate Japan’s Traditional Ukiyo-e Artwork - Tokyo-based startup Sakana AI has introduced two new AI image-generation models, Evo-Ukiyoe and Evo-Nishikie, focused on recreating Japan's historic ukiyo-e art form. Available on Hugging Face, these models generate text-to-image and image-to-image outputs reminiscent of traditional ukiyo-e, even modernizing it with contemporary elements. Evo-Ukiyoe generates ukiyo-e style images from text prompts, while Evo-Nishikie colorizes monochrome prints. Developed using an evolutionary model merging technique and fine-tuned with over 24,000 artworks from Ritsumeikan University, these models aim to revive and globally popularize Japanese cultural heritage.

Adobe announces new AI features for Illustrator and Photoshop - Adobe is pushing the envelope with AI, enhancing Illustrator and Photoshop capabilities. Illustrator now boasts Generative Shape Fill for easy vector fill-ins via text prompts and a Mockup feature for realistically applying designs onto objects. It also simplifies identifying and editing typefaces, and adds Text to Pattern for effortlessly crafting backgrounds. Photoshop introduces the Selection Brush and Adjustment Brush tools to streamline repetitive tasks, along with Type Tool and Contextual Taskbar improvements, and integrates Adobe Firefly for AI-generated content. Adobe emphasizes that its generative AI is trained only on content it has permission to use, such as Adobe Stock.

Stability AI Announces Stable Video 4D - Stability AI has launched Stable Video 4D, an innovative video-to-video generation model that allows users to upload a single video and receive dynamic novel-view videos from eight new angles. This model enhances versatility and creativity in video production.

AI, Go Fetch! New NVIDIA NeMo Retriever Microservices Boost LLM Accuracy and Throughput - NVIDIA has introduced four new NeMo Retriever NIM inference microservices aimed at improving the accuracy of AI applications by letting developers efficiently use proprietary data. These microservices, which integrate with the Llama 3.1 model collection, are designed to scale AI workflows toward more autonomous operation with minimal supervision. NeMo Retriever connects custom models to diverse business data, enabling precise information retrieval for more responsive AI, which is key for applications such as AI agents, chatbots, security analysis, and complex data insight extraction. NVIDIA reports that NeMo Retriever produces roughly 30% fewer inaccurate responses than alternatives by combining embedding (to vectorize data) with reranking (to score data relevance).
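
The embed-then-rerank pattern described above can be sketched independently of the NIM endpoints. Both scorers below are toy stand-ins, a hashed bag-of-words embedder and a token-overlap reranker, intended only to show how the two stages fit together.

```python
# Toy embed -> retrieve -> rerank pipeline; the embedder and reranker are
# stand-ins for the NeMo Retriever embedding and reranking microservices.
import numpy as np

passages = [
    "NIM microservices package models behind standard inference APIs.",
    "Reranking re-scores retrieved passages against the query.",
    "Embeddings map text into vectors for similarity search.",
]

def embed(text: str, dim: int = 64) -> np.ndarray:      # stage-1 stand-in
    v = np.zeros(dim)
    for tok in text.lower().split():
        v[hash(tok) % dim] += 1.0
    return v / (np.linalg.norm(v) + 1e-9)

def rerank(query: str, candidates: list[str]) -> list[str]:  # stage-2 stand-in
    def overlap(c: str) -> int:
        return len(set(query.lower().split()) & set(c.lower().split()))
    return sorted(candidates, key=overlap, reverse=True)

query = "How does reranking improve retrieval accuracy?"
index = np.stack([embed(p) for p in passages])
top = [passages[i] for i in np.argsort(-(index @ embed(query)))[:2]]  # embed + retrieve
print(rerank(query, top)[0])                                          # rerank the candidates
```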

The MongoDB AI Applications Program (MAAP) is Now Available - MongoDB has launched the MongoDB AI Applications Program (MAAP), aimed at assisting organizations in adopting and integrating AI into their operations. MAAP provides a comprehensive set of resources, including reference architectures, technology stacks, professional services, and support to help customers deploy AI applications efficiently. Recognizing challenges such as handling multi-modal data structures and the need for specialized skills, MAAP addresses these through its ecosystem. MAAP's open architecture, which leverages MongoDB's data platform, enables customization and flexibility for creating industry-specific AI solutions. The program offers strategic guidance, scalable gen AI solutions, and tools for upskilling teams, positioning itself as an enabler for AI innovation in various sectors.
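
A minimal sketch of the retrieval layer a MAAP-style application typically builds on MongoDB follows, using an Atlas Vector Search aggregation. The connection string, database, collection, and index names are placeholders, and the pipeline assumes embeddings are already stored in the collection.

```python
# Hedged sketch: Atlas Vector Search over pre-computed embeddings with
# pymongo. Database, collection, and index names are placeholders; set
# MONGODB_URI to a cluster that has a vector index defined.
import os
from pymongo import MongoClient

client = MongoClient(os.environ["MONGODB_URI"])
collection = client["knowledge_base"]["documents"]   # hypothetical names

query_vector = [0.12, -0.05, 0.33]  # embedding of the user query (toy values)

pipeline = [
    {
        "$vectorSearch": {
            "index": "embedding_index",   # hypothetical index name
            "path": "embedding",          # field holding stored vectors
            "queryVector": query_vector,
            "numCandidates": 100,
            "limit": 5,
        }
    },
    {"$project": {"text": 1, "score": {"$meta": "vectorSearchScore"}}},
]

for doc in collection.aggregate(pipeline):
    print(doc["score"], doc["text"])
```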

Kling AI - Kling AI, a text-to-video generator, is now available for global use.

SearchGPT Prototype - SearchGPT, a temporary prototype by OpenAI, aims to enhance AI search by combining AI models with web information for fast, accurate answers and clear source attribution. It allows for conversational follow-ups and is designed to benefit publishers by prominently citing their content. With endorsements from industry leaders like Nicholas Thompson of The Atlantic and Robert Thomson of News Corp, the initiative emphasizes protecting journalism and offering a symbiotic relationship between technology and content. OpenAI plans to integrate the best features of this prototype into ChatGPT, inviting users and publishers to join a waitlist for feedback and improvement.

Improving Model Safety Behavior with Rule-Based Rewards - OpenAI has introduced Rule-Based Rewards (RBRs) to enhance AI model safety by using predefined rules instead of extensive human feedback. This method addresses the inefficiencies of traditional reinforcement learning from human feedback (RLHF) and helps ensure models adhere to safety guidelines. RBRs allow models to respond appropriately to sensitive topics by applying specific rules for different types of responses, such as hard refusals or empathetic soft refusals. This approach reduces the need for continuous human data collection and can be easily updated as safety policies evolve, balancing safety and utility effectively.
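
A toy illustration of the rule-based reward idea: candidate responses are checked against explicit behavior rules (here, a desired empathetic soft refusal) and the weighted rule scores feed the reward used during training. The rules and weights below are invented for illustration and are not OpenAI's.

```python
# Toy rule-based reward: score a response against explicit behavior rules
# and sum the weights of the rules it satisfies. Rules and weights are
# invented for illustration.
RULES = {
    "refuses":       (lambda r: "can't help" in r.lower() or "cannot help" in r.lower(), 1.0),
    "is_empathetic": (lambda r: any(w in r.lower() for w in ("sorry", "understand")), 0.5),
    "no_judgment":   (lambda r: "you should be ashamed" not in r.lower(), 0.5),
}

def rule_based_reward(response: str) -> float:
    return sum(weight for check, weight in RULES.values() if check(response))

candidates = [
    "I'm sorry you're going through this, but I can't help with that request.",
    "No. You should be ashamed for asking.",
]
for c in candidates:
    print(round(rule_based_reward(c), 2), "-", c)
```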

GRID Beta Waitlist Open for General Robot Intelligence Development - Scaled Foundations has opened the waitlist for its GRID Beta, a cloud-based IDE for robotics development. GRID supports diverse robot form factors and sensors, enabling comprehensive sensorimotor development and rapid deployment. It uses foundation models for multimodal perception and zero-shot generalization. Developers can train and test models in simulation and deploy them on real robots quickly. The platform aims to simplify and accelerate robotics development with seamless access to advanced AI tools and simulations.

MINT-1T: Scaling Open-Source Multimodal Data by 10x: A Multimodal Dataset with One Trillion Tokens - 🍃MINT-1T is a pioneering open-source trillion-token dataset created to advance the training of large-scale multimodal models that process both text and images. It marks a notable improvement over its predecessors in both size and diversity, incorporating data sources such as HTML documents, PDFs, and scientific papers from arXiv. 🍃MINT-1T is designed to support models capable of tasks such as captioning and visual question answering, and preliminary experiments show that models trained on 🍃MINT-1T surpass those trained on previous datasets.
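
A minimal sketch for peeking at the corpus with the Hugging Face datasets library follows; the repo id and split are assumptions about how the HTML subset is published, and streaming avoids downloading the full trillion-token dataset.

```python
# Hedged sketch: stream a few MINT-1T examples with Hugging Face `datasets`.
# The repo id and field layout are assumptions about the published HTML subset.
from datasets import load_dataset

ds = load_dataset("mlfoundations/MINT-1T-HTML", split="train", streaming=True)  # assumed repo id
for example in ds.take(2):
    print(list(example.keys()))  # inspect the interleaved text/image fields
```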

End Frame Feature in LumaDreamMachine - LumaLabsAI has launched the End Frame feature in its LumaDreamMachine. This feature allows users to provide a desired final frame of a video, and the AI will generate the preceding frames to create a seamless video leading up to that moment.

