Nobody Likes a Know-It-All: Smaller LLMs are Gaining Momentum | By The Digital Insider

Phi-3 and OpenELM, two major small model releases this week.

Created Using Ideogram

Next Week in The Sequence:

  • Edge 391: Our series about autonomous agents continues with the fascinating topic of function calling. We explore UCBerkeley’s research on LLMCompiler for function calling and we review the PhiData framework for building agents.

  • Edge 392: We dive into RAFT, UC Berkeley’s technique for improving RAG scenarios.

You can subscribed to The Sequence below:

TheSequence is a reader-supported publication. To receive new posts and support my work, consider becoming a free or paid subscriber.

📝 Editorial: Nobody Likes a Know-It-All: Smaller LLMs are Gaining Momentum

Last year, Microsoft coined the term 'small language model' (SLM) following the publication of the influential paper 'Textbooks Are All You Need', which introduced the initial Phi model. Since then, there has been a tremendous market uptake in this area, and SLMs are starting to make inroads as one of the next big things in generative AI.

The case for SLMs is pretty clear. Massively large foundation models are likely to dominate generalist use cases, but they remain incredibly expensive to run, plagued with hallucinations, security vulnerabilities, and reliability issues when applied in domain-specific scenarios. Add to that environments such as mobile or IoT, which are computation-constrained by definition. SLMs are likely to fill that gap in the market with hyper-specialized models that are more secure and affordable to execute. This week we had two major developments in the SLM space:

  1. Microsoft released the Phi-3 family of models. Although not that small anymore at 3.8 billion parameters, Phi-3 continues to outperform much larger alternatives. The model also boasts an impressive 128k token window. Again, not that small, but small enough ;)

  2. Apple open-sourced OpenELM, a family of LLMs optimized for mobile scenarios. Obviously, OpenELM has raised speculations about Apple’s ambitions to incorporate native LLM capabilities in the iPhone.

Large foundation models have commanded the narrative in generative AI and will continue to do so while the scaling laws hold. But SLMs are certainly going to capture an important segment of the market. After all, nobody likes a know-it-all ;)"

🔎 ML Research

Phi-3

Microsoft Research published the technical report of Phi-3, their famous small language model that excel at match and computer science task. The new models are not that small anymore with phi-3-mini at 3.8B parameters and phi-3-small and phi-3-medium at 7B and 14B parameters respective —> Read more.

The Instruction Hierarchy

OpenAI published a paper introducing the instruction hierarchy which defines the model behavior upon confronting conflicting instructions. The method has profound implications in LLM security scenarios such as preventing prompt injections, jailbreaks and other attacks —> Read more.

MAIA

Researchers from MIT published a paper introducing Multimodal Automated Interpretability Agent (MAIA), an AI agent that can design experiments to answer queries of other AI models. The method is an interesting approach to interpretability to prove generative AI models to undestand their behavior —> Read more.

LayerSkip

Meta AI Research published a paper introducing LayerSkip, a method for accelerated inference in LLMs. The method introduces modification in both the pretraining and inference process of LLMs as well as a novel decoding solution —> Read more.

Gecko

Google DeepMind published a paper introducing Gecko, a new benchmark for text to image models. Gecko is structured as a skill-based benchmark that can discriminate models across different human templates —> Read more.

🤖 Cool AI Tech Releases

OpenELM

Apple open sourced OpenELM, a family of small LLMs optimized to run on devices —> Read more.

Artic

Snowflake open sourced Artic, an MoE model specialized in enterprise workloads such as SQL, coding and RAG —> Read more.

Meditron

Researchers from EPFL’s School of Computer and Communication Sciences and Yale School of Medicine released Meditron, an open source family of models tailored to the medical field —> Read more.

Cohere Toolkit

Cohere released a new toolking to accelerate generative AI app development —> Read more.

Penzai

Google DeepMind open sourced Penzai, a research tookit for editing and visualizing neural networks and inject custom logic —> Read more.

🛠 Real World ML

Fixing Code Builds

Google discusses how they trained a model to predict and fix build fixes —> Read more.

Data Science Teams at Lyft

Lyft shared some of the best practices and processes followed for building its data science teams —> Read more.

📡AI Radar

TheSequence is a reader-supported publication. To receive new posts and support my work, consider becoming a free or paid subscriber.


#Adobe, #Agent, #Agents, #Ai, #AIAdoption, #AiAgent, #AIModels, #AiPlatform, #AIResearch, #App, #AppDevelopment, #Apple, #Approach, #AutonomousAgents, #Behavior, #Benchmark, #Billion, #Biotech, #Building, #Capture, #Code, #Coding, #Cognition, #Communication, #Computation, #Computer, #ComputerScience, #Data, #DataScience, #DeepMind, #Design, #Development, #Developments, #Devices, #Edge, #Editing, #Editorial, #ElonMusk, #Enterprise, #Excel, #Foundation, #Framework, #Funding, #Gap, #Generative, #GenerativeAi, #Hallucinations, #How, #Human, #ImageGeneration, #Impact, #Inference, #Interpretability, #Investments, #IoT, #IPhone, #Issues, #It, #Language, #LanguageModel, #Llm, #LLMs, #Logic, #Medical, #Medicine, #Meta, #Method, #Microsoft, #Mit, #Ml, #Mobile, #Model, #MoE, #Multimodal, #Musk, #Networks, #Neural, #NeuralNetworks, #Nvidia, #One, #OpenSource, #Openai, #Other, #PAID, #Paper, #PHI, #Phi3, #Platform, #Process, #RAFT, #RAG, #Read, #Reliability, #Report, #Research, #Review, #Sales, #Salesforce, #School, #Science, #Security, #SecurityVulnerabilities, #SLM, #SLMs, #SnorkelAI, #Space, #SQL, #Startup, #Stealth, #StealthMode, #Teams, #Tech, #Templates, #Text, #Uc, #Vulnerabilities, #Work, #XAI
Published on The Digital Insider at https://is.gd/gPoOuM.

Comments