{"id":178,"date":"2025-10-15T11:07:54","date_gmt":"2025-10-15T05:37:54","guid":{"rendered":"https:\/\/dgsthal.in\/blogs\/?p=178"},"modified":"2025-10-15T11:07:54","modified_gmt":"2025-10-15T05:37:54","slug":"small-language-models-slms-lightweight-ai-models-that-run-on-devices","status":"publish","type":"post","link":"https:\/\/dgsthal.in\/blogs\/small-language-models-slms-lightweight-ai-models-that-run-on-devices\/","title":{"rendered":"Small Language Models (SLMs) \u2013 Lightweight AI Models That Run on Devices"},"content":{"rendered":"\n<h2 class=\"wp-block-heading\"><strong>Introduction<\/strong><\/h2>\n\n\n\n<p>In 2025, the world of Artificial Intelligence is witnessing a major shift \u2014 from massive cloud-based models like GPT to <strong>Small Language Models (SLMs)<\/strong> that can run <strong>locally on your device<\/strong>. These models are designed to bring the power of AI closer to the user \u2014 literally into their hands.<\/p>\n\n\n\n<p>In this post, we\u2019ll explore what SLMs are, how they differ from large models, why they matter, and where they\u2019re being used today.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>What Are Small Language Models (SLMs)?<\/strong><\/h2>\n\n\n\n<p>A <strong>Small Language Model (SLM)<\/strong> is a compact AI model trained on smaller datasets with far fewer parameters, optimized for <strong>speed, efficiency, and on-device operation<\/strong>.<br>Unlike massive models that require cloud servers and GPUs, SLMs are lightweight enough to run on <strong>smartphones, laptops, and edge devices<\/strong>.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\"><strong>Examples<\/strong><\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Gemini Nano (by Google)<\/strong><\/li>\n\n\n\n<li><strong>Phi-3 (by Microsoft)<\/strong><\/li>\n\n\n\n<li><strong>Llama 3-8B (by Meta)<\/strong><\/li>\n\n\n\n<li><strong>Mistral 7B (by Mistral AI)<\/strong><\/li>\n<\/ul>\n\n\n\n<p>These models are smaller in size (typically a few billion parameters) but are fine-tuned for 
<strong>specific, practical tasks<\/strong> \u2014 like writing summaries, answering questions, or powering chatbots.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>Why Small Language Models Matter<\/strong><\/h2>\n\n\n\n<p>The rise of SLMs is not just a trend; it\u2019s a shift in how AI is accessed and deployed.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\"><strong>1. Privacy &amp; Security<\/strong><\/h3>\n\n\n\n<p>Since SLMs can run on-device, your data never leaves the device \u2014 ensuring <strong>maximum privacy<\/strong> and <strong>reduced data sharing<\/strong>.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\"><strong>2. Low Latency<\/strong><\/h3>\n\n\n\n<p>No need for an internet connection or server calls. On-device processing means <strong>instant responses<\/strong> without lag.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\"><strong>3. Cost Efficiency<\/strong><\/h3>\n\n\n\n<p>Running models locally reduces <strong>cloud infrastructure costs<\/strong>, making AI affordable for startups and enterprises alike.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\"><strong>4. 
Energy Efficiency<\/strong><\/h3>\n\n\n\n<p>Smaller models require less computing power, which means <strong>lower energy consumption<\/strong> \u2014 better for both the device and the planet.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>SLMs vs LLMs: What\u2019s the Difference?<\/strong><\/h2>\n\n\n\n<figure class=\"wp-block-table\"><table class=\"has-fixed-layout\"><thead><tr><th><strong>Feature<\/strong><\/th><th><strong>Small Language Models (SLMs)<\/strong><\/th><th><strong>Large Language Models (LLMs)<\/strong><\/th><\/tr><\/thead><tbody><tr><td><strong>Size<\/strong><\/td><td>A few billion parameters<\/td><td>Hundreds of billions of parameters<\/td><\/tr><tr><td><strong>Speed<\/strong><\/td><td>Very fast<\/td><td>Slower due to heavy computation<\/td><\/tr><tr><td><strong>Hardware<\/strong><\/td><td>Can run on-device<\/td><td>Needs high-end cloud GPUs<\/td><\/tr><tr><td><strong>Cost<\/strong><\/td><td>Low<\/td><td>High<\/td><\/tr><tr><td><strong>Use Case<\/strong><\/td><td>Summaries, assistants, quick responses<\/td><td>Deep reasoning, coding, creative writing<\/td><\/tr><\/tbody><\/table><\/figure>\n\n\n\n<p>In short \u2014 <strong>SLMs are for everyday use; LLMs are for complex reasoning<\/strong>.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>How SLMs Are Powering On-Device AI<\/strong><\/h2>\n\n\n\n<p>SLMs are already embedded in the tools and apps we use daily:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Smartphones<\/strong> \u2013 Text prediction, offline voice assistants (e.g., Gemini Nano on Pixel)<\/li>\n\n\n\n<li><strong>Wearables<\/strong> \u2013 Health recommendations and real-time coaching<\/li>\n\n\n\n<li><strong>Enterprise apps<\/strong> \u2013 Document summarization, quick insights<\/li>\n\n\n\n<li><strong>IoT Devices<\/strong> \u2013 Smart homes and autonomous machines<\/li>\n<\/ul>\n\n\n\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p>As hardware continues to evolve, SLMs will 
bridge the gap between <strong>local AI power<\/strong> and <strong>cloud intelligence<\/strong>.<\/p>\n<\/blockquote>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>The Role of RAG with SLMs<\/strong><\/h2>\n\n\n\n<p>When combined with <strong>Retrieval-Augmented Generation (RAG)<\/strong>, SLMs can access <strong>external knowledge bases<\/strong> to provide <strong>accurate and contextual answers<\/strong> \u2014 even without being massive.<br>This hybrid approach allows devices to <strong>retrieve<\/strong> data from local or private databases and <strong>generate<\/strong> intelligent responses in real time.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>Future of SLMs in 2025 and Beyond<\/strong><\/h2>\n\n\n\n<p>As more tech giants and open-source communities invest in smaller, optimized models, we\u2019ll see:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>SLMs embedded in browsers<\/strong><\/li>\n\n\n\n<li><strong>Offline AI assistants<\/strong><\/li>\n\n\n\n<li><strong>Privacy-first enterprise chatbots<\/strong><\/li>\n\n\n\n<li><strong>Edge AI applications in healthcare and education<\/strong><\/li>\n<\/ul>\n\n\n\n<p>The future isn\u2019t just <strong>bigger models<\/strong>; it\u2019s <strong>smarter, smaller, and closer to you.<\/strong><\/p>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>Conclusion<\/strong><\/h2>\n\n\n\n<p><strong>Small Language Models (SLMs)<\/strong> are redefining the way we experience AI \u2014 shifting from cloud dependency to local autonomy.<br>In 2025, they\u2019re not replacing LLMs but complementing them \u2014 offering a balance of <strong>speed, privacy, and efficiency.<\/strong><br>Whether you\u2019re an AI enthusiast, developer, or enterprise innovator \u2014 understanding SLMs will be key to building the next generation of intelligent, on-device experiences.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Introduction In 2025, the world of Artificial Intelligence is witnessing a major shift \u2014 
from massive cloud-based models like GPT to Small Language Models (SLMs) that can run locally on&hellip;<\/p>\n","protected":false},"author":1,"featured_media":179,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[1,59],"tags":[6,63,60,11,62,61],"class_list":["post-178","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-blog","category-generative-ai","tag-dgsthal","tag-generative-ai","tag-llm","tag-payal-ganguly","tag-slms","tag-small-language-models"],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v24.6 - https:\/\/yoast.com\/wordpress\/plugins\/seo\/ -->\n<title>Small Language Models (SLMs) \u2013 Lightweight AI Models That Run on Devices - DGsthal<\/title>\n<meta name=\"description\" content=\"Discover how Small Language Models (SLMs) are transforming AI in 2025. Learn how lightweight models like Gemini Nano and Phi-3 run on devices, enabling private, fast, and cost-efficient AI experiences.\" \/>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/dgsthal.in\/blogs\/small-language-models-slms-lightweight-ai-models-that-run-on-devices\/\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Small Language Models (SLMs) \u2013 Lightweight AI Models That Run on Devices - DGsthal\" \/>\n<meta property=\"og:description\" content=\"Discover how Small Language Models (SLMs) are transforming AI in 2025. 
Learn how lightweight models like Gemini Nano and Phi-3 run on devices, enabling private, fast, and cost-efficient AI experiences.\" \/>\n<meta property=\"og:url\" content=\"https:\/\/dgsthal.in\/blogs\/small-language-models-slms-lightweight-ai-models-that-run-on-devices\/\" \/>\n<meta property=\"og:site_name\" content=\"DGsthal\" \/>\n<meta property=\"article:published_time\" content=\"2025-10-15T05:37:54+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/dgsthal.in\/blogs\/wp-content\/uploads\/2025\/10\/SLMs-on-Devices.jpg\" \/>\n\t<meta property=\"og:image:width\" content=\"1536\" \/>\n\t<meta property=\"og:image:height\" content=\"1024\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/jpeg\" \/>\n<meta name=\"author\" content=\"Payal Ganguly\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"Payal Ganguly\" \/>\n\t<meta name=\"twitter:label2\" content=\"Est. 
reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"3 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\/\/schema.org\",\"@graph\":[{\"@type\":\"Article\",\"@id\":\"https:\/\/dgsthal.in\/blogs\/small-language-models-slms-lightweight-ai-models-that-run-on-devices\/#article\",\"isPartOf\":{\"@id\":\"https:\/\/dgsthal.in\/blogs\/small-language-models-slms-lightweight-ai-models-that-run-on-devices\/\"},\"author\":{\"name\":\"Payal Ganguly\",\"@id\":\"https:\/\/dgsthal.in\/blogs\/#\/schema\/person\/2a5781070f1f8fc37d1d41e043e0c36d\"},\"headline\":\"Small Language Models (SLMs) \u2013 Lightweight AI Models That Run on Devices\",\"datePublished\":\"2025-10-15T05:37:54+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\/\/dgsthal.in\/blogs\/small-language-models-slms-lightweight-ai-models-that-run-on-devices\/\"},\"wordCount\":566,\"commentCount\":0,\"publisher\":{\"@id\":\"https:\/\/dgsthal.in\/blogs\/#organization\"},\"image\":{\"@id\":\"https:\/\/dgsthal.in\/blogs\/small-language-models-slms-lightweight-ai-models-that-run-on-devices\/#primaryimage\"},\"thumbnailUrl\":\"https:\/\/dgsthal.in\/blogs\/wp-content\/uploads\/2025\/10\/SLMs-on-Devices.jpg\",\"keywords\":[\"Dgsthal\",\"Generative AI\",\"LLM\",\"Payal Ganguly\",\"SLMs\",\"Small Language Models\"],\"articleSection\":[\"Blog\",\"Generative AI\"],\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"CommentAction\",\"name\":\"Comment\",\"target\":[\"https:\/\/dgsthal.in\/blogs\/small-language-models-slms-lightweight-ai-models-that-run-on-devices\/#respond\"]}]},{\"@type\":\"WebPage\",\"@id\":\"https:\/\/dgsthal.in\/blogs\/small-language-models-slms-lightweight-ai-models-that-run-on-devices\/\",\"url\":\"https:\/\/dgsthal.in\/blogs\/small-language-models-slms-lightweight-ai-models-that-run-on-devices\/\",\"name\":\"Small Language Models (SLMs) \u2013 Lightweight AI Models That Run on Devices - 
DGsthal\",\"isPartOf\":{\"@id\":\"https:\/\/dgsthal.in\/blogs\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\/\/dgsthal.in\/blogs\/small-language-models-slms-lightweight-ai-models-that-run-on-devices\/#primaryimage\"},\"image\":{\"@id\":\"https:\/\/dgsthal.in\/blogs\/small-language-models-slms-lightweight-ai-models-that-run-on-devices\/#primaryimage\"},\"thumbnailUrl\":\"https:\/\/dgsthal.in\/blogs\/wp-content\/uploads\/2025\/10\/SLMs-on-Devices.jpg\",\"datePublished\":\"2025-10-15T05:37:54+00:00\",\"description\":\"Discover how Small Language Models (SLMs) are transforming AI in 2025. Learn how lightweight models like Gemini Nano and Phi-3 run on devices, enabling private, fast, and cost-efficient AI experiences.\",\"breadcrumb\":{\"@id\":\"https:\/\/dgsthal.in\/blogs\/small-language-models-slms-lightweight-ai-models-that-run-on-devices\/#breadcrumb\"},\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\/\/dgsthal.in\/blogs\/small-language-models-slms-lightweight-ai-models-that-run-on-devices\/\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/dgsthal.in\/blogs\/small-language-models-slms-lightweight-ai-models-that-run-on-devices\/#primaryimage\",\"url\":\"https:\/\/dgsthal.in\/blogs\/wp-content\/uploads\/2025\/10\/SLMs-on-Devices.jpg\",\"contentUrl\":\"https:\/\/dgsthal.in\/blogs\/wp-content\/uploads\/2025\/10\/SLMs-on-Devices.jpg\",\"width\":1536,\"height\":1024,\"caption\":\"SLMs on Devices\"},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\/\/dgsthal.in\/blogs\/small-language-models-slms-lightweight-ai-models-that-run-on-devices\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\/\/dgsthal.in\/blogs\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"Small Language Models (SLMs) \u2013 Lightweight AI Models That Run on 
Devices\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\/\/dgsthal.in\/blogs\/#website\",\"url\":\"https:\/\/dgsthal.in\/blogs\/\",\"name\":\"DGsthal\",\"description\":\"Blog\",\"publisher\":{\"@id\":\"https:\/\/dgsthal.in\/blogs\/#organization\"},\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\/\/dgsthal.in\/blogs\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"en-US\"},{\"@type\":\"Organization\",\"@id\":\"https:\/\/dgsthal.in\/blogs\/#organization\",\"name\":\"DGsthal\",\"url\":\"https:\/\/dgsthal.in\/blogs\/\",\"logo\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/dgsthal.in\/blogs\/#\/schema\/logo\/image\/\",\"url\":\"https:\/\/dgsthal.in\/blogs\/wp-content\/uploads\/2025\/03\/cropped-DGsthal-Logo.png\",\"contentUrl\":\"https:\/\/dgsthal.in\/blogs\/wp-content\/uploads\/2025\/03\/cropped-DGsthal-Logo.png\",\"width\":1920,\"height\":629,\"caption\":\"DGsthal\"},\"image\":{\"@id\":\"https:\/\/dgsthal.in\/blogs\/#\/schema\/logo\/image\/\"},\"sameAs\":[\"https:\/\/www.linkedin.com\/company\/dgsthal\/\",\"https:\/\/www.instagram.com\/dgsthal_it_solutions\/\"]},{\"@type\":\"Person\",\"@id\":\"https:\/\/dgsthal.in\/blogs\/#\/schema\/person\/2a5781070f1f8fc37d1d41e043e0c36d\",\"name\":\"Payal Ganguly\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/dgsthal.in\/blogs\/#\/schema\/person\/image\/\",\"url\":\"https:\/\/secure.gravatar.com\/avatar\/d381aab0a2817410af89dbfd39bac693?s=96&d=mm&r=g\",\"contentUrl\":\"https:\/\/secure.gravatar.com\/avatar\/d381aab0a2817410af89dbfd39bac693?s=96&d=mm&r=g\",\"caption\":\"Payal Ganguly\"},\"sameAs\":[\"https:\/\/dgsthal.in\/blogs\",\"https:\/\/www.linkedin.com\/in\/payal-ganguly-447436285\/\"],\"url\":\"https:\/\/dgsthal.in\/blogs\/author\/payalganguly\/\"}]}<\/script>\n<!-- \/ Yoast SEO plugin. 
-->","yoast_head_json":{"title":"Small Language Models (SLMs) \u2013 Lightweight AI Models That Run on Devices - DGsthal","description":"Discover how Small Language Models (SLMs) are transforming AI in 2025. Learn how lightweight models like Gemini Nano and Phi-3 run on devices, enabling private, fast, and cost-efficient AI experiences.","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/dgsthal.in\/blogs\/small-language-models-slms-lightweight-ai-models-that-run-on-devices\/","og_locale":"en_US","og_type":"article","og_title":"Small Language Models (SLMs) \u2013 Lightweight AI Models That Run on Devices - DGsthal","og_description":"Discover how Small Language Models (SLMs) are transforming AI in 2025. Learn how lightweight models like Gemini Nano and Phi-3 run on devices, enabling private, fast, and cost-efficient AI experiences.","og_url":"https:\/\/dgsthal.in\/blogs\/small-language-models-slms-lightweight-ai-models-that-run-on-devices\/","og_site_name":"DGsthal","article_published_time":"2025-10-15T05:37:54+00:00","og_image":[{"width":1536,"height":1024,"url":"https:\/\/dgsthal.in\/blogs\/wp-content\/uploads\/2025\/10\/SLMs-on-Devices.jpg","type":"image\/jpeg"}],"author":"Payal Ganguly","twitter_card":"summary_large_image","twitter_misc":{"Written by":"Payal Ganguly","Est. 
reading time":"3 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"Article","@id":"https:\/\/dgsthal.in\/blogs\/small-language-models-slms-lightweight-ai-models-that-run-on-devices\/#article","isPartOf":{"@id":"https:\/\/dgsthal.in\/blogs\/small-language-models-slms-lightweight-ai-models-that-run-on-devices\/"},"author":{"name":"Payal Ganguly","@id":"https:\/\/dgsthal.in\/blogs\/#\/schema\/person\/2a5781070f1f8fc37d1d41e043e0c36d"},"headline":"Small Language Models (SLMs) \u2013 Lightweight AI Models That Run on Devices","datePublished":"2025-10-15T05:37:54+00:00","mainEntityOfPage":{"@id":"https:\/\/dgsthal.in\/blogs\/small-language-models-slms-lightweight-ai-models-that-run-on-devices\/"},"wordCount":566,"commentCount":0,"publisher":{"@id":"https:\/\/dgsthal.in\/blogs\/#organization"},"image":{"@id":"https:\/\/dgsthal.in\/blogs\/small-language-models-slms-lightweight-ai-models-that-run-on-devices\/#primaryimage"},"thumbnailUrl":"https:\/\/dgsthal.in\/blogs\/wp-content\/uploads\/2025\/10\/SLMs-on-Devices.jpg","keywords":["Dgsthal","Generative AI","LLM","Payal Ganguly","SLMs","Small Language Models"],"articleSection":["Blog","Generative AI"],"inLanguage":"en-US","potentialAction":[{"@type":"CommentAction","name":"Comment","target":["https:\/\/dgsthal.in\/blogs\/small-language-models-slms-lightweight-ai-models-that-run-on-devices\/#respond"]}]},{"@type":"WebPage","@id":"https:\/\/dgsthal.in\/blogs\/small-language-models-slms-lightweight-ai-models-that-run-on-devices\/","url":"https:\/\/dgsthal.in\/blogs\/small-language-models-slms-lightweight-ai-models-that-run-on-devices\/","name":"Small Language Models (SLMs) \u2013 Lightweight AI Models That Run on Devices - 
DGsthal","isPartOf":{"@id":"https:\/\/dgsthal.in\/blogs\/#website"},"primaryImageOfPage":{"@id":"https:\/\/dgsthal.in\/blogs\/small-language-models-slms-lightweight-ai-models-that-run-on-devices\/#primaryimage"},"image":{"@id":"https:\/\/dgsthal.in\/blogs\/small-language-models-slms-lightweight-ai-models-that-run-on-devices\/#primaryimage"},"thumbnailUrl":"https:\/\/dgsthal.in\/blogs\/wp-content\/uploads\/2025\/10\/SLMs-on-Devices.jpg","datePublished":"2025-10-15T05:37:54+00:00","description":"Discover how Small Language Models (SLMs) are transforming AI in 2025. Learn how lightweight models like Gemini Nano and Phi-3 run on devices, enabling private, fast, and cost-efficient AI experiences.","breadcrumb":{"@id":"https:\/\/dgsthal.in\/blogs\/small-language-models-slms-lightweight-ai-models-that-run-on-devices\/#breadcrumb"},"inLanguage":"en-US","potentialAction":[{"@type":"ReadAction","target":["https:\/\/dgsthal.in\/blogs\/small-language-models-slms-lightweight-ai-models-that-run-on-devices\/"]}]},{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/dgsthal.in\/blogs\/small-language-models-slms-lightweight-ai-models-that-run-on-devices\/#primaryimage","url":"https:\/\/dgsthal.in\/blogs\/wp-content\/uploads\/2025\/10\/SLMs-on-Devices.jpg","contentUrl":"https:\/\/dgsthal.in\/blogs\/wp-content\/uploads\/2025\/10\/SLMs-on-Devices.jpg","width":1536,"height":1024,"caption":"SLMs on Devices"},{"@type":"BreadcrumbList","@id":"https:\/\/dgsthal.in\/blogs\/small-language-models-slms-lightweight-ai-models-that-run-on-devices\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/dgsthal.in\/blogs\/"},{"@type":"ListItem","position":2,"name":"Small Language Models (SLMs) \u2013 Lightweight AI Models That Run on 
Devices"}]},{"@type":"WebSite","@id":"https:\/\/dgsthal.in\/blogs\/#website","url":"https:\/\/dgsthal.in\/blogs\/","name":"DGsthal","description":"Blog","publisher":{"@id":"https:\/\/dgsthal.in\/blogs\/#organization"},"potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/dgsthal.in\/blogs\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"en-US"},{"@type":"Organization","@id":"https:\/\/dgsthal.in\/blogs\/#organization","name":"DGsthal","url":"https:\/\/dgsthal.in\/blogs\/","logo":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/dgsthal.in\/blogs\/#\/schema\/logo\/image\/","url":"https:\/\/dgsthal.in\/blogs\/wp-content\/uploads\/2025\/03\/cropped-DGsthal-Logo.png","contentUrl":"https:\/\/dgsthal.in\/blogs\/wp-content\/uploads\/2025\/03\/cropped-DGsthal-Logo.png","width":1920,"height":629,"caption":"DGsthal"},"image":{"@id":"https:\/\/dgsthal.in\/blogs\/#\/schema\/logo\/image\/"},"sameAs":["https:\/\/www.linkedin.com\/company\/dgsthal\/","https:\/\/www.instagram.com\/dgsthal_it_solutions\/"]},{"@type":"Person","@id":"https:\/\/dgsthal.in\/blogs\/#\/schema\/person\/2a5781070f1f8fc37d1d41e043e0c36d","name":"Payal Ganguly","image":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/dgsthal.in\/blogs\/#\/schema\/person\/image\/","url":"https:\/\/secure.gravatar.com\/avatar\/d381aab0a2817410af89dbfd39bac693?s=96&d=mm&r=g","contentUrl":"https:\/\/secure.gravatar.com\/avatar\/d381aab0a2817410af89dbfd39bac693?s=96&d=mm&r=g","caption":"Payal 
Ganguly"},"sameAs":["https:\/\/dgsthal.in\/blogs","https:\/\/www.linkedin.com\/in\/payal-ganguly-447436285\/"],"url":"https:\/\/dgsthal.in\/blogs\/author\/payalganguly\/"}]}},"_links":{"self":[{"href":"https:\/\/dgsthal.in\/blogs\/wp-json\/wp\/v2\/posts\/178","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/dgsthal.in\/blogs\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/dgsthal.in\/blogs\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/dgsthal.in\/blogs\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/dgsthal.in\/blogs\/wp-json\/wp\/v2\/comments?post=178"}],"version-history":[{"count":1,"href":"https:\/\/dgsthal.in\/blogs\/wp-json\/wp\/v2\/posts\/178\/revisions"}],"predecessor-version":[{"id":180,"href":"https:\/\/dgsthal.in\/blogs\/wp-json\/wp\/v2\/posts\/178\/revisions\/180"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/dgsthal.in\/blogs\/wp-json\/wp\/v2\/media\/179"}],"wp:attachment":[{"href":"https:\/\/dgsthal.in\/blogs\/wp-json\/wp\/v2\/media?parent=178"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/dgsthal.in\/blogs\/wp-json\/wp\/v2\/categories?post=178"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/dgsthal.in\/blogs\/wp-json\/wp\/v2\/tags?post=178"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}