{"id":73375,"date":"2025-03-24T21:00:01","date_gmt":"2025-03-24T13:00:01","guid":{"rendered":"https:\/\/www.hongkiat.com\/blog\/?p=73375"},"modified":"2025-03-11T18:57:11","modified_gmt":"2025-03-11T10:57:11","slug":"local-llm-setup-optimization-lm-studio","status":"publish","type":"post","link":"https:\/\/www.hongkiat.com\/blog\/local-llm-setup-optimization-lm-studio\/","title":{"rendered":"Running Large Language Models (LLMs) Locally with LM Studio"},"content":{"rendered":"<p>Running large language models (LLMs) locally with tools like <a href=\"https:\/\/www.hongkiat.com\/blog\/run-llm-locally-lm-studio\/\">LM Studio<\/a> or <a href=\"https:\/\/www.hongkiat.com\/blog\/ollama-ai-setup-guide\/\">Ollama<\/a> has many advantages, including privacy, lower costs, and offline availability. However, these models can be resource-intensive and require proper optimization to run efficiently.<\/p>\n<p>In this article, we will walk you through optimizing your setup using LM Studio, whose user-friendly interface and straightforward installation make the process easier. We\u2019ll cover model selection and some performance tweaks to help you get the most out of your LLM setup.<\/p>\n<figure><img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/assets.hongkiat.com\/uploads\/local-llm-setup-optimization-lm-studio\/cover.jpg\" alt=\"Optimizing Large Language Models Locally with LM Studio\" width=\"1000\" height=\"600\"><\/figure>\n<p>This guide assumes you already have LM Studio installed; if not, check out our article: <a href=\"https:\/\/www.hongkiat.com\/blog\/run-llm-locally-lm-studio\/\">How to Run LLM Locally on Your Computer with LM Studio<\/a>.<\/p>\n<p>Once you have it installed and running on your computer, we can get started:<\/p>\n<hr>\n<h2>Selecting the Right Model<\/h2>\n<p>Selecting the right LLM is essential for getting efficient and accurate results. 
Just like choosing the right tool for a job, different LLMs are better suited for different tasks.<\/p>\n<p>There are a few things that we can look for when selecting models:<\/p>\n<h3>1. The Model Parameters<\/h3>\n<p>Think of parameters as the \u201cknobs\u201d and \u201cdials\u201d inside the LLM that are adjusted during training. They determine how the model understands and generates text.<\/p>\n<p>The number of parameters is often used to describe the \u201csize\u201d of a model. You\u2019ll commonly see models referred to as 2B (2 billion parameters), 7B (7 billion parameters), 14B, and so on.<\/p>\n<figure><img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/assets.hongkiat.com\/uploads\/local-llm-setup-optimization-lm-studio\/ollama-model-parameter-selection.jpg\" alt=\"Ollama model parameter selection interface\" width=\"1000\" height=\"513\"><figcaption>Model parameter selection in Ollama<\/figcaption><\/figure>\n<p>A model with more parameters generally has a greater capacity to learn complex patterns and relationships in language, but it typically also requires more RAM and processing power to run efficiently.<\/p>\n<p>Here are some practical approaches you can take when selecting a model based on your system\u2019s resources:<\/p>\n<table>\n<thead>\n<tr>\n<th>Resource Level<\/th>\n<th>RAM<\/th>\n<th>Recommended Models<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td><strong>Limited Resources<\/strong><\/td>\n<td>Less than 8GB<\/td>\n<td>Smaller models (e.g., 4B or less)<\/td>\n<\/tr>\n<tr>\n<td><strong>Moderate Resources<\/strong><\/td>\n<td>8GB \u2013 16GB<\/td>\n<td>Mid-range models (e.g., 7B to 13B parameters)<\/td>\n<\/tr>\n<tr>\n<td><strong>Ample Resources<\/strong><\/td>\n<td>16GB+ with dedicated GPU<\/td>\n<td>Larger models (e.g., 30B parameters and above)<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<p>Fortunately, as we can see below, <strong>LM Studio<\/strong> will automatically highlight the most optimal model based on your system\u2019s resources, 
allowing you to simply select it.<\/p>\n<figure><img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/assets.hongkiat.com\/uploads\/local-llm-setup-optimization-lm-studio\/lmstudio-model-dropdown-suggestion.jpg\" alt=\"LM Studio model selection interface with system recommendations\" width=\"1000\" height=\"640\"><\/figure>\n<h3>2. The Model Characteristics<\/h3>\n<p>While the number of parameters plays a role, it\u2019s not the sole determinant of performance or resource requirements. Different models are designed with different architectures and training data, which significantly impact their capabilities.<\/p>\n<p>If you need a model for general-purpose tasks, the following models might be good choices:<\/p>\n<ul>\n<li><a rel=\"nofollow noopener\" target=\"_blank\" href=\"https:\/\/model.lmstudio.ai\/download\/lmstudio-community\/DeepSeek-R1-Distill-Llama-8B-GGUF\">Llama 3.2<\/a><\/li>\n<li><a rel=\"nofollow noopener\" target=\"_blank\" href=\"https:\/\/model.lmstudio.ai\/download\/lmstudio-community\/Mistral-7B-Instruct-v0.3-GGUF\">Mistral<\/a><\/li>\n<li><a rel=\"nofollow noopener\" target=\"_blank\" href=\"https:\/\/ollama.com\/library\/deepseek-r1\">Deepseek R1<\/a><\/li>\n<li><a rel=\"nofollow noopener\" target=\"_blank\" href=\"https:\/\/model.lmstudio.ai\/download\/lmstudio-community\/gemma-2-2b-it-GGUF\">Gemma<\/a><\/li>\n<\/ul>\n<p>If you\u2019re focused on coding, a specialized coding model would be a better fit, such as:<\/p>\n<ul>\n<li><a rel=\"nofollow noopener\" target=\"_blank\" href=\"https:\/\/model.lmstudio.ai\/download\/lmstudio-community\/Qwen2.5-Coder-7B-Instruct-GGUF\">Qwen 2.5 Coder<\/a><\/li>\n<li><a rel=\"nofollow noopener\" target=\"_blank\" href=\"https:\/\/model.lmstudio.ai\/download\/bartowski\/stable-code-instruct-3b-GGUF\">StableCode<\/a><\/li>\n<li><a rel=\"nofollow noopener\" target=\"_blank\" href=\"https:\/\/model.lmstudio.ai\/download\/lmstudio-community\/Yi-Coder-9B-Chat-GGUF\">Yi Coder<\/a><\/li>\n<\/ul>\n<p>If you 
need to process images, you should use an LLM with multimodal capabilities, such as:<\/p>\n<ul>\n<li><a rel=\"nofollow noopener\" target=\"_blank\" href=\"https:\/\/model.lmstudio.ai\/download\/second-state\/Llava-v1.5-7B-GGUF\">Llava<\/a><\/li>\n<li><a rel=\"nofollow noopener\" target=\"_blank\" href=\"https:\/\/ollama.com\/library\/llama3.2-vision\">Llama 3.2 Vision<\/a><\/li>\n<li><a rel=\"nofollow noopener\" target=\"_blank\" href=\"https:\/\/github.com\/deepseek-ai\/Janus\">Janus<\/a><\/li>\n<\/ul>\n<p>The best model for you depends on your specific use case and requirements. If you\u2019re unsure, you can always start with a general-purpose model and adjust as needed.<\/p>\n<h3>3. Quantization<\/h3>\n<p>Another way to optimize your LLM setup is by using <strong>quantized models<\/strong>.<\/p>\n<p>Imagine you have a huge collection of photos, and each photo takes up a lot of space on your hard drive. <strong>Quantization is like compressing those photos to save space<\/strong>. You might lose a tiny bit of image quality, but you gain a lot of additional free space.<\/p>\n<p>Quantization levels are often described by the number of bits used to represent each value. 
<strong>Lower bit values, like going from 8-bit to 4-bit, result in higher compression and thus lower memory usage.<\/strong><\/p>\n<p>In <strong>LM Studio<\/strong>, you can find some quantized models, such as <a rel=\"nofollow noopener\" target=\"_blank\" href=\"https:\/\/model.lmstudio.ai\/download\/lmstudio-community\/Llama-3.3-70B-Instruct-GGUF\">Llama 3.3<\/a> and <a rel=\"nofollow noopener\" target=\"_blank\" href=\"https:\/\/model.lmstudio.ai\/download\/NousResearch\/Hermes-3-Llama-3.2-3B-GGUF\">Hermes 3<\/a>.<\/p>\n<p>You\u2019ll find several download options for these models.<\/p>\n<figure><img decoding=\"async\" src=\"https:\/\/assets.hongkiat.com\/uploads\/local-llm-setup-optimization-lm-studio\/lm-studio-quantization.jpg\" alt=\"LM Studio model quantization options comparison\"><\/figure>\n<p>As shown above, the quantized model with 4-bit quantization (marked with <code>Q4_K_M<\/code>) is smaller than the 8-bit version (marked with <code>Q8_0<\/code>) by more than 1 GB.<\/p>\n<p>If you\u2019re experiencing memory issues, consider using quantized models to reduce memory usage.<\/p>\n<hr>\n<h2>Performance Tweaks<\/h2>\n<p>LM Studio offers a variety of settings that allow you to <strong>fine-tune your selected model\u2019s performance<\/strong>.<\/p>\n<p>These settings give you control over how the model uses your computer\u2019s resources and generates text, enabling you to optimize for speed, memory usage, or specific task requirements.<\/p>\n<p>You can find these settings in the <q><strong>My Models<\/strong><\/q> section within each downloaded model.<\/p>\n<figure><img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/assets.hongkiat.com\/uploads\/local-llm-setup-optimization-lm-studio\/lm-studio-my-models.jpg\" alt=\"LM Studio My Models section interface\" width=\"1000\" height=\"600\"><\/figure>\n<p>Let\u2019s explore some of the key options:<\/p>\n<h3>Context Length<\/h3>\n<figure><img loading=\"lazy\" decoding=\"async\" 
src=\"https:\/\/assets.hongkiat.com\/uploads\/local-llm-setup-optimization-lm-studio\/lm-studio-context-length.jpg\" alt=\"LM Studio context length settings\" width=\"1000\" height=\"330\"><\/figure>\n<p>This setting determines how much of the previous conversation the model \u201cremembers\u201d when generating a response. A longer context length allows the model to maintain coherence over longer exchanges but requires more memory.<\/p>\n<p>If you\u2019re working on shorter tasks or have limited RAM, reducing the context length can improve performance.<\/p>\n<h3>GPU Offload<\/h3>\n<figure><img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/assets.hongkiat.com\/uploads\/local-llm-setup-optimization-lm-studio\/lm-studio-gpu-offload.jpg\" alt=\"LM Studio GPU offload settings\" width=\"1000\" height=\"330\"><\/figure>\n<p>This setting enables you to leverage your GPU\u2019s power to accelerate inference. If you have a dedicated graphics card, enabling GPU offload can significantly boost performance.<\/p>\n<h3>CPU Thread Pool Size<\/h3>\n<figure><img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/assets.hongkiat.com\/uploads\/local-llm-setup-optimization-lm-studio\/lm-studio-cpu-thread.jpg\" alt=\"LM Studio CPU thread pool size settings\" width=\"1000\" height=\"330\"><\/figure>\n<p>This setting determines how many CPU cores are utilized for processing. Increasing the thread pool size can enhance performance, particularly on multi-core processors.<\/p>\n<p>You can experiment to find the optimal configuration for your system.<\/p>\n<h3>K Cache\/V Cache Quantization Type<\/h3>\n<figure><img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/assets.hongkiat.com\/uploads\/local-llm-setup-optimization-lm-studio\/lm-studio-kv-cache.jpg\" alt=\"LM Studio K Cache and V Cache quantization settings\" width=\"1000\" height=\"330\"><\/figure>\n<p>These settings determine how the model\u2019s <q>key<\/q> and <q>value<\/q> caches are quantized. 
Similar to model quantization, cache quantization reduces memory usage but may slightly impact accuracy.<\/p>\n<p>You can experiment with different quantization levels to find the optimal balance between performance and accuracy.<\/p>\n<h3>Limit Response Length<\/h3>\n<figure><img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/assets.hongkiat.com\/uploads\/local-llm-setup-optimization-lm-studio\/lm-studio-limit-response-length.jpg\" alt=\"LM Studio response length limit settings\" width=\"1000\" height=\"330\"><\/figure>\n<p>This setting controls the maximum number of tokens (roughly equivalent to words or sub-word units) the model can generate in a single response. It directly impacts performance, primarily in terms of processing time and resource usage.<\/p>\n<p>The main trade-off of limiting response length is that the model\u2019s responses may be truncated or incomplete if they exceed the specified limit. This could be problematic if you require detailed or comprehensive answers.<\/p>\n<hr>\n<h2>Wrapping Up<\/h2>\n<p>Running large language models locally gives you a powerful tool for a wide range of tasks, from text generation and question answering to coding assistance.<\/p>\n<p>However, when resources are limited, optimizing your LLM setup through careful model selection and performance tuning is essential. By choosing the appropriate model and fine-tuning its settings, you can ensure efficient and effective operation on your system.<\/p>","protected":false},"excerpt":{"rendered":"<p>Running large language models (LLMs) locally with tools like LM Studio or Ollama has many advantages, including privacy, lower costs, and offline availability. However, these models can be resource-intensive and require proper optimization to run efficiently. 
In this article, we will walk you through optimizing your setup, and in this case, we will be using&hellip;<\/p>\n","protected":false},"author":113,"featured_media":0,"comment_status":"closed","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_jetpack_memberships_contains_paid_content":false,"footnotes":""},"categories":[3398],"tags":[3545],"topic":[],"class_list":["entry-content","is-maxi"],"acf":[],"yoast_head":"<!-- This site is optimized with the Yoast SEO Premium plugin v22.8 (Yoast SEO v27.3) - https:\/\/yoast.com\/product\/yoast-seo-premium-wordpress\/ -->\n<title>Running Large Language Models (LLMs) Locally with LM Studio - Hongkiat<\/title>\n<meta name=\"description\" content=\"Running large language models (LLMs) locally with tools like LM Studio or Ollama has many advantages, including privacy, lower costs, and offline\" \/>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/www.hongkiat.com\/blog\/?p=73375\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Running Large Language Models (LLMs) Locally with LM Studio\" \/>\n<meta property=\"og:description\" content=\"Running large language models (LLMs) locally with tools like LM Studio or Ollama has many advantages, including privacy, lower costs, and offline\" \/>\n<meta property=\"og:url\" content=\"https:\/\/www.hongkiat.com\/blog\/?p=73375\" \/>\n<meta property=\"og:site_name\" content=\"Hongkiat\" \/>\n<meta property=\"article:publisher\" content=\"https:\/\/www.facebook.com\/hongkiatcom\" \/>\n<meta property=\"article:published_time\" content=\"2025-03-24T13:00:01+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/assets.hongkiat.com\/uploads\/local-llm-setup-optimization-lm-studio\/cover.jpg\" \/>\n<meta name=\"author\" content=\"Thoriq Firdaus\" \/>\n<meta 
name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:creator\" content=\"@tfirdaus\" \/>\n<meta name=\"twitter:site\" content=\"@hongkiat\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"Thoriq Firdaus\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\\\/\\\/schema.org\",\"@graph\":[{\"@type\":\"Article\",\"@id\":\"https:\\\/\\\/www.hongkiat.com\\\/blog\\\/?p=73375#article\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/www.hongkiat.com\\\/blog\\\/?p=73375\"},\"author\":{\"name\":\"Thoriq Firdaus\",\"@id\":\"https:\\\/\\\/www.hongkiat.com\\\/blog\\\/#\\\/schema\\\/person\\\/e7948c7a175d211496331e4b6ce55807\"},\"headline\":\"Running Large Language Models (LLMs) Locally with LM Studio\",\"datePublished\":\"2025-03-24T13:00:01+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\\\/\\\/www.hongkiat.com\\\/blog\\\/?p=73375\"},\"wordCount\":1013,\"publisher\":{\"@id\":\"https:\\\/\\\/www.hongkiat.com\\\/blog\\\/#organization\"},\"image\":{\"@id\":\"https:\\\/\\\/www.hongkiat.com\\\/blog\\\/?p=73375#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/assets.hongkiat.com\\\/uploads\\\/local-llm-setup-optimization-lm-studio\\\/cover.jpg\",\"keywords\":[\"Artificial Intelligence\"],\"articleSection\":[\"Internet\"],\"inLanguage\":\"en-US\"},{\"@type\":\"WebPage\",\"@id\":\"https:\\\/\\\/www.hongkiat.com\\\/blog\\\/?p=73375\",\"url\":\"https:\\\/\\\/www.hongkiat.com\\\/blog\\\/?p=73375\",\"name\":\"Running Large Language Models (LLMs) Locally with LM Studio - 
Hongkiat\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/www.hongkiat.com\\\/blog\\\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\\\/\\\/www.hongkiat.com\\\/blog\\\/?p=73375#primaryimage\"},\"image\":{\"@id\":\"https:\\\/\\\/www.hongkiat.com\\\/blog\\\/?p=73375#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/assets.hongkiat.com\\\/uploads\\\/local-llm-setup-optimization-lm-studio\\\/cover.jpg\",\"datePublished\":\"2025-03-24T13:00:01+00:00\",\"description\":\"Running large language models (LLMs) locally with tools like LM Studio or Ollama has many advantages, including privacy, lower costs, and offline\",\"breadcrumb\":{\"@id\":\"https:\\\/\\\/www.hongkiat.com\\\/blog\\\/?p=73375#breadcrumb\"},\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\\\/\\\/www.hongkiat.com\\\/blog\\\/?p=73375\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\\\/\\\/www.hongkiat.com\\\/blog\\\/?p=73375#primaryimage\",\"url\":\"https:\\\/\\\/assets.hongkiat.com\\\/uploads\\\/local-llm-setup-optimization-lm-studio\\\/cover.jpg\",\"contentUrl\":\"https:\\\/\\\/assets.hongkiat.com\\\/uploads\\\/local-llm-setup-optimization-lm-studio\\\/cover.jpg\"},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\\\/\\\/www.hongkiat.com\\\/blog\\\/?p=73375#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\\\/\\\/www.hongkiat.com\\\/blog\\\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"Running Large Language Models (LLMs) Locally with LM Studio\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\\\/\\\/www.hongkiat.com\\\/blog\\\/#website\",\"url\":\"https:\\\/\\\/www.hongkiat.com\\\/blog\\\/\",\"name\":\"Hongkiat\",\"description\":\"Tech and Design 
Tips\",\"publisher\":{\"@id\":\"https:\\\/\\\/www.hongkiat.com\\\/blog\\\/#organization\"},\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\\\/\\\/www.hongkiat.com\\\/blog\\\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"en-US\"},{\"@type\":\"Organization\",\"@id\":\"https:\\\/\\\/www.hongkiat.com\\\/blog\\\/#organization\",\"name\":\"Hongkiat.com\",\"url\":\"https:\\\/\\\/www.hongkiat.com\\\/blog\\\/\",\"logo\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\\\/\\\/www.hongkiat.com\\\/blog\\\/#\\\/schema\\\/logo\\\/image\\\/\",\"url\":\"https:\\\/\\\/www.hongkiat.com\\\/blog\\\/wp-content\\\/uploads\\\/hkdc-logo-rect-yoast.jpg\",\"contentUrl\":\"https:\\\/\\\/www.hongkiat.com\\\/blog\\\/wp-content\\\/uploads\\\/hkdc-logo-rect-yoast.jpg\",\"width\":1200,\"height\":799,\"caption\":\"Hongkiat.com\"},\"image\":{\"@id\":\"https:\\\/\\\/www.hongkiat.com\\\/blog\\\/#\\\/schema\\\/logo\\\/image\\\/\"},\"sameAs\":[\"https:\\\/\\\/www.facebook.com\\\/hongkiatcom\",\"https:\\\/\\\/x.com\\\/hongkiat\",\"https:\\\/\\\/www.pinterest.com\\\/hongkiat\\\/\"]},{\"@type\":\"Person\",\"@id\":\"https:\\\/\\\/www.hongkiat.com\\\/blog\\\/#\\\/schema\\\/person\\\/e7948c7a175d211496331e4b6ce55807\",\"name\":\"Thoriq Firdaus\",\"description\":\"Thoriq is a writer for Hongkiat.com with a passion for web design and development. He is the author of Responsive Web Design by Examples, where he covered his best approaches in developing responsive websites quickly with a framework.\",\"sameAs\":[\"https:\\\/\\\/thoriq.com\",\"https:\\\/\\\/x.com\\\/tfirdaus\"],\"jobTitle\":\"Web Developer\",\"url\":\"https:\\\/\\\/www.hongkiat.com\\\/blog\\\/author\\\/thoriq\\\/\"}]}<\/script>\n<!-- \/ Yoast SEO Premium plugin. 
-->","yoast_head_json":{"title":"Running Large Language Models (LLMs) Locally with LM Studio - Hongkiat","description":"Running large language models (LLMs) locally with tools like LM Studio or Ollama has many advantages, including privacy, lower costs, and offline","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/www.hongkiat.com\/blog\/?p=73375","og_locale":"en_US","og_type":"article","og_title":"Running Large Language Models (LLMs) Locally with LM Studio","og_description":"Running large language models (LLMs) locally with tools like LM Studio or Ollama has many advantages, including privacy, lower costs, and offline","og_url":"https:\/\/www.hongkiat.com\/blog\/?p=73375","og_site_name":"Hongkiat","article_publisher":"https:\/\/www.facebook.com\/hongkiatcom","article_published_time":"2025-03-24T13:00:01+00:00","og_image":[{"url":"https:\/\/assets.hongkiat.com\/uploads\/local-llm-setup-optimization-lm-studio\/cover.jpg","type":"","width":"","height":""}],"author":"Thoriq Firdaus","twitter_card":"summary_large_image","twitter_creator":"@tfirdaus","twitter_site":"@hongkiat","twitter_misc":{"Written by":"Thoriq Firdaus"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"Article","@id":"https:\/\/www.hongkiat.com\/blog\/?p=73375#article","isPartOf":{"@id":"https:\/\/www.hongkiat.com\/blog\/?p=73375"},"author":{"name":"Thoriq Firdaus","@id":"https:\/\/www.hongkiat.com\/blog\/#\/schema\/person\/e7948c7a175d211496331e4b6ce55807"},"headline":"Running Large Language Models (LLMs) Locally with LM 
Studio","datePublished":"2025-03-24T13:00:01+00:00","mainEntityOfPage":{"@id":"https:\/\/www.hongkiat.com\/blog\/?p=73375"},"wordCount":1013,"publisher":{"@id":"https:\/\/www.hongkiat.com\/blog\/#organization"},"image":{"@id":"https:\/\/www.hongkiat.com\/blog\/?p=73375#primaryimage"},"thumbnailUrl":"https:\/\/assets.hongkiat.com\/uploads\/local-llm-setup-optimization-lm-studio\/cover.jpg","keywords":["Artificial Intelligence"],"articleSection":["Internet"],"inLanguage":"en-US"},{"@type":"WebPage","@id":"https:\/\/www.hongkiat.com\/blog\/?p=73375","url":"https:\/\/www.hongkiat.com\/blog\/?p=73375","name":"Running Large Language Models (LLMs) Locally with LM Studio - Hongkiat","isPartOf":{"@id":"https:\/\/www.hongkiat.com\/blog\/#website"},"primaryImageOfPage":{"@id":"https:\/\/www.hongkiat.com\/blog\/?p=73375#primaryimage"},"image":{"@id":"https:\/\/www.hongkiat.com\/blog\/?p=73375#primaryimage"},"thumbnailUrl":"https:\/\/assets.hongkiat.com\/uploads\/local-llm-setup-optimization-lm-studio\/cover.jpg","datePublished":"2025-03-24T13:00:01+00:00","description":"Running large language models (LLMs) locally with tools like LM Studio or Ollama has many advantages, including privacy, lower costs, and offline","breadcrumb":{"@id":"https:\/\/www.hongkiat.com\/blog\/?p=73375#breadcrumb"},"inLanguage":"en-US","potentialAction":[{"@type":"ReadAction","target":["https:\/\/www.hongkiat.com\/blog\/?p=73375"]}]},{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/www.hongkiat.com\/blog\/?p=73375#primaryimage","url":"https:\/\/assets.hongkiat.com\/uploads\/local-llm-setup-optimization-lm-studio\/cover.jpg","contentUrl":"https:\/\/assets.hongkiat.com\/uploads\/local-llm-setup-optimization-lm-studio\/cover.jpg"},{"@type":"BreadcrumbList","@id":"https:\/\/www.hongkiat.com\/blog\/?p=73375#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/www.hongkiat.com\/blog\/"},{"@type":"ListItem","position":2,"name":"Running Large Language 
Models (LLMs) Locally with LM Studio"}]},{"@type":"WebSite","@id":"https:\/\/www.hongkiat.com\/blog\/#website","url":"https:\/\/www.hongkiat.com\/blog\/","name":"Hongkiat","description":"Tech and Design Tips","publisher":{"@id":"https:\/\/www.hongkiat.com\/blog\/#organization"},"potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/www.hongkiat.com\/blog\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"en-US"},{"@type":"Organization","@id":"https:\/\/www.hongkiat.com\/blog\/#organization","name":"Hongkiat.com","url":"https:\/\/www.hongkiat.com\/blog\/","logo":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/www.hongkiat.com\/blog\/#\/schema\/logo\/image\/","url":"https:\/\/www.hongkiat.com\/blog\/wp-content\/uploads\/hkdc-logo-rect-yoast.jpg","contentUrl":"https:\/\/www.hongkiat.com\/blog\/wp-content\/uploads\/hkdc-logo-rect-yoast.jpg","width":1200,"height":799,"caption":"Hongkiat.com"},"image":{"@id":"https:\/\/www.hongkiat.com\/blog\/#\/schema\/logo\/image\/"},"sameAs":["https:\/\/www.facebook.com\/hongkiatcom","https:\/\/x.com\/hongkiat","https:\/\/www.pinterest.com\/hongkiat\/"]},{"@type":"Person","@id":"https:\/\/www.hongkiat.com\/blog\/#\/schema\/person\/e7948c7a175d211496331e4b6ce55807","name":"Thoriq Firdaus","description":"Thoriq is a writer for Hongkiat.com with a passion for web design and development. 
He is the author of Responsive Web Design by Examples, where he covered his best approaches in developing responsive websites quickly with a framework.","sameAs":["https:\/\/thoriq.com","https:\/\/x.com\/tfirdaus"],"jobTitle":"Web Developer","url":"https:\/\/www.hongkiat.com\/blog\/author\/thoriq\/"}]}},"jetpack_featured_media_url":"https:\/\/","jetpack_shortlink":"https:\/\/wp.me\/p4uxU-j5t","jetpack_sharing_enabled":true,"_links":{"self":[{"href":"https:\/\/www.hongkiat.com\/blog\/wp-json\/wp\/v2\/posts\/73375","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.hongkiat.com\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.hongkiat.com\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.hongkiat.com\/blog\/wp-json\/wp\/v2\/users\/113"}],"replies":[{"embeddable":true,"href":"https:\/\/www.hongkiat.com\/blog\/wp-json\/wp\/v2\/comments?post=73375"}],"version-history":[{"count":1,"href":"https:\/\/www.hongkiat.com\/blog\/wp-json\/wp\/v2\/posts\/73375\/revisions"}],"predecessor-version":[{"id":73376,"href":"https:\/\/www.hongkiat.com\/blog\/wp-json\/wp\/v2\/posts\/73375\/revisions\/73376"}],"wp:attachment":[{"href":"https:\/\/www.hongkiat.com\/blog\/wp-json\/wp\/v2\/media?parent=73375"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.hongkiat.com\/blog\/wp-json\/wp\/v2\/categories?post=73375"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.hongkiat.com\/blog\/wp-json\/wp\/v2\/tags?post=73375"},{"taxonomy":"topic","embeddable":true,"href":"https:\/\/www.hongkiat.com\/blog\/wp-json\/wp\/v2\/topic?post=73375"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}