{"id":73047,"date":"2024-11-13T21:00:52","date_gmt":"2024-11-13T13:00:52","guid":{"rendered":"https:\/\/www.hongkiat.com\/blog\/?p=73047"},"modified":"2024-11-08T19:15:18","modified_gmt":"2024-11-08T11:15:18","slug":"run-llm-locally-lm-studio","status":"publish","type":"post","link":"https:\/\/www.hongkiat.com\/blog\/run-llm-locally-lm-studio\/","title":{"rendered":"How to Run LLM Locally on Your Computer with LM Studio"},"content":{"rendered":"<p>Running Large Language Models (LLMs) like Llama-3 or Phi-3 typically requires cloud resources and a complicated setup. <strong><a href=\"https:\/\/lmstudio.ai\" target=\"_blank\" rel=\"noopener noreferrer\">LM Studio<\/a><\/strong> changes this by providing a desktop app that lets you run these models directly on your local computer.<\/p>\n<p>It is compatible with Windows, macOS, and Linux, and its friendly GUI makes it easier to run LLMs, even for people who aren\u2019t familiar with technical setups. It\u2019s also a great option for privacy because all queries, chats, and data inputs are processed locally without any data being sent to the cloud.<\/p>\n<p>Let\u2019s see how it works.<\/p>\n<h2>System Requirements<\/h2>\n<p>To run LLM models smoothly on your device, make sure your setup meets these requirements:<\/p>\n<ul>\n<li><strong>PC (Windows\/Linux)<\/strong>: A processor supporting AVX2 (standard on newer PCs) and an NVIDIA or AMD GPU.<\/li>\n<li><strong>macOS<\/strong>: Requires Apple Silicon (M1\/M2\/M3). Intel-based Macs are not supported.<\/li>\n<li><strong>Memory<\/strong>: At least 16 GB RAM is ideal, though 8 GB may work if you use smaller models and context sizes.<\/li>\n<li><strong>Internet<\/strong>: A stable connection is recommended for downloading models.<\/li>\n<\/ul>\n<h2>Installation<\/h2>\n<p>To get started, <a href=\"https:\/\/lmstudio.ai\/download\" target=\"_blank\" rel=\"noopener noreferrer\">download LM Studio<\/a> for your platform.<\/p>\n<figure><img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/assets.hongkiat.com\/uploads\/run-llm-locally-lm-studio\/lm-studio-download-page.jpg\" alt=\"LM Studio download page with platform selection options\" width=\"750\" height=\"480\"><\/figure>\n<p>After downloading, follow the installation steps to launch the app. You\u2019ll see a familiar chat interface with a text box, similar to most AI chat applications, as shown below:<\/p>\n<figure><img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/assets.hongkiat.com\/uploads\/run-llm-locally-lm-studio\/lm-studio-chat-interface.jpg\" alt=\"LM Studio chat interface screen\" width=\"750\" height=\"480\"><\/figure>\n<p>Before you can start using it, you need to download and load a model.<\/p>\n<h2>What is a Model?<\/h2>\n<p>A model in this context is a pre-trained <em>algorithm<\/em> that can perform a variety of natural language processing tasks. The model is trained on a large dataset of text and learns to predict the next word in a sentence, enabling it to generate coherent and relevant text based on your input.<\/p>\n<p>There are many different models available, each with specific strengths. 
There are many different models available, each with its own strengths. Some are better at generating creative text, while others excel at factual information or concise responses.

For example, models like GPT-3, [Llama-3](https://ai.meta.com/blog/meta-llama-3/), and [Phi-3](https://azure.microsoft.com/en-us/blog/introducing-phi-3-redefining-whats-possible-with-slms/) generate creative and engaging text, while [Yi Coder](https://github.com/01-ai/Yi-Coder) is trained on code and is better at generating code snippets.

## Load a Model

**LM Studio** supports a wide range of open-weight models, including Llama-3, Phi-3, Yi Coder, and more. You can download models from the **"Discover"** section in the sidebar, which lists the available models, their parameter sizes, and their specializations.

Select a model based on your needs. If you want to generate creative text, download a model like Llama-3; if you need code snippets, try Yi Coder. Larger models require more resources, so choose a smaller model if your computer has limited power.

In this example, I'll download **Llama-3** with 8B parameters. Once you click the download button, the model will begin downloading.

![LM Studio discover models section with model options](https://assets.hongkiat.com/uploads/run-llm-locally-lm-studio/lm-studio-discover-models.jpg)

After downloading, load the model by clicking the **"Load Model"** button in the **"Chat"** section and selecting the model you downloaded.

![LM Studio load model interface](https://assets.hongkiat.com/uploads/run-llm-locally-lm-studio/lm-studio-load-model.jpg)

Once the model is loaded, you can start generating text: type your input in the text box and press Enter. The model can handle facts and general knowledge, and it is useful for creative writing, brainstorming, and generating ideas.

![LM Studio chat response example](https://assets.hongkiat.com/uploads/run-llm-locally-lm-studio/lm-studio-chat-response.jpg)
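Beyond the chat window, LM Studio can also expose the loaded model through a local, OpenAI-compatible server, which makes it easy to script against. Below is a minimal sketch using the official `openai` Python client; the port, the placeholder API key, and the model identifier are assumptions here, so check the values shown in your own LM Studio instance before running it.

```python
# Minimal sketch: chat with the locally loaded model over LM Studio's
# OpenAI-compatible local server. Assumes the server is running on its
# default address (http://localhost:1234) and that the `openai` Python
# package is installed (pip install openai).
from openai import OpenAI

# The local server does not check the API key; any placeholder string works.
client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")

response = client.chat.completions.create(
    # Placeholder identifier; replace with the model name shown in LM Studio.
    model="meta-llama-3-8b-instruct",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Give me three ideas for a short story."},
    ],
    temperature=0.7,
)

print(response.choices[0].message.content)
```

If the request fails, confirm that the local server is actually running and that the model name matches one of the models the server reports as loaded.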
## Chat with Documents

Since version 0.3, LM Studio has offered a **Chat with Documents** feature that lets you attach a document to the conversation. This is useful for generating text based on a specific document or for giving the model extra context.

For example, I'll upload [the Romeo and Juliet book from Project Gutenberg](https://www.gutenberg.org/ebooks/1513) and ask a couple of questions:

1. Who are the main characters in the story?
2. What is the main conflict in the story?

**LM Studio** will gather information from the document and answer the questions.

![LM Studio document chat feature example with Romeo and Juliet text](https://assets.hongkiat.com/uploads/run-llm-locally-lm-studio/lm-studio-rag.jpg)

Currently, this feature is experimental, so it may not always work perfectly. Providing as much context as possible in your query (specific terms, ideas, and the kind of content you expect) increases the chances of an accurate response. A bit of experimentation will help you find what works best.

Overall, I'm happy with the results so far; it answers the questions accurately.
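If you want to reproduce a rough version of this behaviour against the local server yourself, the sketch below shows the basic retrieval-augmented prompting idea: split the document into chunks, pick the chunks most relevant to the question, and pass them to the model as context. This is a simplified stand-in, not LM Studio's actual implementation, and the file path and model name are placeholders.

```python
# Conceptual sketch of retrieval-augmented prompting, similar in spirit to
# LM Studio's "Chat with Documents" feature (this is NOT its actual
# implementation). It splits a text file into chunks, picks the chunks that
# overlap most with the question, and sends them as context to the model
# running behind LM Studio's local server.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")

def chunk_text(text: str, size: int = 1200) -> list[str]:
    """Split the document into fixed-size character chunks."""
    return [text[i:i + size] for i in range(0, len(text), size)]

def top_chunks(question: str, chunks: list[str], k: int = 3) -> list[str]:
    """Rank chunks by naive keyword overlap with the question."""
    q_words = set(question.lower().split())
    scored = sorted(
        chunks,
        key=lambda c: len(q_words & set(c.lower().split())),
        reverse=True,
    )
    return scored[:k]

# "romeo-and-juliet.txt" is a placeholder path; point it at any plain-text file.
with open("romeo-and-juliet.txt", encoding="utf-8") as f:
    document = f.read()

question = "Who are the main characters in the story?"
context = "\n---\n".join(top_chunks(question, chunk_text(document)))

response = client.chat.completions.create(
    model="meta-llama-3-8b-instruct",  # placeholder; use your loaded model's name
    messages=[
        {"role": "system", "content": "Answer using only the provided excerpts."},
        {"role": "user", "content": f"Excerpts:\n{context}\n\nQuestion: {question}"},
    ],
)
print(response.choices[0].message.content)
```

Production RAG systems typically replace the keyword-overlap ranking with embedding-based similarity search, but the overall flow is the same.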
## Wrapping Up

**LM Studio** is a valuable tool for running LLM models locally on your computer, and we've explored features like using it as a chat assistant and chatting with documents. These features can boost productivity and creativity. If you're a developer, LM Studio can also run models specifically tuned for generating code.