{"id":74180,"date":"2025-10-24T21:00:28","date_gmt":"2025-10-24T13:00:28","guid":{"rendered":"https:\/\/www.hongkiat.com\/blog\/?p=74180"},"modified":"2025-10-01T16:08:16","modified_gmt":"2025-10-01T08:08:16","slug":"docker-llm-setup-guide","status":"publish","type":"post","link":"https:\/\/www.hongkiat.com\/blog\/docker-llm-setup-guide\/","title":{"rendered":"How to Run LLM in Docker"},"content":{"rendered":"<p>Large Language Models (LLMs) have changed how we build and use software. While cloud-based LLM APIs are great for convenience, there are plenty of reasons to run them locally, including better privacy, lower costs for experimentation, the ability to work offline, and faster testing without waiting on network delays.<\/p>\n<p>But running Large Language Models (LLMs) on your own machine can be a headache as it often involves dealing with complicated setups, hardware-specific issues, and performance tuning.<\/p>\n<p>This is where <a href=\"https:\/\/docs.docker.com\/ai\/model-runner\/\" target=\"_blank\" rel=\"noopener noreferrer\"><strong>Docker Model Runner<\/strong><\/a> comes in. At the time of this writing, it\u2019s currently in Beta, it is designed to simplify everything by packaging LLMs in easy-to-run Docker containers.<\/p>\n<p>Let\u2019s see how it works.<\/p>\n<h2>Requirements<\/h2>\n<p>Requirements differ depending on your operating system. Below are the minimum requirements for running <strong>Docker Model Runner<\/strong>.<\/p>\n<table>\n<tr>\n<th>Operating System<\/th>\n<th>Requirements<\/th>\n<\/tr>\n<tr>\n<td>macOS<\/td>\n<td>\n<ul>\n<li>Docker Desktop 4.40+<\/li>\n<li>Apple Silicon<\/li>\n<\/ul>\n<\/td>\n<\/tr>\n<tr>\n<td>Windows<\/td>\n<td>\n<ul>\n<li>Docker Desktop 4.41+<\/li>\n<li>NVIDIA GPUs with NVIDIA drivers 576.57+<\/li>\n<\/ul>\n<\/td>\n<\/tr>\n<\/table>\n<h2>Enabling Docker Model Runner<\/h2>\n<p>Once you have met the requirements, you can proceed with the installation and setup of <strong>Docker Model Runner<\/strong> with the following command.<\/p>\n<pre>\r\ndocker desktop enable model-runner\r\n<\/pre>\n<p>If you want to allow other apps to connect the Model Runner\u2019s endpoint, you\u2019ll need to enable TCP host access on a port. For example, to use port <code>5000<\/code>:<\/p>\n<pre>\r\ndocker desktop enable model-runner --tcp 5000\r\n<\/pre>\n<p>This will expose the Model Runner\u2019s endpoint on <code>localhost:5000<\/code>. You can change the port number to any other port you prefer or available in your host machine. The API is also OpenAI-compatible, so you can use it with any OpenAI-compatible client.<\/p>\n<h2>Running a Model<\/h2>\n<p>Models are pulled from <a href=\"https:\/\/hub.docker.com\/catalogs\/gen-ai\" target=\"_blank\" rel=\"noopener noreferrer\">Docker Hub<\/a> the first time you use them and will be stored locally, similar to a Docker image.<\/p>\n<figure>\n        <img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/assets.hongkiat.com\/uploads\/docker-llm-setup-guide\/docker-genai-catalog.jpg\" alt=\"Docker Hub GenAI models catalog\" width=\"1000\" height=\"600\">\n    <\/figure>\n<p>Let\u2019s say we want to run Gemma3, a quite powerful LLM from Google that we can use for various tasks like text generation, summarization, and more. To run it, we first pull the following command:<\/p>\n<pre>\r\ndocker model pull ai\/gemma3\r\n<\/pre>\n<p>Similar to pulling a Docker image, if the version is not specified, it will pull the <strong>latest<\/strong> version or variant. 
<h2>Running a Model</h2>
<p>Models are pulled from <a href="https://hub.docker.com/catalogs/gen-ai" target="_blank" rel="noopener noreferrer">Docker Hub</a> the first time you use them and are stored locally, similar to a Docker image.</p>
<figure>
        <img loading="lazy" decoding="async" src="https://assets.hongkiat.com/uploads/docker-llm-setup-guide/docker-genai-catalog.jpg" alt="Docker Hub GenAI models catalog" width="1000" height="600">
    </figure>
<p>Let’s say we want to run Gemma3, a quite powerful LLM from Google that we can use for various tasks like text generation, summarization, and more. We first pull it with the following command:</p>
<pre>
docker model pull ai/gemma3
</pre>
<p>Similar to pulling a Docker image, if no version is specified, this pulls the <strong>latest</strong> version or variant. In our case, that is the model with 4B parameters and a 131K context length. You can adjust the command to pull a different version or variant if needed, such as <code>ai/gemma3:1B-Q4_K_M</code> for the 1B version with quantization.</p>
<p>Alternatively, you can click <strong>“Pull”</strong> in Docker Desktop and select which version you’d like to pull:</p>
<figure>
        <img loading="lazy" decoding="async" src="https://assets.hongkiat.com/uploads/docker-llm-setup-guide/docker-desktop-pull-ai.jpg" alt="Docker Desktop model pull interface" width="1000" height="600">
    </figure>
<p>To run the model, we can use the <code>docker model run</code> command. For example, here I ask it about the first iPhone release date:</p>
<pre>
docker model run ai/gemma3 "When was the first iPhone released?"
</pre>
<p>Sure enough, it returns the correct answer:</p>
<figure>
        <img loading="lazy" decoding="async" src="https://assets.hongkiat.com/uploads/docker-llm-setup-guide/docker-desktop-run-ai.jpg" alt="Gemma3 model answering iPhone question" width="1000" height="600">
    </figure>
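<p>To see which models have been downloaded to your machine, you can list them with the Model Runner CLI:</p>
<pre>
# List the models stored locally by Docker Model Runner
docker model list
</pre>
<p>Each entry shows the model name along with details such as its size on disk, which is handy before pulling larger variants.</p>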
<h2>Running with Docker Compose</h2>
<p>What’s interesting here is that you can also run models with Docker Compose. Instead of running a model on its own, you can define it alongside your other services in your <code>compose.yaml</code> file.</p>
<p>For example, assume we want to run a WordPress site and also use the Gemma3 model for text generation, so that we can quickly draft blog posts and articles within WordPress. We can arrange our <code>compose.yaml</code> like this:</p>
<pre>
services:
  app:
    image: wordpress:latest
    models:
      - gemma
models:
  gemma:
    model: ai/gemma3
</pre>
<p>As mentioned, the Model Runner’s endpoint is accessible both internally, from services connected to the model in the Docker network, and externally, from your host machine, as shown below.</p>
<table>
<thead>
<tr>
<th style="width: 200px;">Access</th>
<th>Endpoint</th>
</tr>
</thead>
<tbody>
<tr>
<td>From a container</td>
<td><code>http://model-runner.docker.internal/engines/v1</code></td>
</tr>
<tr>
<td>From the host machine</td>
<td><code>http://localhost:5000/engines/v1</code>, assuming you set the TCP port to <code>5000</code></td>
</tr>
</tbody>
</table>
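<p>From inside a service attached to the model, you can reach the same OpenAI-compatible API through the internal endpoint listed above. Here is a rough sketch of what a request from the <code>app</code> container could look like, assuming <code>curl</code> is available in the image:</p>
<pre>
# Open a shell in the app container first, e.g.:
#   docker compose exec app sh
# Then call the chat completions endpoint on the internal hostname
curl http://model-runner.docker.internal/engines/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "ai/gemma3",
    "messages": [
      { "role": "user", "content": "Draft a short intro for a blog post about Docker." }
    ]
  }'
</pre>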
<p>Since the endpoint is OpenAI-compatible, you can use it with any OpenAI-compatible client, <a href="https://platform.openai.com/docs/libraries" target="_blank" rel="noopener noreferrer nofollow">such as the official SDK libraries</a>. For example, below is how we could use it with the OpenAI JavaScript SDK through the Chat Completions API:</p>
<pre>
import OpenAI from "openai";

const client = new OpenAI({
  // No API key is needed for the local Model Runner endpoint
  apiKey: "",
  baseURL: "http://localhost:5000/engines/v1",
});

const response = await client.chat.completions.create({
  model: "ai/gemma3",
  messages: [
    { role: "user", content: "When was the first iPhone released?" },
  ],
});

console.log(response.choices[0].message.content);
</pre>
<p>And that’s it! You can now run LLMs in Docker with ease and use them in your applications.</p>
<h2>Wrapping up</h2>
<p><strong>Docker Model Runner</strong> is a powerful tool that simplifies the process of running Large Language Models locally. It abstracts away the complexities of setup and configuration, especially if you’re working with multiple models, services, and teams, so you and your team can focus on building applications without worrying much about the underlying setup or configuration.</p>