How to Use wget in Linux

Learn how to use wget in Linux to download files, websites, and more with ease.

Unlike the curl command, wget is typically used to download entire websites or specific files from a server to the user's local system, and it can do this recursively: given the right options, it navigates through a website, following links to download the entire site, including text and media files.

Here are several common ways to use the wget command:

Basic Usage

The most straightforward way to use wget is to type wget followed by the URL of the file you want to download.

wget http://example.com/file.zip
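
By default, wget saves the file in the current directory under its remote name. If you would rather pick the local filename yourself, the -O option does that (the filenames below are just placeholders):

wget -O archive.zip http://example.com/file.zip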

1. Specify Download Directory

If you want to download the file to a specific directory, you can use the -P option followed by the path to the directory.

wget -P /path/to/directory http://example.com/file.zip
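
The long form of the option is --directory-prefix, and wget will create the target directory if it does not already exist:

wget --directory-prefix=/path/to/directory http://example.com/file.zip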

2. Download in the Background

When downloading a large file, you may want to run the transfer in the background so it doesn't tie up your terminal. You can do this with the -b option.

wget -b http://example.com/large-file.zip
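
When started with -b, wget prints the process ID and writes its progress to wget-log in the current directory; the -o option lets you choose a different log file. A quick sketch of monitoring such a download (the log file name is arbitrary):

wget -b -o download.log http://example.com/large-file.zip
tail -f download.log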

3. Resume an Interrupted Download

If a download is interrupted, you can resume it from where it left off with the -c (--continue) option.

wget -c http://example.com/large-file.zip
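
The -c and -b options combine naturally; assuming a partial large-file.zip is already sitting in the current directory, this resumes it in the background:

wget -c -b http://example.com/large-file.zip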

4. Limit Download Speed

If you don’t want wget to use all available network bandwidth, you can limit the download speed with the --limit-rate option.

wget --limit-rate=200k http://example.com/file.zip
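
The rate accepts k (kilobytes) and m (megabytes) suffixes, so a 2 MB/s cap looks like this:

wget --limit-rate=2m http://example.com/file.zip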

5. Download Multiple Files

If you want to download multiple files, you can specify them all at once.

wget http://example.com/file1.zip http://example.com/file2.zip

Alternatively, you can put all URLs in a file (one URL per line) and use the -i option.

wget -i urls.txt
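
For this example, urls.txt would simply contain one placeholder URL per line:

http://example.com/file1.zip
http://example.com/file2.zip
http://example.com/file3.zip

Options such as -c or --limit-rate can be combined with -i and apply to every file in the list.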

6. Download a Full Website

If you want to download a full website for offline viewing, you can use the -r (or --recursive) option.

wget --recursive --no-clobber --page-requisites --html-extension --convert-links --restrict-file-names=windows --domains example.com --no-parent http://www.example.com

This command will download the entire www.example.com website; a shorter variant using the --mirror shorthand is shown after the list. The options used here have the following roles:

  • --recursive: download the entire Web site.
  • --domains example.com: don’t follow links outside example.com.
  • --no-parent: don’t ascend to the parent directory when recursing, so only pages below the starting URL are downloaded.
  • --page-requisites: get all the elements that compose the page (images, CSS and so on).
  • --html-extension: save files with the .html extension.
  • --convert-links: convert links so that they work locally, off-line.
  • --restrict-file-names=windows: modify filenames so that they will work in Windows as well.
  • --no-clobber: don’t overwrite any existing files (used in case the download is interrupted and resumed).
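
If you don’t need this much fine-grained control, wget’s --mirror option bundles recursion, timestamping, and infinite depth (it is shorthand for -r -N -l inf --no-remove-listing). A rough equivalent of the command above, with a polite one-second pause between requests, might look like this:

wget --mirror --page-requisites --convert-links --adjust-extension --no-parent --wait=1 http://www.example.com

Note that --adjust-extension is the current name for the older --html-extension option.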

7. Download Files from an FTP Server

You can use wget to download files from an FTP server. If a username and password are required, use the format: ftp://user:password@server/path.

wget ftp://user:password@ftp.example.com/file.zip
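
Putting the password in the URL exposes it to anyone who can read your shell history or the process list, so wget also accepts the credentials as separate options (user and password here are placeholders):

wget --ftp-user=user --ftp-password=password ftp://ftp.example.com/file.zip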