Learn how to download multiple URLs with a single curl command, using a simple bash script and a text file with the list of URLs.
Curl is a popular command-line tool for transferring data over various protocols. It can be used to download files from the web, as well as upload, post, or delete data. But what if you want to download multiple files from different URLs with a single curl command? How can you do that without typing each URL separately or creating a long and complex command?
In this article, we will show you how to download multiple URLs with a single curl command, using a simple bash script and a text file with the list of URLs. This method is useful if you have a large number of URLs to download, or if you want to automate the process of downloading files from the web. We will also explain how the script works and how you can customize it to suit your needs.
Table of Contents
- Prerequisites
- The Bash Script
- How the Script Works
- How to Customize the Script
- Frequently Asked Questions (FAQs)
- Question: How can I download multiple URLs with a single curl command on Windows?
- Question: How can I download multiple URLs with a single curl command in parallel?
- Question: How can I download multiple URLs with a single curl command and rename the files?
- Summary
Prerequisites
Before we start, you will need the following:
- A Linux or macOS system with curl installed. You can check if you have curl by typing curl --version in your terminal. If you don’t have curl, you can install it with your package manager, such as apt or brew.
- A text file with the list of URLs that you want to download. Each URL should be on a separate line, and the file should have a .txt extension. For example, you can create a file called urls.txt with the following content:
https://example.com/file1.zip
https://example.com/file2.pdf
https://example.com/file3.jpg
- A directory where you want to save the downloaded files. You can use any directory that you have write permission to, such as your home directory or a subdirectory. For example, you can create a directory called downloads with the command mkdir downloads. (The sketch after this list shows both setup steps together.)
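As a quick sketch, here is one way to create both the URL list and the download directory from the shell, using the example URLs above:
# Create the list of URLs, one per line
printf '%s\n' \
  'https://example.com/file1.zip' \
  'https://example.com/file2.pdf' \
  'https://example.com/file3.jpg' > urls.txt

# Create the output directory (-p avoids an error if it already exists)
mkdir -p downloads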
The Bash Script
The bash script that we will use to download multiple URLs with a single curl command is very simple. It consists of only a few lines of code:
#!/bin/bash
# Read urls.txt line by line and download each URL into the current directory
while IFS= read -r url; do
    curl -O "$url"
done < urls.txt
Let’s save this script as download.sh in the same directory where we have the urls.txt file and the downloads directory. Then, let’s make the script executable with the command chmod +x download.sh.
To run the script, we just need to type ./download.sh in the terminal, and it will start downloading the files from the URLs in the urls.txt file. The downloaded files will be saved in the current directory, with the same names as the original files.
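For reference, the complete sequence of commands is:
chmod +x download.sh   # make the script executable (only needed once)
./download.sh          # download every URL listed in urls.txt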
How the Script Works
The script works by using a while loop to read each line from the urls.txt file and assign it to a variable called url. The IFS= and -r flags keep read from trimming whitespace or mangling backslashes in the URLs. For each line, the script runs curl -O to download the file; the -O option tells curl to use the remote file name as the local file name. The loop repeats until it reaches the end of the urls.txt file.
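For the example urls.txt shown earlier, the loop is equivalent to running these three commands one after another:
curl -O "https://example.com/file1.zip"
curl -O "https://example.com/file2.pdf"
curl -O "https://example.com/file3.jpg"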
How to Customize the Script
The script that we have shown you is very basic and does not have any error handling or progress indication. However, you can easily customize it to add more features or options. Here are some examples of how you can modify the script:
- To change the directory where the files are saved, you can use the -o option instead of the -O option, and specify the output file name with the directory path. For example, you can change the curl -O "$url" line to curl -o "downloads/$(basename "$url")" "$url", and it will save the files in the downloads directory, with the same names as the original files. The basename command extracts the file name from the URL.
- To show the progress of each download, you can use the -# option, which displays a progress bar instead of the default statistics. For example, you can change the curl -O "$url" line to curl -# -O "$url", and it will show a progress bar like this:
############## 23.5%
- To handle errors or failures, you can use the -f option, which tells curl to treat HTTP errors such as a 404 as failures: instead of saving the server’s error page, curl exits with a non-zero code. You can then check the exit code and print an error message or perform some other action. For example, you can change the curl -O "$url" line to curl -f -O "$url", and then add the following lines after it (a combined sketch of all three customizations follows this list):
if [ $? -ne 0 ]; then
echo "Download failed: $url"
fi
This will print an error message if the download fails, such as:
Download failed: https://example.com/file4.zip
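Putting these customizations together, here is one possible version of the script, assuming the downloads directory from the prerequisites exists:
#!/bin/bash
# Download every URL in urls.txt into ./downloads, showing a progress bar
# and reporting any URL that fails (for example, with an HTTP 404)
while IFS= read -r url; do
    if ! curl -f -# -o "downloads/$(basename "$url")" "$url"; then
        echo "Download failed: $url" >&2
    fi
done < urls.txt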
Frequently Asked Questions (FAQs)
Here are some frequently asked questions about downloading multiple URLs with a single curl command:
Question: How can I download multiple URLs with a single curl command on Windows?
Answer: Recent versions of Windows 10 and 11 ship with a native curl.exe, so you can run the same curl commands from the Command Prompt or PowerShell. To run the bash script itself, you will need a bash shell on Windows, such as Git Bash, Cygwin, or WSL. Alternatively, you can use a PowerShell script and the Invoke-WebRequest cmdlet instead. Note that in Windows PowerShell, curl is an alias for Invoke-WebRequest, so type curl.exe to get the real curl.
Question: How can I download multiple URLs with a single curl command in parallel?
Answer: You can use the -Z (--parallel) option, available in curl 7.66.0 and later, which tells curl to perform the downloads concurrently over multiple connections. To pass the URLs from a file, combine it with the -K option, which tells curl to read its arguments from a config file. Note that -K expects config syntax rather than a bare list, so each URL must be written as url = "...". For example, with a config file called urls-config.txt containing:
url = "https://example.com/file1.zip"
url = "https://example.com/file2.pdf"
url = "https://example.com/file3.jpg"
you can download all of the files in parallel with the following command:
curl -Z --remote-name-all -K urls-config.txt
The --remote-name-all option applies -O to every URL, so the files are saved in the current directory with the same names as the original files. Note that this method does not use a bash script.
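If you would rather keep the plain urls.txt from earlier, an alternative sketch (using xargs rather than a single curl command) is to run several curl processes at once:
# Run up to 4 curl processes at a time, passing one URL to each
xargs -n 1 -P 4 curl -O < urls.txt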
Question: How can I download multiple URLs with a single curl command and rename the files?
Answer: The -o option accepts #N variables in the output file name, but these refer to globbing patterns in the URL itself, such as numeric ranges like [1-3] or alternatives like {a,b,c}; they do not extract segments from arbitrary URLs, so they cannot be combined with a plain URL list. For example, the following command downloads file1.zip through file3.zip and saves them as archive-1.zip through archive-3.zip:
curl "https://example.com/file[1-3].zip" -o "archive-#1.zip"
To rename files downloaded from an arbitrary list of URLs, extend the bash script instead: put the URL and the desired output name on each line of the list, and read both fields in the loop, as shown below.
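A minimal sketch of that approach, assuming a hypothetical file called renames.txt with one "URL name" pair per line (for example, https://example.com/file1.zip first-archive.zip):
#!/bin/bash
# renames.txt (hypothetical): each line holds a URL and the desired
# local file name, separated by a space
while read -r url name; do
    curl -f -o "$name" "$url"
done < renames.txt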
Summary
In this article, we have shown you how to download multiple URLs with a single curl command, using a simple bash script and a text file with the list of URLs. We have also explained how the script works and how you can customize it to add more features or options. We hope that this article has been helpful and informative for you. If you have any questions or feedback, please feel free to leave a comment below.
Disclaimer: This article is for educational purposes only and does not constitute professional advice. The author and the publisher are not liable for any damages or losses that may result from the use of the information or code in this article. Always test the code before using it in a production environment.