First, extract the URLs of the images
craw.py

import urllib2, re

# Fetch the page source
req = urllib2.Request('http://yourwebsite.com/path/to/webpage')
website = urllib2.urlopen(req)
html = website.read()

# Find all PNG image URLs in the page source
imgs = re.findall(r'"(https?://.*?\.png)"', html)
for i in imgs:
    print i
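
If the regex misses some images (relative paths, single-quoted attributes, and so on), here is a minimal sketch of a more robust alternative using Python 2's built-in HTMLParser to collect every img src attribute; the class name ImgParser is just a placeholder:

import urllib2
from HTMLParser import HTMLParser

class ImgParser(HTMLParser):
    def __init__(self):
        HTMLParser.__init__(self)
        self.urls = []

    def handle_starttag(self, tag, attrs):
        # Collect the src attribute of every img tag
        if tag == 'img':
            for name, value in attrs:
                if name == 'src':
                    self.urls.append(value)

html = urllib2.urlopen('http://yourwebsite.com/path/to/webpage').read()
parser = ImgParser()
parser.feed(html)
for url in parser.urls:
    print url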

Then redirect the output to a text file:

$ python craw.py > content.txt

Use a shell script to download them
craw.sh

#!/bin/bash
file="./content.txt"
while IFS= read -r line
do
    wget "$line"
done < "$file"
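
As a side note, wget can also read the URL list directly with wget -i content.txt. And if wget is not installed, here is a rough Python equivalent of the loop above, a sketch assuming content.txt holds one URL per line:

import urllib2, os

# Download every URL listed in content.txt, one per line
with open('content.txt') as f:
    for line in f:
        url = line.strip()
        if not url:
            continue
        # Save under the last path segment of the URL, e.g. image.png
        filename = os.path.basename(url)
        data = urllib2.urlopen(url).read()
        with open(filename, 'wb') as out:
            out.write(data)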

You’re done!
