Download Wikipedia pages using API (in Python)

Web Development · Posted 3 years ago

Fixed Price

Budget: $50
Start Date: May 7, 2013
Delivery by: June 7, 2013


I have a list of 644 Wikipedia pages, which you must process using Python (a CSV file is attached).

For each page, you must provide:

1. an XML file listing all revision ids

2. for each revision, the relevant Wikipedia page downloaded and stored as an HTML document (a sketch of this step follows below)
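
A minimal sketch of the revision-list step, assuming the standard MediaWiki prop=revisions query (the posting's exact API link is truncated in the spec below); one plausible approach using the requests library, not the client's prescribed script:

    import requests

    API_URL = "https://en.wikipedia.org/w/api.php"

    def fetch_revision_xml(title):
        """Return the raw XML listing of revision ids for one page title."""
        params = {
            "action": "query",
            "prop": "revisions",
            "titles": title,
            "rvprop": "ids|timestamp",
            "rvlimit": "max",  # up to 500 revisions per anonymous request;
                               # longer histories need rvcontinue paging (omitted here)
            "format": "xml",
        }
        resp = requests.get(API_URL, params=params)
        resp.raise_for_status()
        return resp.text

    with open("Pope_John_Paul_II.xml", "w", encoding="utf-8") as f:
        f.write(fetch_revision_xml("Pope_John_Paul_II"))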

The final output is organized as follows:

1. a folder for each Wikipedia page (e.g. ./Pope_John_Paul_II/)

2. an XML file (Pope_John_Paul_II.xml) in that folder, containing the list of revisions (obtained via the MediaWiki revisions API)

3. a revisions folder with an HTML file named after each revision id (e.g. ./Pope_John_Paul_II/revisions/552708734.html), each downloaded from Wikipedia's mobile version (in this case, https://en.m.wikipedia.org/w/index.php?oldid=552708734)

4. the Python scripts for this entire process, which I will use to replicate your data collection (an end-to-end sketch follows this list).
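
A hedged end-to-end sketch of the layout above, reusing fetch_revision_xml from the earlier snippet. The mobile oldid URL pattern and the politeness delay are my assumptions, not requirements stated in the posting:

    import os
    import time
    import xml.etree.ElementTree as ET
    import requests

    MOBILE_URL = "https://en.m.wikipedia.org/w/index.php"

    def download_page_history(title):
        """Build ./<title>/<title>.xml plus ./<title>/revisions/<revid>.html."""
        page_dir = os.path.join(".", title)
        rev_dir = os.path.join(page_dir, "revisions")
        os.makedirs(rev_dir, exist_ok=True)

        # Deliverable 2: the revision list as XML, saved next to revisions/.
        xml_text = fetch_revision_xml(title)  # from the earlier snippet
        with open(os.path.join(page_dir, title + ".xml"), "w", encoding="utf-8") as f:
            f.write(xml_text)

        # Deliverable 3: one mobile HTML file per <rev revid="..."> in the XML.
        for rev in ET.fromstring(xml_text).iter("rev"):
            revid = rev.attrib["revid"]
            resp = requests.get(MOBILE_URL, params={"oldid": revid})
            resp.raise_for_status()
            with open(os.path.join(rev_dir, revid + ".html"), "w", encoding="utf-8") as f:
                f.write(resp.text)
            time.sleep(0.5)  # assumed politeness delay, not in the posting

    download_page_history("Pope_John_Paul_II")

Parsing the saved XML, rather than a separate JSON response, keeps deliverables 2 and 3 in sync: the HTML files are generated from exactly the revision ids recorded in the XML file. Looping this over the 644 titles from the attached CSV (e.g. with Python's csv module) would cover deliverable 4.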

