Download Wikipedia pages using API (in Python)

Web, Mobile & Software Dev / Web Development | Posted 2 years ago

Fixed Price

Delivery by June 7, 2013

I have a list of 644 Wikipedia pages that you must process using Python (a CSV file with the list is attached).

For each page you must provide me with:

1. an XML file with a list of all revision ids

2. for each revision, download the relevant Wikipedia page and store it as an HTML document
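For reference, here is a minimal sketch of what a step 1 script could look like. It assumes the standard MediaWiki query API at en.wikipedia.org and the requests library; the function name and the XML layout it writes are illustrative only, since the exact API link in the folder layout below is cut off.

    import os
    import requests
    import xml.etree.ElementTree as ET

    API_URL = "https://en.wikipedia.org/w/api.php"  # assumed endpoint

    def save_revision_list(title):
        """Collect every revision id for `title` and write ./<title>/<title>.xml."""
        os.makedirs(title, exist_ok=True)
        params = {
            "action": "query",
            "prop": "revisions",
            "titles": title,
            "rvprop": "ids",
            "rvlimit": "max",   # up to 500 revision ids per request
            "format": "xml",
            "continue": "",     # opt in to the current continuation style
        }
        revids = []
        while True:
            resp = requests.get(API_URL, params=params, timeout=30)
            resp.raise_for_status()
            api = ET.fromstring(resp.content)
            revids += [rev.get("revid") for rev in api.iter("rev")]
            cont = api.find("continue")   # absent once the last batch arrives
            if cont is None:
                break
            params["rvcontinue"] = cont.get("rvcontinue")
            params["continue"] = cont.get("continue")
        # Write a simple XML list of the collected revision ids.
        root = ET.Element("revisions", page=title)
        for revid in revids:
            ET.SubElement(root, "revision", id=revid)
        ET.ElementTree(root).write(os.path.join(title, title + ".xml"),
                                   encoding="utf-8", xml_declaration=True)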

The final output is organized as follows:

1. a folder for each Wikipedia page (e.g. ./Pope_John_Paul_II/)

2. an XML file (Pope_John_Paul_II.xml) in that folder, containing the list of revision ids (use the following API:

3. a revisions folder with an HTML file named after each revision id (e.g. ./Pope_John_Paul_II/revisions/552708734.html), each downloaded from Wikipedia's mobile version -- in this case

4. the Python scripts for this entire process, which I will use to replicate your data collection.
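A matching sketch for steps 2 and 3; the example mobile URL above is also cut off, so the en.m.wikipedia.org "oldid" form used here is an assumption:

    import os
    import time
    import requests
    import xml.etree.ElementTree as ET

    MOBILE_URL = "https://en.m.wikipedia.org/w/index.php"  # assumed URL form

    def download_revisions(title):
        """Fetch every revision listed in ./<title>/<title>.xml as HTML."""
        rev_dir = os.path.join(title, "revisions")
        os.makedirs(rev_dir, exist_ok=True)
        tree = ET.parse(os.path.join(title, title + ".xml"))
        for rev in tree.getroot().iter("revision"):
            revid = rev.get("id")
            out_path = os.path.join(rev_dir, revid + ".html")
            if os.path.exists(out_path):   # skip files from an earlier run
                continue
            resp = requests.get(MOBILE_URL, params={"oldid": revid}, timeout=30)
            resp.raise_for_status()
            with open(out_path, "w", encoding="utf-8") as f:
                f.write(resp.text)
            time.sleep(0.5)                # stay polite to Wikipedia's servers

A top-level driver would then read the attached CSV (the filename here is hypothetical) and run both steps for each of the 644 titles:

    import csv

    with open("pages.csv", newline="", encoding="utf-8") as f:  # hypothetical filename
        for row in csv.reader(f):
            save_revision_list(row[0])
            download_revisions(row[0])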


Client Activity on this Job

Last Viewed: 2 years ago

Proposals: 6

Hired: 1

About the Client

5.00 rating, 12 reviews

Somerville, United States

32 Jobs Posted
66% Hire Rate, 1 Open Job

$796 Total Spent
23 Hires, 3 Active

$5.59/hr Avg Hourly Rate Paid
59 Hours

Member Since Oct 23, 2012