COMP0233: Research Software Engineering With Python


Getting data from the Internet

We've already seen how to obtain data from our local file system.

The other common place today that we might want to obtain data is from the internet.

It's very common today to treat the web as a source and store of information; we need to be able to programmatically download data, and place it in Python objects.

We may also want to be able to programmatically upload data, for example, to automatically fill in forms.

This can be really powerful if we want to, for example, perform an automated meta-analysis across a selection of research papers.


All internet resources are defined by a Uniform Resource Locator (URL).


A URL consists of:

  • A scheme (http, https, ssh, ...)
  • A host (static-maps.yandex.ru, the name of the remote computer you want to talk to)
  • A port (optional; most protocols have a default port associated with them, e.g. 80 for http, 443 for https)
  • A path (like a file path on the machine, here it is /1.x/)
  • A query part after a ? (optional; usually ampersand-separated key=value parameters, e.g. size=400,400 or z=10)

Supplementary materials: These can actually be different for different protocols, the above is a simplification. You can see more, for example, at the wikipedia article about the URI scheme.
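The components above can be inspected programmatically. As a minimal sketch, the standard library's urllib.parse can split an example URL (the same shape as the map URL used later in this chapter) into its parts:

```python
from urllib.parse import urlparse

# Split an example URL into its components:
url = "https://static-maps.yandex.ru/1.x/?size=400,400&z=10"
parts = urlparse(url)
print(parts.scheme)  # https
print(parts.netloc)  # static-maps.yandex.ru (the host)
print(parts.path)    # /1.x/
print(parts.query)   # size=400,400&z=10
```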

URLs are not allowed to include all characters; we need to, for example, "escape" a space that appears inside the URL, replacing it with %20, so e.g. a request of http://some example.com/ would need to be http://some%20example.com/

Supplementary materials: The code used to replace each character is that character's ASCII value, written in hexadecimal (a space is character 32, which is 0x20 in hexadecimal, hence %20).

Supplementary materials: The escaping rules are quite subtle. See the wikipedia article for more detail. The standard library's urllib.parse module provides functions such as quote and urlencode that can take care of this for you.
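As a quick sketch of what those standard-library helpers do: quote escapes unsafe characters in a URL fragment, and urlencode builds an escaped query string from a dictionary of parameters:

```python
from urllib.parse import quote, urlencode

# Escape a space inside a URL fragment:
print(quote("some example"))  # some%20example

# Build an escaped, ampersand-separated query string:
print(urlencode({'size': '400,400', 'lang': 'en_US'}))
# size=400%2C400&lang=en_US
```

Note that urlencode also escapes the comma (as %2C), since it is a reserved character.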


The Python requests library can help us manage and manipulate URLs. It is easier to use than the urllib library that is part of the standard library, and is included with distributions such as Anaconda and Canopy. It sorts out escaping, parameter encoding, and so on for us.

To request the above URL, for example, we write:

In [2]:
import requests
In [3]:
response = requests.get("https://static-maps.yandex.ru/1.x/",
                        params={
                            'size': '400,400',
                            'll': '-0.1275,51.51',
                            'z': 10,
                            'l': 'sat',
                            'lang': 'en_US'
                        })

When we do a request, the result comes back as text (response.text) or as raw bytes (response.content). For the png image requested above, neither of these is very readable.

Just as for file access, therefore, we will need to send the data we get to a Python module which understands that file format.

Again, it is important to separate the transport model (e.g. a file system, or an "http request" for the web) from the data model of the data that is returned.

Example: Sunspots

Let's try to get something scientific: the sunspot cycle data from SILSO:

In [5]:
spots = requests.get('http://www.sidc.be/silso/INFO/snmtotcsv.php').text
In [6]:
spots[0:80]
'1749;01;1749.042;  96.7; -1.0;   -1;1\n1749;02;1749.123; 104.3; -1.0;   -1;1\n1749'

This looks like semicolon-separated data, with different records on different lines. (Line separators come out as \n)

There are many, many scientific datasets which can now be downloaded like this; integrating the download into your data pipeline can help to keep your data flows organised.

Writing our own Parser

We'll need a Python library to handle semicolon-separated data like the sunspot data.

You might be thinking: "But I can do that myself!":

In [7]:
lines = spots.split("\n")
lines[0:5]
['1749;01;1749.042;  96.7; -1.0;   -1;1',
 '1749;02;1749.123; 104.3; -1.0;   -1;1',
 '1749;03;1749.204; 116.7; -1.0;   -1;1',
 '1749;04;1749.288;  92.8; -1.0;   -1;1',
 '1749;05;1749.371; 141.7; -1.0;   -1;1']
In [8]:
years = [line.split(";")[0] for line in lines]
But don't: what if, for example, one of the records contains the separator character inside it? The convention in most such formats is to put the content in quotes, so that, for example,

"something; something"; something; something

has three fields, the first of which is

something; something

The naive code above would give four fields, of which the first is

"something

(with a spurious opening quote, and the quoted field split in two).
You'll never manage to get all the corner cases right yourself, so you'll be better off using a library to do it.
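For data like this, the standard library's csv module already handles delimiters and quoting. A minimal sketch, using the sample records shown above and a made-up "tricky" line with a quoted field:

```python
import csv
import io

# The first two records of the sunspot data, as shown above:
sample = ('1749;01;1749.042;  96.7; -1.0;   -1;1\n'
          '1749;02;1749.123; 104.3; -1.0;   -1;1\n')
rows = list(csv.reader(io.StringIO(sample), delimiter=';'))
years = [row[0] for row in rows]
print(years)  # ['1749', '1749']

# Quoted fields containing the separator are handled correctly:
tricky = '"something; something"; something; something\n'
row = next(csv.reader(io.StringIO(tricky),
                      delimiter=';', skipinitialspace=True))
print(len(row))  # 3 fields, not 4
print(row[0])    # something; something
```

Here io.StringIO wraps the downloaded text so the csv reader can treat it like an open file; skipinitialspace=True drops the spaces that follow each separator.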

Writing data to the internet

Note that we're using requests.get. get is used to receive data from the web. You can also use post to fill in a web-form programmatically.

Supplementary material: Learn about using post with requests.

Supplementary material: Learn about the different kinds of http request: Get, Post, Put, Delete...
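As a sketch of what a post call would transmit (the URL and form field names below are made up for illustration), requests lets us prepare a request without actually sending it, so we can inspect the body:

```python
import requests

# Prepare (but do not send) a hypothetical form submission:
req = requests.Request('POST', 'https://example.com/submit',
                       data={'name': 'Grace Hopper', 'comment': 'Hello'})
prepared = req.prepare()
print(prepared.method)  # POST
print(prepared.body)    # name=Grace+Hopper&comment=Hello
```

The form data is encoded into the request body, rather than into the URL as it would be for a get request with params.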

This can be used for all kinds of things, for example, to programmatically add data to a web resource. It's all well beyond our scope for this course, but it's important to know it's possible, and start to think about the scientific possibilities.