Getting data from the internet¶
We've seen how to obtain data from our local file system.
The other common place today that we might want to obtain data is from the internet.
It's very common today to treat the web as a source and store of information; we need to be able to programmatically download data, and place it in Python objects.
We may also want to be able to programmatically upload data, for example, to automatically fill in forms.
This can be really powerful if we want to, for example, do automated meta-analysis across a selection of research papers.
Uniform resource locators¶
All internet resources are defined by a uniform resource locator (URL), which is a particular type of uniform resource identifier (URI). For example:
"https://mt0.google.com:443/vt?x=658&y=340&z=10&lyrs=s"
A URL consists of:
- A scheme (http, hypertext transfer protocol; https, hypertext transfer protocol secure; ssh, secure shell; ...)
- A host (mt0.google.com, the name of the remote computer you want to talk to)
- A port (optional; most protocols have a typical port associated with them, e.g. 80 for HTTP, 443 for HTTPS)
- A path (analogous to a file path on the machine, here it is just vt)
- A query part after a ? (optional; usually &-separated parameters, e.g. x=658 or z=10)
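We can pick these components out of a URL programmatically; here is a quick sketch using the standard library's urllib.parse.urlsplit function:

from urllib.parse import urlsplit

# Split the example URL into its components
parts = urlsplit("https://mt0.google.com:443/vt?x=658&y=340&z=10&lyrs=s")
parts.scheme    # 'https'
parts.hostname  # 'mt0.google.com'
parts.port      # 443
parts.path      # '/vt'
parts.query     # 'x=658&y=340&z=10&lyrs=s'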
Supplementary materials: These can actually be different for different protocols; the above is a simplification. You can see more, for example, at the Wikipedia article on URIs.
URLs are not allowed to include all characters; we need to, for example, escape a space that appears inside the URL, replacing it with %20, so e.g. a request of http://some example.com/ would need to be http://some%20example.com/.
Supplementary materials: The code used to replace each character is the hexadecimal value of its ASCII code.
Supplementary materials: The escaping rules are quite subtle. See the Wikipedia article on percent-encoding. The standard library's urllib.parse module provides functions such as urlencode that can take care of this for you.
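For example, a minimal sketch of escaping with the standard library (urllib.parse.quote escapes a single string; urllib.parse.urlencode builds an escaped query string from a dictionary of parameters):

from urllib.parse import quote, urlencode

# Escape the space in a string destined for a URL
quote("some example")  # returns 'some%20example'

# Build an escaped query string from a dictionary of parameters
urlencode({"x": 658, "y": 340, "z": 10, "lyrs": "s"})  # 'x=658&y=340&z=10&lyrs=s'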
Requests¶
The Python Requests library can help us manipulate URLs and request the content associated with them. It is easier to use than the urllib library that is part of the standard library, and is included with Anaconda and Canopy. It sorts out escaping, parameter encoding, and so on for us.
# sending requests to the web is not fully supported on jupyterlite yet, and the
# cells below might error out on the browser (jupyterlite) version of this notebook
import requests
To request the above URL, for example, we write:
response = requests.get(
url="https://mt0.google.com:443/vt",
params={'x': 658, 'y': 340, 'lyrs': 's', 'z': 10}
)
The returned object is an instance of the requests.Response class:
response
isinstance(response, requests.Response)
The Response class defines various useful attributes associated with the response. For example, we can check the status code for our request, with a value of 200 indicating a successful request:
response.status_code
We can also check more directly whether the response was successful using the boolean Response.ok attribute:
response.ok
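If a failed request should stop our analysis immediately, the Response.raise_for_status method raises an exception (requests.HTTPError) for unsuccessful status codes, and does nothing for successful ones:

# Does nothing here, since the request succeeded; would raise
# requests.HTTPError for a 4xx or 5xx status code
response.raise_for_status()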
We can get the URL that was requested using the Response.url attribute:
response.url
When we make a request, the associated response content, accessible via the Response.content attribute, is returned as bytes. For the JPEG image requested above, this isn't very readable:
type(response.content)
response.content[:10]
We can also get the content as a string using the Response.text attribute, though this is even less readable here as some of the returned bytes do not have corresponding character encodings:
type(response.text)
response.text[:10]
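The response headers tell us what format the server says it is returning, which is often the first clue as to how the bytes should be decoded:

# The server's declared format for the response body
response.headers['Content-Type']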
To get a more useful representation of the data, we will therefore need to process the content we get using a Python function which understands the byte-encoding of the corresponding file format.
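For instance, to interpret the bytes above as an image we can hand them to an image library. A minimal sketch, assuming the Pillow package is installed (it is not otherwise used in this notebook):

# Decode the raw bytes into an image object
# (assumes the Pillow package is available)
from io import BytesIO
from PIL import Image

image = Image.open(BytesIO(response.content))
image.size  # the (width, height) of the downloaded map tile in pixels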
Again, it is important to separate the transport model (e.g. a file system, or an HTTP request for the web) from the data model of the data that is returned.
Example: sunspots¶
Let's try to get something scientific: the sunspot cycle data from the Sunspot Index and Long-term Solar Observations website
spots = requests.get('http://www.sidc.be/silso/INFO/snmtotcsv.php').text
spots[-100:]
This looks like semicolon-separated data, with different records on different lines. Line separators come out as \n, which is the escape sequence corresponding to a newline character in Python.
There are many, many scientific datasets that can now be downloaded like this; integrating the download into your data pipeline can help to keep your data flows organised.
Writing our own parser¶
We'll need a Python library to handle semicolon-separated data like the sunspot data.
You might be thinking: "But I can do that myself!":
lines = spots.split("\n")
lines[0:5]
years = [line.split(";")[0] for line in lines]
years[0:15]
But don't: what if, for example, one of the records contains a separator inside it? Most programs will put such content in quotes, so that, for example,

"Something; something"; something; something

has three fields, the first of which is

Something; something

Our naive code above would, however, not correctly parse this input:
'"Something; something"; something; something'.split(';')
You'll never manage to get all the special cases right yourself, so you're better off using a library to do it.
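For instance, the standard library's csv module already handles quoting correctly; a minimal sketch on the tricky record above:

import csv

# csv.reader understands quoted fields, so the semicolon inside
# the quotes is not treated as a separator
tricky = '"Something; something"; something; something'
next(csv.reader([tricky], delimiter=";", skipinitialspace=True))
# gives ['Something; something', 'something', 'something']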
Writing data to the internet¶
Note that we're using requests.get. get is used to receive data from the web. You can also use post to fill in a web-form programmatically.
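For example, a sketch of submitting a form with post; note that the URL and field names here are invented for illustration:

# Hypothetical example: this URL and these form fields are made up
form_response = requests.post(
    "https://example.com/submit",
    data={"name": "Grace Hopper", "affiliation": "US Navy"},
)
form_response.status_code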
Supplementary material: Learn about using post with Requests.
Supplementary material: Learn about the different kinds of HTTP request: Get, Post, Put, Delete...
This can be used for all kinds of things, for example, to programmatically add data to a web resource. It's all well beyond our scope for this course, but it's important to know it's possible, and start to think about the scientific possibilities.