Shakeel Gavioli-Akilagun, s.a.gavioli-akilagun@lse.ac.uk, Department of Statistics. Office hours: Monday 9:00 - 10:00 AM, COL 8.05.
Please use the LSE Student Hub to book office-hour slots at least two hours in advance. Please note that Shakeel's office hours are dedicated to coding questions, homework problems, and seminar class materials.
To access the course materials, please fill in the form here. Once we have your GitHub account information, we will add you to our team so that you can have access to the course repository.
Please see the Moodle page for detailed information.
Assessment consists of a project assignment (80%) and continuous summative assessment in weeks 3, 5, 8, and 10 (5% each). Students will be expected to produce a total of 10 problem sets (formative and summative).
We will use GitHub Classroom to manage our homework. Please find the instructions here.
Week | Topic | Week | Topic |
---|---|---|---|
1 | Introduction to Data | 7 | Exploratory Data Analysis |
2 | Python and NumPy Data Structures | 8 | Matrix Data Visualization |
3 | Wrangling Data with Pandas | 9 | Model Evaluation |
4 | Creating and Managing Databases | 10 | Dimensionality Reduction |
5 | Collecting Data from the Internet | 11 | Graph Data Visualization |
6 | Reading Week | | |
In this course you will learn to carry out a full data science project cycle, from data acquisition through to reporting with conclusions and visualisations. To do this you will use a range of Python modules: NumPy and Pandas for data cleaning and wrangling, and matplotlib and Seaborn for visualisation. We shall emphasise the use of visualisation to check the integrity of data. Techniques for data acquisition using web scraping and Application Programming Interfaces (APIs) will be introduced, as will the principles behind the use of databases. After Reading Week we will cover more advanced topics concerned with visualising higher-dimensional data sets, including clustering and dimension reduction algorithms from the scikit-learn module. Finally, we will look at using graphs to visualise relationships. Throughout, we shall be using Jupyter and Anaconda Python.
For the final project, you will be expected to find a dataset and produce a report with visualizations and conclusions. The report should demonstrate the use of the techniques taught in the course.
This course is an introduction to the fundamental concepts of data and data visualization and assumes no prior knowledge of these concepts.
The course will involve 20 hours of lectures and 15 hours of computer workshops in the Autumn Term (AT).
No prior experience with programming is required. However, students are advised to complete the Python for Statistics Pre-sessional Course (available on Moodle).
We will use some tools, notably SQLite and Python, in coordination with MY470 (Computer Programming), where their use will be covered more formally. Lectures and assignments will be posted on GitHub, and students are expected to submit problem sets through GitHub as well.
Where appropriate, we will use Jupyter notebooks for lab assignments, demonstrations, and the course notes themselves.
In the first week, we will give an overview of the course. As the course relies fundamentally on GitHub, a collaborative code and data sharing platform, we will introduce the use of git and GitHub, using the lab session to guide students through setting up an account and subscribing to the course organisation and assignments.
Lecture Notes:
Readings:
Further Readings:
Lab: Working with Jupyter notebooks and Git.
First, we shall review fundamental Python data types such as lists and dicts. Then we shall introduce Numerical Python (NumPy), the module on which Pandas is built. NumPy permits fast array-based computation and is the basis for efficient pre-processing and visualisation of data. Many of the built-in NumPy methods can be used in Exploratory Data Analysis (EDA). We will also cover ways to restructure data from “wide” to “long” format within strictly rectangular data structures, along with additional topics concerning text encodings, date formats, and sparse matrix formats.
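As a first taste of this style of computation, here is a minimal sketch using made-up temperature values to illustrate vectorised arithmetic, boolean masking, and reshaping; only standard NumPy calls are used.

```python
import numpy as np

# Made-up temperature readings in Fahrenheit (illustrative values only)
temps_f = np.array([68.0, 71.6, 77.0, 59.0])

# Vectorised arithmetic: the conversion is applied elementwise, no loop needed
temps_c = (temps_f - 32) * 5 / 9

# Built-in methods that come up constantly in EDA
print(temps_c.mean(), temps_c.std(), temps_c.max())

# Boolean masking selects the entries satisfying a condition
print(temps_c[temps_c > 20])

# Reshaping views the same data as a 2 x 2 array
print(temps_c.reshape(2, 2))
```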
Readings:
Further Resources:
Lab: Control flow in Python
This week we shall explore Pandas, one of Python's main data analysis tools. It gives Python a DataFrame similar to that of R, the other main data science language. Pandas can handle heterogeneous data, and so extends the capability of NumPy, which is mostly suited to homogeneous numerical data. Pandas works well with other key Python modules such as scikit-learn (machine learning) and matplotlib. We will also cover common data formats such as JSON (JavaScript Object Notation).
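The following minimal sketch, using approximate illustrative figures, shows the kind of DataFrame operations covered this week, including a round trip through JSON:

```python
from io import StringIO

import pandas as pd

# A small DataFrame with heterogeneous columns (approximate figures)
df = pd.DataFrame({
    "city": ["London", "Paris", "Berlin"],
    "country": ["UK", "France", "Germany"],
    "population_m": [8.8, 2.1, 3.6],
})

# Strings and floats coexist in one table, unlike a NumPy array
print(df.dtypes)

# Boolean filtering and sorting, the bread and butter of wrangling
print(df[df["population_m"] > 3].sort_values("population_m"))

# Round trip through JSON, a format we will meet again when collecting data
json_str = df.to_json(orient="records")
print(pd.read_json(StringIO(json_str)))
```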
Readings:
Lab: More on pandas
We will return to database normalization and how to implement it using good practice in a relational database manager, SQLite. We will cover how to structure data, verify data types, set conditions for data integrity, and perform complex queries to extract data from the database.
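As a small sketch of these ideas, the snippet below builds an in-memory SQLite database with declared types and an integrity constraint, then runs a parameterised query; the table and values are invented purely for illustration.

```python
import sqlite3

# An in-memory database; a file path instead of ":memory:" would persist it
conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Declared types plus a CHECK constraint enforcing data integrity
cur.execute("""
    CREATE TABLE students (
        id INTEGER PRIMARY KEY,
        name TEXT NOT NULL,
        grade REAL CHECK (grade BETWEEN 0 AND 100)
    )
""")
cur.executemany(
    "INSERT INTO students (name, grade) VALUES (?, ?)",
    [("Ada", 92.0), ("Grace", 88.5)],
)
conn.commit()

# Parameterised queries keep values separate from the SQL itself
for row in cur.execute("SELECT name, grade FROM students WHERE grade > ?", (90,)):
    print(row)
conn.close()
```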
Readings:
Lab: Classes in Python
Publicly accessible application programming interfaces (APIs) are a common source of “big” data from a variety of sources, such as social media. This data comes in a variety of types, but is usually transmitted in JSON format. In this session, we will cover the basics of APIs, including authentication and the protocols used to interact with them, and how to process the data obtained by these methods. We will also discuss common problems in working with text, including character encodings, working with Unicode, transforming text into numeric data, and cleaning textual data for analysis. Finally, we will cover basic web scraping, to turn web data into text or numbers.
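As a minimal illustration, the sketch below queries a public, authentication-free endpoint of the GitHub API (chosen here purely as an example; it assumes network access and the third-party requests module) and parses the JSON response:

```python
import requests

# A public endpoint requiring no authentication, used here as an example
url = "https://api.github.com/repos/pandas-dev/pandas"
response = requests.get(url, timeout=10)
response.raise_for_status()  # raise an error for any non-2xx status code

# JSON deserialises into ordinary Python dicts and lists
data = response.json()
print(data["full_name"], data["stargazers_count"])
```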
Readings:
Further Resources:
Lab: More on web scraping and APIs
We will introduce the basic statistical plots that are commonly used in exploratory data analysis. We will first consider standard plots for univariate data analysis, including histograms and empirical distribution functions, as well as plots of summary statistics such as box plots and violin plots. We will then consider different variants of bar plots, which are commonly used to compare parallel batches of data, and scatter plots for exploring correlation patterns in data.
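A short sketch of three of these plot types, using synthetic normally distributed data and plain matplotlib:

```python
import matplotlib.pyplot as plt
import numpy as np

# Synthetic univariate data: 500 draws from a standard normal distribution
rng = np.random.default_rng(42)
data = rng.normal(loc=0, scale=1, size=500)

fig, axes = plt.subplots(1, 3, figsize=(12, 3))
axes[0].hist(data, bins=30)          # distribution shape
axes[0].set_title("Histogram")
axes[1].boxplot(data)                # median, quartiles, outliers
axes[1].set_title("Box plot")
axes[2].violinplot(data)             # box-plot summary plus density
axes[2].set_title("Violin plot")
plt.tight_layout()
plt.show()
```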
Readings:
Lab: Matplotlib primer and basic statistical plots
We will consider how to visualize matrix data, such as covariance and other similarity matrices, as well as adjacency matrices of graphs such as those representing social networks. The key is to choose a suitable ordering of matrix rows and columns so that any clustering structure present becomes visible. We will explain the underlying methods, which are based on the spectral theory of matrices, using the concepts of matrix eigenvectors and clustering based on eigenvectors. In particular, we will explain seriation using the so-called Fiedler eigenvector, and spectral co-clustering, which combines eigenvectors with the k-means clustering method.
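To make the seriation idea concrete, the sketch below builds a synthetic two-block similarity matrix, shuffles it, and recovers the block structure by sorting rows and columns by the Fiedler eigenvector of the graph Laplacian; the construction is illustrative, not the only way to do this.

```python
import matplotlib.pyplot as plt
import numpy as np

# A two-block similarity matrix with symmetric noise, rows and columns shuffled
rng = np.random.default_rng(0)
blocks = np.kron(np.eye(2), np.ones((10, 10)))
noise = 0.2 * rng.random((20, 20))
A = blocks + (noise + noise.T) / 2
perm = rng.permutation(20)
A_shuffled = A[perm][:, perm]

# Fiedler eigenvector: the eigenvector of the graph Laplacian belonging to the
# second-smallest eigenvalue; sorting by its entries recovers the two blocks
L = np.diag(A_shuffled.sum(axis=1)) - A_shuffled
eigvals, eigvecs = np.linalg.eigh(L)   # eigenvalues in ascending order
order = np.argsort(eigvecs[:, 1])

fig, axes = plt.subplots(1, 2, figsize=(8, 4))
axes[0].imshow(A_shuffled)
axes[0].set_title("Shuffled")
axes[1].imshow(A_shuffled[order][:, order])
axes[1].set_title("Seriated by Fiedler eigenvector")
plt.show()
```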
Readings:
Lab: Statistical plots using Matplotlib and Seaborn
This week, we will introduce standard statistical plots for evaluating the performance of statistical models and machine learning algorithms for classification. For binary classifiers, these include the receiver operating characteristic (ROC) and precision-recall (PR) curves. We will learn how to interpret these plots and discuss their advantages and limitations.
We will also discuss various standard metrics for assessing the performance of binary classifiers, such as accuracy, the area under the curve (AUC), and the Gini coefficient, discuss their relation to the ROC curve, and consider their advantages and limitations.
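A minimal sketch, using a synthetic dataset and scikit-learn, of computing the AUC (and the Gini coefficient via the identity Gini = 2·AUC − 1) and plotting a ROC curve:

```python
import matplotlib.pyplot as plt
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import RocCurveDisplay, roc_auc_score
from sklearn.model_selection import train_test_split

# A synthetic binary classification problem, purely for illustration
X, y = make_classification(n_samples=1000, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
scores = clf.predict_proba(X_test)[:, 1]  # predicted probability of class 1

auc = roc_auc_score(y_test, scores)
print(f"AUC = {auc:.3f}, Gini = {2 * auc - 1:.3f}")  # Gini = 2 * AUC - 1

RocCurveDisplay.from_predictions(y_test, scores)
plt.show()
```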
Readings:
Lab: Evaluating classifiers using sklearn.metrics
We will consider how to visualize hidden structures in high-dimensional data, such as hidden clusters or embedded low-dimensional manifolds, by using dimensionality reduction methods. We will explain the underlying principles of methods such as multidimensional scaling, locally linear embedding, isomap, spectral embedding, and stochastic neighbor embedding, and see how geometry, linear algebra, and optimisation give rise to the different methods.
Our focus will be on the dimensionality reduction methods that are commonly used in practice and widely available through software libraries such as sklearn.manifold. We will also consider modern tools for visualizing different dimensionality reductions, such as Google's Embedding Projector.
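As an illustrative sketch, the snippet below embeds the 64-dimensional digits dataset into two dimensions with t-SNE from sklearn.manifold; the parameter choices are defaults, not recommendations.

```python
import matplotlib.pyplot as plt
from sklearn.datasets import load_digits
from sklearn.manifold import TSNE

# The digits dataset: 1797 handwritten digits as 64-dimensional pixel vectors
digits = load_digits()

# t-SNE maps the data to 2-D while trying to preserve local neighbourhoods
embedding = TSNE(n_components=2, random_state=0).fit_transform(digits.data)

plt.scatter(embedding[:, 0], embedding[:, 1], c=digits.target, cmap="tab10", s=10)
plt.colorbar(label="digit")
plt.title("t-SNE embedding of the digits data")
plt.show()
```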
Readings:
Lab: Dimensionality reduction using sklearn.manifold