
ST445 Managing and Visualizing Data

Michaelmas Term 2023

Instructors

Teachers/GTAs

Course Information

Assessment

Project assignment (80%) and continuous assessment in weeks 3, 5, 8, and 10 (5% each). Students will be expected to produce 10 problem sets in the Autumn Term (AT).

We will use GitHub Classroom to manage our homework. Please find the instructions here.

Week 1: Introduction to Data
Week 2: Python and NumPy Data Structures
Week 3: Wrangling Data with Pandas
Week 4: Creating and Managing Databases
Week 5: Collecting Data from the Internet
Week 6: Reading Week
Week 7: Exploratory data analysis
Week 8: Matrix data visualization
Week 9: Model evaluation
Week 10: Dimensionality reduction
Week 11: Graph data visualization

Course Description

In this course you will learn to carry out a full data science project cycle, going from data acquisition to reporting with conclusions and visualisations. To do this you will use a range of Python modules: NumPy and Pandas for data cleaning and wrangling, and matplotlib and Seaborn for visualisation. We shall emphasise the use of visualisation to check the integrity of data. Techniques for data acquisition using web scraping and Application Programming Interfaces (APIs) will be introduced, as will the principles behind the use of databases. After Reading Week we will cover some more advanced topics concerned with visualising higher-dimensional data sets. These techniques include the use of clustering and dimensionality reduction algorithms from the scikit-learn module. Finally, we will look at using graphs to visualise relationships. Throughout, we shall be using Jupyter and Anaconda Python.

For the final project, you will be expected to find a dataset in order to produce a report with visualizations and conclusions. The report should demonstrate the use of the techniques taught in the course.

Organization

This course is an introduction to the fundamental concepts of data and data visualization and assumes no prior knowledge of these concepts.

The course will involve 20 hours of lectures and 15 hours of computer workshops in the AT.

Prerequisites

No prior experience with programming is required.

Software

We will use some tools, notably SQLite and Python, but these will be used in coordination with MY470 (Computer Programming), where their use will be covered more formally. Lectures and assignments will be posted on GitHub, and students are expected to use GitHub to submit problem sets as well.

Where appropriate, we will use Jupyter notebooks for lab assignments, demonstrations, and the course notes themselves.

Schedule


Week 1. Introduction to Data

In the first week, we will introduce the basic concepts of the course, including how data is recorded, stored, and shared. Because the course relies fundamentally on GitHub, a collaborative code and data sharing platform, we will introduce the use of git and GitHub, using the lab session to guide students through setting up an account and subscribing to the course organisation and assignments.

This week will also introduce basic data types, from machine-level implementations through to high-level programming languages. A short historical perspective on data science will be given. Issues concerning data integrity will be discussed. The process flow of capturing, wrangling, exploring, and visualising data will be emphasised. We will introduce the notion of databases and database managers.
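As a first taste, here is a minimal sketch (assuming only a standard Python 3 installation) of some of the basic data types we will meet, showing how Python reports their high-level types and hints at their underlying memory cost:

```python
import sys

# Basic built-in data types that Python provides on top of the
# machine-level representations discussed in the lecture.
examples = {
    "integer": 42,
    "float": 3.14159,        # IEEE 754 double precision under the hood
    "boolean": True,
    "text": "LSE ST445",     # Unicode string
    "list": [1, 2, 3],       # mutable, ordered collection
    "dict": {"week": 1},     # key-value mapping
}

for name, value in examples.items():
    # type() reveals the high-level type; sys.getsizeof() hints at
    # the memory cost of the underlying implementation.
    print(f"{name:8s} {type(value).__name__:6s} {sys.getsizeof(value)} bytes")
```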

Lecture Notes:

Readings:

Further Readings:

Lab: Working with Jupyter notebooks and GitHub.


Week 2. Python and NumPy Data Structures

First, we shall review fundamental Python data types such as lists and dicts. Then we shall introduce Numerical Python (NumPy), the module on which Pandas is built. NumPy permits fast array-based computation and is the basis for efficient pre-processing and visualisation of data. Many of the built-in NumPy methods can be used in Exploratory Data Analysis (EDA). We will also cover ways to restructure data from “wide” to “long” format within strictly rectangular data structures. Additional topics concerning text encoding, date formats, and sparse matrix formats are also covered.
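The sketch below illustrates what this looks like in practice; the data are randomly generated purely for illustration. It builds a small “wide” array, applies vectorised operations without explicit Python loops, and reshapes the array to a “long” format:

```python
import numpy as np

# A small "wide" data set: one row per unit, one column per quarter.
rng = np.random.default_rng(seed=0)          # reproducible random numbers
wide = rng.normal(loc=10.0, scale=2.0, size=(3, 4))

# Vectorised operations avoid explicit Python loops.
row_means = wide.mean(axis=1)                # mean over quarters, per row
standardised = (wide - row_means[:, None]) / wide.std(axis=1)[:, None]
print(standardised.round(2))

# Reshaping from "wide" (3 x 4) to "long" (12,) format.
long = wide.reshape(-1)
print(wide.shape, "->", long.shape)          # (3, 4) -> (12,)
```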

Readings:

Further Resources:

Lab: Control flow in Python.


Week 3. Wrangling Data with Pandas

This week we shall explore Pandas, one of Python’s main data analysis tools. It gives Python a DataFrame similar to that of R, the other main data science language. Pandas can handle heterogeneous data, and so it extends the capability of NumPy, which is mostly suited to homogeneous numerical data. Pandas works well with other key Python modules such as scikit-learn (machine learning) and matplotlib. We will also cover common data formats such as JSON (JavaScript Object Notation).
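As a minimal sketch (the JSON string and its records are invented for illustration), the example below shows the typical round trip from JSON to a heterogeneous DataFrame and a simple query on it:

```python
import json
import pandas as pd

# Pandas DataFrames hold heterogeneous columns, unlike NumPy arrays.
raw = ('[{"name": "Ada", "year": 1843, "field": "computing"},'
       ' {"name": "Gauss", "year": 1809, "field": "statistics"}]')
records = json.loads(raw)                 # JSON -> list of dicts
df = pd.DataFrame(records)

print(df.dtypes)                          # mixed types: object and int64
print(df[df["year"] > 1840])              # boolean indexing, as in NumPy
```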

Readings:

Lab: More on pandas


Week 4. Creating and Managing Databases

We will return to database normalization and how to implement it, following good practice, in a relational database manager, SQLite. We will cover how to structure data, verify data types, set conditions for data integrity, and perform complex queries to extract data from the database.
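Below is a minimal sketch using Python’s built-in sqlite3 module; the table names and rows are invented for illustration. It creates two related tables, sets integrity constraints, and runs a join query:

```python
import sqlite3

# In-memory database; in practice you would pass a file name instead.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("PRAGMA foreign_keys = ON")   # SQLite enforces FKs only with this on

# Constraints such as NOT NULL and FOREIGN KEY guard data integrity.
cur.execute("""CREATE TABLE course (
                   id    INTEGER PRIMARY KEY,
                   name  TEXT NOT NULL)""")
cur.execute("""CREATE TABLE enrolment (
                   student   TEXT NOT NULL,
                   course_id INTEGER NOT NULL,
                   FOREIGN KEY (course_id) REFERENCES course(id))""")

cur.execute("INSERT INTO course (id, name) VALUES (?, ?)", (445, "ST445"))
cur.execute("INSERT INTO enrolment VALUES (?, ?)", ("candidate_1", 445))
conn.commit()

# A join query extracting data from both tables.
for row in cur.execute("""SELECT e.student, c.name
                          FROM enrolment e JOIN course c
                          ON e.course_id = c.id"""):
    print(row)
conn.close()
```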

Readings:

Lab: Classes in Python


Week 5. Collecting Data from the Internet

Publicly accessible application programming interfaces (APIs) provide a common source of “big” data from a variety of sources, such as social media platforms. These data come in a variety of types but are usually transmitted in JSON format. In this session, we will cover the basics of APIs, including authentication and the protocols for interacting with them, and the processing of the data obtained by these methods. We will also discuss common problems in using text, including character encodings, working with Unicode, transforming text into numeric data, and cleaning textual data for analysis. We will also cover basic web scraping, to turn web data into text or numbers.
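As a minimal sketch of the request/response cycle, the example below queries GitHub’s public REST API (chosen here purely as an example endpoint; this particular call needs no authentication, though unauthenticated requests are rate-limited) and parses the JSON payload:

```python
import requests  # third-party; pip install requests

# Fetch metadata about a public repository from GitHub's REST API.
url = "https://api.github.com/repos/pandas-dev/pandas"
response = requests.get(url, timeout=10)
response.raise_for_status()          # fail loudly on HTTP errors

data = response.json()               # JSON payload -> Python dict
print(data["full_name"], data["stargazers_count"])
```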

Readings:

Further Resources:

Lab: More on web scraping and APIs


Week 6. Reading Week


Week 7. Exploratory data analysis

We will introduce the basic statistical plots that are commonly used in exploratory data analysis. We will first consider standard plots for univariate data analysis, including histograms and empirical distribution functions, as well as plots of summary statistics such as box plots and violin plots. We will then consider different variants of bar plots, which are commonly used for comparing parallel batches of data, as well as scatter plots for exploring correlation patterns in data.
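The sketch below produces three of these standard plots with matplotlib; the batches of data are synthetic normal samples, used purely for illustration:

```python
import numpy as np
import matplotlib.pyplot as plt

# Three parallel batches of synthetic data with shifted means.
rng = np.random.default_rng(seed=1)
batches = [rng.normal(loc=m, scale=1.0, size=200) for m in (0, 1, 2)]

fig, axes = plt.subplots(1, 3, figsize=(12, 3))
axes[0].hist(batches[0], bins=20)            # univariate distribution
axes[0].set_title("Histogram")
axes[1].boxplot(batches)                     # summary statistics per batch
axes[1].set_title("Box plots")
axes[2].scatter(batches[0], batches[1])      # correlation pattern
axes[2].set_title("Scatter plot")
plt.tight_layout()
plt.show()
```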

Readings:

Lab: Matplotlib primer and basic statistical plots


Week 8. Matrix data visualization

We will consider how to visualize matrix data, such as covariance and other similarity matrices, and the adjacency matrices of graphs such as those representing social networks. The key here is to use a suitable ordering of matrix rows and columns to reveal any clustering structure present in the data. We will explain the underlying methods, which are based on the spectral theory of matrices, using the concepts of matrix eigenvectors and clustering based on them. In particular, we will explain seriation using the so-called Fiedler eigenvector, and spectral co-clustering, which combines eigenvectors with the k-means clustering method.
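Below is a minimal sketch of seriation with the Fiedler eigenvector, assuming a synthetic block-structured similarity matrix whose rows and columns have been shuffled; sorting by the eigenvector of the second-smallest eigenvalue of the graph Laplacian recovers the hidden block ordering:

```python
import numpy as np
import matplotlib.pyplot as plt

# A noisy two-block similarity matrix, then shuffled so the
# clustering structure is hidden from the eye.
rng = np.random.default_rng(seed=2)
block = np.kron(np.eye(2), np.ones((10, 10)))        # two 10x10 blocks
A = block + 0.3 * rng.random((20, 20))
A = (A + A.T) / 2                                    # keep it symmetric
perm = rng.permutation(20)
A_shuffled = A[np.ix_(perm, perm)]

# Graph Laplacian L = D - A; its second-smallest eigenvector
# (the Fiedler eigenvector) gives the seriation order.
L = np.diag(A_shuffled.sum(axis=1)) - A_shuffled
eigvals, eigvecs = np.linalg.eigh(L)                 # ascending eigenvalues
order = np.argsort(eigvecs[:, 1])                    # Fiedler eigenvector

fig, axes = plt.subplots(1, 2, figsize=(8, 4))
axes[0].imshow(A_shuffled)
axes[0].set_title("Shuffled")
axes[1].imshow(A_shuffled[np.ix_(order, order)])
axes[1].set_title("Reordered by Fiedler eigenvector")
plt.show()
```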

Readings:

Lab: Statistical plots using Matplotlib and Seaborn


Week 9. Model evaluation

In this week, we will introduce standard statistical plots for evaluating the performance of statistical models and machine learning algorithms for classification. In particular, we will introduce plots for assessing the performance of binary classifiers, such as receiver operating characteristic (ROC) and precision-recall (PR) curves. We will learn how to interpret these plots and discuss their advantages and limitations.

We will also discuss various standard metrics used for assessing the performance of binary classifiers, such as accuracy, the area under the curve (AUC), and the Gini coefficient, and discuss their relation to the ROC curve as well as their advantages and limitations.
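The sketch below plots a ROC curve and computes the AUC, and from it the Gini coefficient, with sklearn.metrics; the dataset is synthetic and logistic regression is an arbitrary illustrative choice of classifier:

```python
import matplotlib.pyplot as plt
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_curve, roc_auc_score

# A synthetic binary classification problem with a held-out test set.
X, y = make_classification(n_samples=1000, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Fit a simple classifier and score the held-out data.
clf = LogisticRegression().fit(X_train, y_train)
scores = clf.predict_proba(X_test)[:, 1]     # probability of the positive class

fpr, tpr, _ = roc_curve(y_test, scores)
auc = roc_auc_score(y_test, scores)
gini = 2 * auc - 1                           # Gini coefficient from the AUC

plt.plot(fpr, tpr, label=f"AUC = {auc:.2f}, Gini = {gini:.2f}")
plt.plot([0, 1], [0, 1], linestyle="--")     # chance level
plt.xlabel("False positive rate")
plt.ylabel("True positive rate")
plt.legend()
plt.show()
```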

Readings:

Lab: Evaluating classifiers using sklearn.metrics


Week 10. Dimensionality reduction

We will consider how to visualize hidden structures in high-dimensional data, such as hidden clusters or embedded low-dimensional manifolds, by using dimensionality reduction methods. We will explain the underlying principles of methods such as multidimensional scaling, locally linear embedding, isomap, spectral embedding, and stochastic neighbor embedding. We will see how geometry, linear algebra, and optimisation give rise to different dimensionality reduction methods.

Our focus will be on the dimensionality reduction methods that are commonly used in practice and widely available through software libraries such as sklearn.manifold. We will also consider modern tools for visualizing different dimensionality reductions, such as Google’s Embedding Projector.
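As a sketch, the example below embeds the classic 64-dimensional handwritten-digits dataset into two dimensions with t-SNE from sklearn.manifold; the choice of method and parameters here is illustrative rather than prescriptive:

```python
import matplotlib.pyplot as plt
from sklearn.datasets import load_digits
from sklearn.manifold import TSNE

# Project the 64-dimensional digits data down to 2 dimensions.
digits = load_digits()
embedding = TSNE(n_components=2, random_state=0).fit_transform(digits.data)

# Points from the same digit class should form visible clusters.
plt.scatter(embedding[:, 0], embedding[:, 1],
            c=digits.target, cmap="tab10", s=5)
plt.colorbar(label="digit")
plt.show()
```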

Readings:

Lab: Dimensionality reduction using sklearn.manifold


Week 11. Graph data visualization

In the last week, we will consider basic methods for the visualization of graph data, such as visualizing social network relationships. We will consider different graph layouts and the principles by which they are computed. This will involve methods based on simple principles for drawing graphs that have a tree structure, as well as more sophisticated methods for general graphs based on spectral theory and dynamical systems.
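The sketch below draws Zachary’s karate club network, a standard example social network shipped with NetworkX, under two of the layouts discussed: a force-directed spring layout (a dynamical-systems method) and a spectral layout (computed from Laplacian eigenvectors):

```python
import matplotlib.pyplot as plt
import networkx as nx

# Zachary's karate club: a classic small social network.
G = nx.karate_club_graph()

fig, axes = plt.subplots(1, 2, figsize=(10, 4))
# A force-directed (spring) layout, computed by simulating a dynamical system.
nx.draw(G, pos=nx.spring_layout(G, seed=3), ax=axes[0], node_size=50)
axes[0].set_title("Spring layout")
# A spectral layout, computed from eigenvectors of the graph Laplacian.
nx.draw(G, pos=nx.spectral_layout(G), ax=axes[1], node_size=50)
axes[1].set_title("Spectral layout")
plt.show()
```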

Readings:

Lab: Graph drawing using NetworkX