lse-st445.github.io

LSE

ST445 Managing and Visualizing Data

Michaelmas Term 2017

Instructors

Teaching Assistant

Course Information

No lectures or classes will take place during School Reading Week 6.

Week 1: Introduction to Data
Week 2: The shape of data
Week 3: Creating and managing databases
Week 4: Using data from the Internet
Week 5: Working with APIs
Week 6: Reading Week
Week 7: Exploratory data analysis
Week 8: Exploratory data analysis (cont'd)
Week 9: Model evaluation
Week 10: Dimensionality reduction
Week 11: Graph data visualization

Course Description

This course will cover the principles of digital methods for storing and structuring data, including data types, relational and non-relational database design, and query languages. Students will learn to build, populate, manipulate and query databases based on datasets relevant to their fields of interest. The course will also cover workflow management for typical data transformation and cleaning projects, frequently the starting point and most time-consuming part of any data science project. This course uses a project-based learning approach towards the study of online publishing and group-based collaboration, essential ingredients of modern data science projects. The coverage of data sharing will include key skills in online publishing, including the elements of web design, the technical elements of web technologies and web programming, as well as the use of revision-control and group collaboration tools such as GitHub. Each student will build one or more interactive websites based on content relevant to his/her domain-related interests, and will use GitHub for accessing and submitting course materials and assignments.

A core objective of this course is to provide students with a well-rounded sense of “data science literacy”, meaning you will become familiar with the core structures, terms, protocols, and software that form the core material of data science and applied computing. This is a broad category, covering abstract concepts such as database normal forms and complex data structures, as well as a range of simple tools and formats such as markup languages, web publishing, and working with APIs (application programming interfaces). In the second half of the course, we will focus on communicating results visually by turning data into plots and other visualizations.

On the theory side, we introduce principles and applications of the electronic storage, structuring, manipulation, transformation, extraction, and dissemination of data. This includes data types, database design, database implementation, and data analysis through structured queries. Through joining operations, we will also cover the challenges of data linkage and how to combine datasets from different sources. We begin by discussing fundamental data types and how data is stored and recorded electronically. We will cover database design, especially relational databases, using substantive examples across a variety of fields. Students are introduced to SQL through SQLite, and programming assignments in this unit of the course will be designed to ensure that students learn to create, populate and query an SQL database. We will briefly compare relational databases to other types of database manager, the “NoSQL” types such as MongoDB, including the JSON data format. Students will be encouraged to work with data relevant to their own interests as they learn to create, populate and query data.
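
As a minimal sketch of the create-populate-query workflow described above, using SQLite through Python's built-in sqlite3 module (the table and column names are invented for illustration and are not part of the course materials):

```python
import sqlite3

# Create an in-memory SQLite database (a file path would persist it to disk)
conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Define a small relational schema: students and the courses they take
cur.execute("CREATE TABLE student (id INTEGER PRIMARY KEY, name TEXT, dept TEXT)")
cur.execute("CREATE TABLE enrolment (student_id INTEGER, course TEXT, "
            "FOREIGN KEY (student_id) REFERENCES student(id))")

# Populate the tables
cur.executemany("INSERT INTO student VALUES (?, ?, ?)",
                [(1, "Ada", "Statistics"), (2, "Grace", "Methodology")])
cur.executemany("INSERT INTO enrolment VALUES (?, ?)",
                [(1, "ST445"), (2, "ST445"), (2, "MY470")])

# Query with a join: which students take ST445?
cur.execute("""SELECT s.name, s.dept FROM student s
               JOIN enrolment e ON e.student_id = s.id
               WHERE e.course = 'ST445'""")
print(cur.fetchall())  # e.g. [('Ada', 'Statistics'), ('Grace', 'Methodology')]
conn.close()
```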

On the practical side, we will cover a variety of tools with which every data scientist should be familiar, including revision control tools, web publishing formats, tools and commands for reshaping and recasting data, how to work with different data formats, how to merge and link data, and how to publish a website.

In the data visualisation part of the course, we will cover a variety of principles, tools, and methods for visualizing data.

For the final project, we will provide you with a dataset, which you will be expected to transform in order to produce visualizations.

Organization

This course is an introduction to the fundamental concepts of data and data visualization; it assumes no prior knowledge of these concepts.

The course will involve 20 hours of lectures and 15 hours of computer workshops in the MT.

Prerequisites

No prior experience with programming is required.

Software

We will use some tools, notably SQLite, R, and Python, but these will be used in coordination with MY470 (Computer Programming), where their use will be covered more formally. Lectures and assignments will be posted on GitHub, and students are also expected to use GitHub to submit problem sets and the final exam.

Where appropriate, we will use Jupyter notebooks for lab assignments, demonstrations, and the course notes themselves.

Assessment

Project assignment (60%) and continuous assessment in weeks 3, 6, 8, 10 (10% each). Students will be expected to produce 10 problem sets in the MT.

Schedule


Week 1. Introduction to Data

In the first week, we will introduce the basic concepts of the course, including how data is recorded, stored, and shared. Because the course relies fundamentally on GitHub, a collaborative code and data sharing platform, we will introduce the use of git and GitHub, using the lab session to guide students through setting up an account and subscribing to the course organisation and assignments.

This week will also introduce basic data types, in a language-agnostic manner, from the perspective of machine implementations through to high-level programming languages. We will introduce the notion of databases and database managers, and the client-server model.
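
To make the language-agnostic discussion concrete, here is a small illustrative sketch in Python of the gap between high-level types and machine-level representations; the specific values are arbitrary and not part of the course notes:

```python
import struct
import sys

# High-level view: the language attaches a type to every value
values = [42, 3.14, "42", True]
for v in values:
    print(repr(v), type(v).__name__, sys.getsizeof(v), "bytes as a Python object")

# Machine-level view: the same number packed into fixed-width binary layouts
print(struct.pack("<i", 42))   # 4-byte little-endian signed integer
print(struct.pack("<d", 42))   # 8-byte IEEE 754 double
```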

Lecture Notes:

Readings:

Further Readings:

Lab: Working with git and GitHub.


Week 2. The shape of data

This week moves beyond the rectangular format common in statistical datasets, modeled on a spreadsheet, to cover relational structures and the concept of database normalization. We will also cover ways to restructure data from “wide” to “long” format, within strictly rectangular data structures. Additional topics concerning text encoding, date formats, and sparse matrix formats are also covered.
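
The lab this week works in R, but as a rough sketch of the wide-to-long reshaping idea, the following Python/pandas version uses a made-up table of GDP figures:

```python
import pandas as pd

# A "wide" table: one row per country, one column per year (invented numbers)
wide = pd.DataFrame({
    "country": ["UK", "France"],
    "gdp_2015": [2.9, 2.4],
    "gdp_2016": [2.7, 2.5],
})

# Reshape to "long" format: one row per country-year observation
long = wide.melt(id_vars="country", var_name="year", value_name="gdp")
long["year"] = long["year"].str.replace("gdp_", "").astype(int)
print(long)

# And back to wide with a pivot
print(long.pivot(index="country", columns="year", values="gdp"))
```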

Readings:

Further Resources:

Lecture Notes:

Lab: Reshaping data in R
See also:


Week 3. Creating and managing databases

We will return to database normalization, and how to implement this using good practice in a relational database manager, SQLite. We will cover how to structure data, verify data types, set conditions for data integrity, and perform complex queries to extract data from the database. We will also cover authentication and how to connect to local and remote databases. Finally, for a comparison, we will show a different (non-relational) database model through MongoDB, contrasting this to the relational paradigm.
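
As an illustrative sketch (not the lab code) of how data types and integrity conditions can be declared and enforced in SQLite, with invented table and column names:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite enforces foreign keys only if asked

# Declare column types and integrity conditions as part of the schema
conn.execute("""CREATE TABLE country (
                    code TEXT PRIMARY KEY,
                    name TEXT NOT NULL UNIQUE)""")
conn.execute("""CREATE TABLE measurement (
                    country_code TEXT NOT NULL REFERENCES country(code),
                    year INTEGER CHECK (year BETWEEN 1900 AND 2100),
                    value REAL)""")

conn.execute("INSERT INTO country VALUES ('GB', 'United Kingdom')")
conn.execute("INSERT INTO measurement VALUES ('GB', 2016, 41.5)")

# A row that violates referential integrity is rejected by the database itself
try:
    conn.execute("INSERT INTO measurement VALUES ('XX', 2016, 1.0)")
except sqlite3.IntegrityError as e:
    print("rejected:", e)
```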

Readings:

Further Resources:

Lecture Notes:

Lab: Working with a relational database manager


Week 4. Using data from the Internet

This week covers markup languages, cascading style sheets (CSS), and web protocols for publishing and transmitting data. Continuing from the material covered in the first-week lab session, we will look at markup languages including HTML, XML, and Markdown, as well as common data formats such as JSON (JavaScript Object Notation). We will cover basic web scraping, to turn web data into text or numbers. We will also cover the client-server model, and how machines and humans transmit data over networks and to and from databases.
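
A minimal scraping sketch along these lines, assuming the third-party requests and beautifulsoup4 packages and using a placeholder URL:

```python
import requests
from bs4 import BeautifulSoup

# Placeholder URL; substitute any page you are permitted to scrape
url = "https://example.com/"
html = requests.get(url, timeout=10).text

# Parse the HTML markup into a tree and pull out structured pieces
soup = BeautifulSoup(html, "html.parser")
title = soup.title.get_text()
links = [(a.get_text(strip=True), a.get("href")) for a in soup.find_all("a")]

print(title)
print(links[:5])  # first few (text, href) pairs found on the page
```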

Readings:

Further Resources:

Lecture Notes:

Lab: Scraping data from the web


Week 5. Working with APIs

Publicly accessible application programming interfaces (APIs) provide a common source of “big” data available from a variety of sources, such as social media data. This data consists of a variety of data types, but is usually transmitted in JSON format. In this session, we will cover the basics of APIs, including authentication, the use of protocols for interacting with APIs, and the processing of data obtained using these methods. We will also discuss common problems in using text, including character encodings, working with Unicode, transforming text into numeric data, and cleaning textual data for analysis.
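
As a small illustration of requesting JSON from a public API (here the GitHub REST API, chosen only because it needs no authentication for low-volume requests; the field names shown are those returned by that API at the time of writing):

```python
import requests

# A JSON-over-HTTP request; authenticated requests (a token in the headers)
# would raise the rate limit, which matters for larger collections
resp = requests.get("https://api.github.com/users/octocat", timeout=10)
resp.raise_for_status()

data = resp.json()          # the JSON body parsed into Python dicts and lists
print(type(data))           # <class 'dict'>
print(data["login"], data["public_repos"])
```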

Readings:

Further Resources:

Lecture Notes:

Lab: Working with social media data: Twitter


Week 6. Reading Week


Week 7. Exploratory data analysis

We will introduce the basic statistical plots that are commonly used in exploratory data analysis. We will first consider standard plots for univariate data analysis, including histograms and empirical distribution functions, as well as plots of summary statistics such as boxplots. We will then consider different variants of bar plots, which are commonly used for comparing parallel batches of data.
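
A rough sketch of these univariate plots using Matplotlib with simulated data (the numbers and styling are arbitrary):

```python
import matplotlib.pyplot as plt
import numpy as np

rng = np.random.default_rng(445)
x = rng.normal(loc=0, scale=1, size=500)   # simulated univariate sample

fig, axes = plt.subplots(1, 3, figsize=(12, 3))

axes[0].hist(x, bins=30)                   # histogram of the sample
axes[0].set_title("Histogram")

axes[1].boxplot([x, x + 1])                # boxplots of two parallel batches
axes[1].set_title("Boxplots")

axes[2].bar(["a", "b", "c"], [3, 7, 5])    # bar plot of category counts
axes[2].set_title("Bar plot")

plt.tight_layout()
plt.show()
```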

Readings:

Lab: Matplotlib primer and basic statistical plots


Week 8. Exploratory data analysis (cont’d)

We will continue our consideration of data visualizations for exploratory data analysis by examining various other statistical plots, primarily focusing on multivariate data analysis and time series data. We will consider the use of scatter plots and heatmaps.
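
A sketch of a scatter plot and a correlation heatmap, assuming Matplotlib and Seaborn and using simulated data:

```python
import matplotlib.pyplot as plt
import numpy as np
import seaborn as sns

rng = np.random.default_rng(445)
x = rng.normal(size=200)
y = 0.6 * x + rng.normal(scale=0.5, size=200)   # two correlated variables

fig, axes = plt.subplots(1, 2, figsize=(9, 4))

# Scatter plot for the joint distribution of two variables
axes[0].scatter(x, y, s=10)
axes[0].set(xlabel="x", ylabel="y", title="Scatter plot")

# Heatmap of the correlation matrix across several variables
data = np.column_stack([x, y, rng.normal(size=200)])
corr = np.corrcoef(data, rowvar=False)
sns.heatmap(corr, annot=True, ax=axes[1])
axes[1].set_title("Correlation heatmap")

plt.tight_layout()
plt.show()
```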

Readings:

Lab: Statistical plots using Matplotlib and Seaborn


Week 9. Model evaluation

In this week, we will introduce standard statistical plots for the performance evaluation of statistical models and machine learning algorithms for classification. We will introduce standard statistical plots for assessing the performance of binary classifiers, such as receiver operating characteristic (ROC) and precision-recall (PR) curves. We will learn how to interpret these plots and discuss their advantages and limitations.

We will also discuss various standard metrics used for assessing the performance of binary classifiers, such as accuracy, area under the curve (AUC), and the Gini coefficient, and discuss their relation to the ROC curve as well as their advantages and limitations.
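
A hedged sketch of how these curves and metrics can be computed with sklearn.metrics on a simulated classification problem (the classifier and data here are placeholders, not the lab's):

```python
import matplotlib.pyplot as plt
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import (accuracy_score, precision_recall_curve,
                             roc_auc_score, roc_curve)
from sklearn.model_selection import train_test_split

# Simulated binary classification problem and a simple classifier
X, y = make_classification(n_samples=1000, n_features=10, random_state=445)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=445)
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
scores = clf.predict_proba(X_test)[:, 1]   # predicted probability of class 1

# ROC and precision-recall curves from the predicted scores
fpr, tpr, _ = roc_curve(y_test, scores)
precision, recall, _ = precision_recall_curve(y_test, scores)

print("accuracy:", accuracy_score(y_test, clf.predict(X_test)))
print("AUC:", roc_auc_score(y_test, scores))
print("Gini:", 2 * roc_auc_score(y_test, scores) - 1)  # Gini = 2 * AUC - 1

fig, axes = plt.subplots(1, 2, figsize=(9, 4))
axes[0].plot(fpr, tpr)
axes[0].set(xlabel="False positive rate", ylabel="True positive rate", title="ROC curve")
axes[1].plot(recall, precision)
axes[1].set(xlabel="Recall", ylabel="Precision", title="PR curve")
plt.tight_layout()
plt.show()
```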

Readings:

Lab: Evaluating classifiers using sklearn.metrics


Week 10. Dimensionality reduction

We will consider how to visualize hidden structures in high-dimensional data, such as hidden clusters or low-dimensional manifolds, by using dimensionality reduction methods. We will explain the underlying principles of dimensionality reduction methods such as multidimensional scaling, locally linear embedding, Isomap, spectral embedding, and stochastic neighbor embedding. We will see how geometry, linear algebra and optimisation methods give rise to different dimensionality reduction methods.

Our focus will be on the dimensionality reduction methods that are commonly used in practice and widely available through software libraries such as sklearn.manifold. We will also consider modern applications for visualizing different dimensionality reductions, such as the Google Embedding Projector.
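
An illustrative sketch of two such embeddings from sklearn.manifold applied to the digits data bundled with scikit-learn (the settings are arbitrary defaults, not course prescriptions):

```python
import matplotlib.pyplot as plt
from sklearn.datasets import load_digits
from sklearn.manifold import MDS, TSNE

# High-dimensional data: 8x8 digit images, i.e. 64 features per observation
digits = load_digits()
X, y = digits.data[:500], digits.target[:500]   # subset to keep this quick

# Two of the embeddings covered this week: multidimensional scaling and t-SNE
embeddings = {
    "MDS": MDS(n_components=2, random_state=445),
    "t-SNE": TSNE(n_components=2, random_state=445),
}

fig, axes = plt.subplots(1, 2, figsize=(10, 4))
for ax, (name, model) in zip(axes, embeddings.items()):
    Z = model.fit_transform(X)          # project to two dimensions
    ax.scatter(Z[:, 0], Z[:, 1], c=y, s=5, cmap="tab10")
    ax.set_title(name)

plt.tight_layout()
plt.show()
```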

Readings:

Lab: Dimensionality reduction using sklearn.manifold


Week 11. Graph data visualization

In the last week, we will consider basic methods for the visualization of graph data, such as visualizing social network relationships. We will consider different graph layouts and the principles behind how they are computed. This will range from simple principles for drawing graphs that have a tree structure to more sophisticated methods for general graphs based on spectral graph theory and dynamical systems.
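
A small sketch of computing and comparing two layouts with NetworkX, using the karate club graph bundled with the library as a stand-in social network:

```python
import matplotlib.pyplot as plt
import networkx as nx

# A small social network (Zachary's karate club, bundled with NetworkX)
G = nx.karate_club_graph()

# Two layouts computed on different principles: force-directed and spectral
layouts = {
    "spring (force-directed)": nx.spring_layout(G, seed=445),
    "spectral": nx.spectral_layout(G),
}

fig, axes = plt.subplots(1, 2, figsize=(10, 4))
for ax, (name, pos) in zip(axes, layouts.items()):
    nx.draw(G, pos=pos, ax=ax, node_size=50, with_labels=False)
    ax.set_title(name)

plt.tight_layout()
plt.show()
```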

Readings:

Lab: Graph drawing using NetworkX