DSPL Tools

DSPL Tools is a small suite of command-line utilities designed to help generate, organize, and validate DSPL datasets. The suite currently includes the following components:

  • DSPL Check: Checks a dataset against a variety of criteria, including adherence to the official DSPL schema, consistency of internal references, and CSV layout.
  • DSPL Gen: Generates a simple DSPL dataset “template” from an input CSV file.

This software is released under a BSD license; the full source code is available for browsing and download on the DSPL open source site. Release notes are provided in the DSPL Tools README file.


DSPL Developer Guide

DSPL stands for Dataset Publishing Language. It is a representation format for both the metadata (information about the dataset, such as its name and provider, as well as the concepts it contains and displays) and actual data of datasets. Datasets described in this format can be imported into the Google Public Data Explorer, a tool that allows for rich, visual exploration of the data.

Note: To upload data to Google Public Data using the Public Data upload tool, you must have a Google Account.

This document is intended for data owners who want their content to be available in the Public Data Explorer. It goes beyond the Tutorial by diving deeper into the details of the DSPL schema and supported features. Only a basic familiarity with XML is assumed, although knowledge of relational databases is also useful.

Although not a requirement, we suggest reading through the Tutorial, which is shorter and easier to digest, before looking at this document.

dplyr

dplyr is a new package which provides a set of tools for efficiently manipulating datasets in R. dplyr is the next iteration of plyr, focussing only on data frames. dplyr is faster, has a more consistent API and should be easier to use. There are three key ideas that underlie dplyr:

  1. Your time is important, so Romain Francois has written the key pieces in Rcpp to provide blazing fast performance. Performance will only get better over time, especially once we figure out the best way to make the most of multiple processors.
  2. Tabular data is tabular data regardless of where it lives, so you should use the same functions to work with it. With dplyr, anything you can do to a local data frame you can also do to a remote database table. PostgreSQL, MySQL, SQLite and Google BigQuery support is built in; adding a new backend is a matter of implementing a handful of S3 methods. A database-backed example is sketched after this list.
  3. The bottleneck in most data analyses is the time it takes for you to figure out what to do with your data, and dplyr makes this easier by having individual functions that correspond to the most common operations (group_by, summarise, mutate, filter, select and arrange). Each function does only one thing, but does it well; the first sketch after this list shows these verbs in use.
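
To make these ideas concrete, here is a minimal sketch of the single-table verbs named above, run on R's built-in mtcars data frame. It assumes only that dplyr is installed and uses the %>% pipe that dplyr re-exports.

  library(dplyr)

  mtcars %>%
    filter(mpg > 20) %>%                 # keep rows meeting a condition
    select(mpg, cyl, wt) %>%             # keep a subset of columns
    mutate(wt_kg = wt * 453.6) %>%       # add a derived column (wt is in 1000 lb)
    group_by(cyl) %>%                    # group rows by number of cylinders
    summarise(mean_mpg = mean(mpg)) %>%  # collapse each group to one row
    arrange(desc(mean_mpg))              # order the result

The same verbs can be pointed at a database table rather than a local data frame, which is the point of idea 2. The sketch below assumes the DBI, RSQLite and dbplyr packages are available; only the data source changes, not the code that manipulates it.

  library(dplyr)

  con <- DBI::dbConnect(RSQLite::SQLite(), ":memory:")  # throwaway in-memory database
  copy_to(con, mtcars, "mtcars")                        # load the sample data into it

  tbl(con, "mtcars") %>%                                # a remote table, not a data frame
    group_by(cyl) %>%
    summarise(mean_mpg = mean(mpg, na.rm = TRUE))       # translated to SQL and run in the database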

Dalvik VM Internals

Dan Bornstein (Google)

Dalvik — the virtual machine with the unusual name — runs your code on Android. Join us to learn about the motivation for its design and get
some details about how it works. You’ll also walk away with a few tips for how to write code that works well with the platform. Be prepared
for a deep dive into technical details. Questions encouraged!
