The goal of this module is to show how to construct feature vectors from raw event sequence data using Hadoop Pig, a high-level data processing tool on top of Hadoop MapReduce. Instead of writing a Java program, you will write a high-level script in Pig Latin and let the framework translate it into MapReduce jobs for you.
Throughout this training, you will learn how to run Pig interactively and how to run Pig scripts. We will first cover the basics of Pig (the interactive shell and data types), then show how to complete the feature construction task step by step. The high-level process of feature construction is depicted below.
Pig provides a shell to manipulate data interactively. Let's start a shell and run it in local mode for demo purposes:
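A minimal way to do this (assuming the `pig` command is on your `PATH`):

```bash
pig -x local
```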
and you will see the prompt of Pig's interactive shell (the grunt shell):
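```
grunt>
```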
Next, you can input Pig Latin statements, the basic constructs for using Pig. For example:
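A sketch of such a statement (the exact field names and types are assumptions chosen to match the raw data described below):

```pig
case_events = LOAD 'data/case.csv' USING PigStorage(',') AS
              (patientid:chararray, eventname:chararray, dateoffset:int, value:double);
```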
Here we call `case_events` a relation in Pig Latin. In this statement, we load data from the `data/case.csv` file into the `case_events` relation. We also specify the schema of the data, which defines a four-field tuple with the name and type of each field corresponding to our raw data. Here we use `PigStorage`, the most common adapter in Pig for loading/saving data from/to the file system (including HDFS). Of course, you can load data from other sources (such as databases) using other `Storage` interfaces.
You can check the schema using the `DESCRIBE` operator:
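```pig
DESCRIBE case_events;
```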
and display the data with `DUMP`:
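```pig
DUMP case_events;
```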
Sometimes `DUMP` will generate a lot of output when you only want to see a few examples. Pig itself doesn't have an operator like `head`; instead, you can use `LIMIT` to print just the top 10 items of a relation `A`, as sketched below.
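One way to do this (`A_top10` is just an illustrative alias):

```pig
A_top10 = LIMIT A 10;
DUMP A_top10;
```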
Pig will not run immediately after you input a statement. Only when you need to save (`STORE`) or dump (`DUMP`) results will Pig actually run. The good part of this property is that Pig can optimize the execution plan internally. A potential problem is that you may not realize you have made a mistake until you reach a later statement that produces output. If you are not sure, you can `DUMP` frequently while working on a small data set.
The shell also provides other commands. Important ones include (but are not limited to):

- `fs`: serves the same purpose as `hdfs dfs`, so you can type `fs -ls` directly in the Pig shell instead of `hdfs dfs -ls`.
- `pwd`: prints the present working directory, which helps when a file is not found.

Type `help` to learn more about these commands in the Pig shell. The Pig operators covered in the later examples are listed in the table below; please refer to the Pig Official Documentation to learn more.
Operator | Explanation |
---|---|
DISTINCT | Removes duplicate tuples in a relation |
FILTER | Selects tuples from a relation based on some condition |
FOREACH | Generates data transformations based on columns of data |
GROUP | Groups the data in one or more relations |
JOIN (inner) | Performs an inner join of two or more relations based on common field values |
LIMIT | Limits the number of output tuples |
LOAD | Loads data from the file system |
ORDER BY | Sorts a relation based on one or more fields |
RANK | Returns each tuple with the rank within a relation |
SPLIT | Partitions a relation into two or more relations |
STORE | Stores or saves results to the file system |
UNION | Computes the union of two or more relations, does not eliminate duplicate tuples |
REGISTER | Registers a JAR file so that the UDFs in the file can be used |
Finally, type `quit` to leave the shell.
In this section, we briefly describe data types. Pig can work with simple types like `int` and `double`, but the more important types are `tuple` and `bag`.
A tuple is usually represented with `()`. For example, each record of `case_events` is a tuple:
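An illustrative tuple (the `dateoffset` and `value` numbers here are made up for the example):

```
(FBFD014814507B5C,DIAG38845,700,1.0)
```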
In Pig Latin, we can fetch fields either by index (like `$0`) or by name (like `patientid`). With indexes we can also fetch a range of fields: for example, `$2..` means from the field at index 2 through the last field.
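A short sketch of these projection styles (the alias names are arbitrary, and the field layout follows the assumed schema above):

```pig
-- project by name, by index, and by an index range
by_name  = FOREACH case_events GENERATE patientid, eventname;
by_index = FOREACH case_events GENERATE $0, $1;
by_range = FOREACH case_events GENERATE $2..;   -- dateoffset and value
```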
A bag is usually denoted with `{}`. From the result of `DESCRIBE case_events`, we can see that `case_events` itself is a bag. You can regard a bag as a special unordered `set` that doesn't check for duplicates.
Check out the official documentation about data types for more. You will find examples of these types in the samples below; pay attention to the results of `DESCRIBE`, where you will see the names and types of fields.
Next, you will learn by practice how to construct features for predictive modeling. You will use built-in operators like `GROUP BY` and `JOIN`, as well as User Defined Functions (UDFs) written in Python. The result of feature construction will be a feature matrix that can be used by many machine learning packages.
Feature construction works as shown below, where the sample data format of each step is depicted.
We will start by loading the raw data. Then we extract the prediction target (i.e., whether the patient will have heart failure or not). Next, we filter and aggregate the events of each patient into features. After that, we link the prediction target and the features to generate complete training/testing samples. Finally, we split the data into training and testing sets and save them.
First, make sure you are in the `bigdata-bootcamp/sample/pig` folder. You can check the availability of the raw data files with:
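For example (assuming the raw CSV files live under `data/`, as in the `LOAD` statement above):

```bash
ls data/
```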
Then, let's load the data into a relation:
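A sketch of the load step (the relation name `events` and the idea of loading every CSV file under `data/` into a single relation are assumptions for illustration):

```pig
events = LOAD 'data/' USING PigStorage(',') AS
         (patientid:chararray, eventname:chararray, dateoffset:int, value:double);
```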
Our data set can be used for predicting heart failure (HF), and we want to predict heart failure one year before it happens. As a result, we need to find the heart failure event date (for case patients, an event value of 1 means HF happened; for control patients, the value is 0 as there's no HF) and filter out events that happened within one year before that date. As illustrated in the figure above, we will need to find the HF diagnostic date and use that date to filter out the events that fall inside the prediction window.
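A sketch of this step. The HF marker event name (`hfevent` below) and the alias names are assumptions, and a 365-day year stands in for the one-year prediction window:

```pig
-- extract the HF event of each patient as the prediction target
hf_events = FILTER events BY eventname == 'hfevent';
targets   = FOREACH hf_events GENERATE patientid, dateoffset AS hfdate, value AS label;

-- attach each patient's HF date to all of that patient's events
events_with_hfdate = JOIN events BY patientid, targets BY patientid;

-- keep only events that happened more than one year before the HF date
filtered_events = FILTER events_with_hfdate BY dateoffset <= hfdate - 365;
```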
After the `JOIN`, we have some redundant fields that we no longer need, so we can project `filtered_events` into a simpler format.
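For example, keeping only the patient id, event name, and value (field names follow the sketch above):

```pig
filtered_events = FOREACH filtered_events GENERATE
                  events::patientid AS patientid, eventname, value;
```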
Notice that since `dateoffset` is no longer useful after filtering, we dropped it.
Our raw data is an event sequence. In order to aggregate it into features suitable for machine learning, we can sum up the values of each event type as the feature value for that event. Each event type will become a feature, and we will directly use the event name as the feature name. For example, given the raw event sequence of the patient with ID `FBFD014814507B5C`, we can derive that patient's (feature name, feature value) pairs as sketched below.
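An illustrative example (the `dateoffset` and `value` numbers are made up):

```
raw events (patientid, eventname, dateoffset, value):
(FBFD014814507B5C,DIAG38845,700,1.0)
(FBFD014814507B5C,DIAG38845,650,1.0)
(FBFD014814507B5C,DIAGV6546,630,1.0)

aggregated (patientid, feature name, feature value):
(FBFD014814507B5C,DIAG38845,2.0)
(FBFD014814507B5C,DIAGV6546,1.0)
```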
The code below will aggregate `filtered_events` from the previous filter step into tuples in `(patientid, feature name, feature value)` format.
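One way to write this aggregation (the output field names are assumptions reused in later sketches):

```pig
-- group by (patient, event) and sum the event values
grouped_events      = GROUP filtered_events BY (patientid, eventname);
feature_name_values = FOREACH grouped_events GENERATE
                      group.patientid AS patientid,
                      group.eventname AS featurename,
                      SUM(filtered_events.value) AS featurevalue;
```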
In a machine learning setting, we want to assign an index to each distinct feature rather than directly using its name. For example, DIAG38845 corresponds to feature-id=1 and DIAGV6546 corresponds to feature-id=2.
The code below extracts the unique feature names using the `DISTINCT` operator and assigns an index to each feature name with the `RANK` operator.
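A sketch of this mapping step (the alias names are assumptions; `RANK` prepends a rank field to each tuple):

```pig
feature_names      = FOREACH feature_name_values GENERATE featurename;
unique_names       = DISTINCT feature_names;
ranked_names       = RANK unique_names;
feature_name_index = FOREACH ranked_names GENERATE $0 AS featureid, featurename;
```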
Next, we can update `feature_name_values` to use the feature index rather than the feature name.
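This can be done with a `JOIN` against the mapping built above (alias and field names follow the earlier sketches):

```pig
joined            = JOIN feature_name_values BY featurename, feature_name_index BY featurename;
feature_id_values = FOREACH joined GENERATE
                    feature_name_values::patientid    AS patientid,
                    feature_name_index::featureid     AS featureid,
                    feature_name_values::featurevalue AS featurevalue;
```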
Now we are approaching the final step: we need to create a feature vector for each patient. Our ultimate result will represent each patient as a feature vector associated with the target we want to predict. We already have the target in the `targets` relation. Our final representation is shown below:
For example, consider the patient `2363A06EF118B098`, who does not have heart failure (target value 0) and has the following (feature id, value) tuples
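(reconstructed here to match the encoded line shown below; the exact tuples in the original data may differ):

```
(2363A06EF118B098,1,60.0)
(2363A06EF118B098,4,30.0)
(2363A06EF118B098,9,60.0)
(2363A06EF118B098,23,10.0)
(2363A06EF118B098,45,90.0)
```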
we will encode this patient's features as
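```
0 1:60 4:30 9:60 23:10 45:90
```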
Notice that the `featureid` values are in increasing order; this is required by many machine learning packages. We call such a pair of target (aka label) and features a `sample`.
Let's group `feature_id_values` by `patientid` and check the structure:
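For example (the alias name is an assumption):

```pig
grouped_feature_id_values = GROUP feature_id_values BY patientid;
DESCRIBE grouped_feature_id_values;
```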
We find that within each group `feature_id_values` is a bag, and we want to convert it into a string like `1:60 4:30 9:60 23:10 45:90` as mentioned above. Here we will employ a UDF defined in `utils.py`:
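A sketch of what such a UDF could look like (the function name `bag_to_svmlight` and the exact formatting are assumptions; the sketch also sorts by feature id so the ids come out in increasing order; Pig's Jython engine provides the `outputSchema` decorator):

```python
@outputSchema("feature:chararray")
def bag_to_svmlight(input):
    # input is a bag of (patientid, featureid, featurevalue) tuples;
    # sort by feature id and emit 'featureid:featurevalue' pairs joined by spaces
    pairs = sorted((int(fid), fvalue) for _, fid, fvalue in input)
    return ' '.join('%s:%s' % (fid, fvalue) for fid, fvalue in pairs)
```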
The script simply enumerates all tuples from `input`, forms id:value pairs, and joins them into a single string. `@outputSchema("feature:chararray")` specifies the name and type of the return value. In order to use the UDF, we need to register it first.
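A sketch of registering the UDF and applying it to each group (the namespace `utils` and the function name are the assumptions from above):

```pig
REGISTER 'utils.py' USING jython AS utils;

feature_vectors = FOREACH grouped_feature_id_values GENERATE
                  group AS patientid,
                  utils.bag_to_svmlight(feature_id_values) AS features;
```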
Next, we can join `targets` and `feature_vectors` to associate each feature vector with its target.
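For example, keeping only the label and the encoded feature string (field names follow the earlier sketches):

```pig
joined_samples = JOIN targets BY patientid, feature_vectors BY patientid;
samples        = FOREACH joined_samples GENERATE
                 targets::label AS label,
                 feature_vectors::features AS features;
```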
We are almost there; we just need to save the `samples`. In machine learning settings, it is common practice to split the data into training and testing samples. We can do that by associating each sample with a random key and splitting on that key.
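One possible split (the 80/20 ratio is an arbitrary choice for illustration):

```pig
-- attach a random key, split on it, then drop the key again
keyed_samples = FOREACH samples GENERATE RANDOM() AS assignmentkey, *;
SPLIT keyed_samples INTO testing IF assignmentkey <= 0.2, training OTHERWISE;
training = FOREACH training GENERATE $1..;
testing  = FOREACH testing GENERATE $1..;
```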
Then, we can save
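For example (the output directory names and the space delimiter are assumptions):

```pig
STORE training INTO 'training' USING PigStorage(' ');
STORE testing INTO 'testing' USING PigStorage(' ');
```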
Running commands interactively is efficient, but sometimes we want to save the commands for future reuse. We can save them into a script file (e.g., `features.pig`) and run the entire script in batch mode.
You can check this out in the `sample/pig` folder. Navigate there and run the script simply with:
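For example, in local mode:

```bash
pig -x local features.pig
```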
Exercise: Use data from more than one year before the HF date but no earlier than two years before it (i.e., a one-year observation window). This additional condition can be applied together with the one-year prediction window, i.e.
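a sketch of the combined filter condition (using the relation and field names from the earlier sketches, with 365/730 days standing in for one/two years):

```pig
filtered_events = FILTER events_with_hfdate BY
                  (dateoffset <= hfdate - 365) AND (dateoffset > hfdate - 730);
```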