NOTICE
The upcoming release of Featuretools 1.0.0 contains several breaking changes. Users are encouraged to test this version prior to release by installing from GitHub:
pip install https://github.com/alteryx/featuretools/archive/woodwork-integration.zip
For details on migrating to the new version, refer to Transitioning to Featuretools Version 1.0. Please report any issues in the Featuretools GitHub repo or by messaging in Alteryx Open Source Slack.
Here we attempt to answer some commonly asked questions that appear on GitHub and Stack Overflow.
[1]:
import featuretools as ft
import pandas as pd
import numpy as np
import woodwork as ww
EntitySet
After you create your EntitySet, you may wish to view the column names. An EntitySet contains multiple DataFrames, one for each table in the EntitySet.
[2]:
es = ft.demo.load_mock_customer(return_entityset=True)
es
Entityset: transactions
  DataFrames:
    transactions [Rows: 500, Columns: 6]
    products [Rows: 5, Columns: 3]
    sessions [Rows: 35, Columns: 5]
    customers [Rows: 5, Columns: 5]
  Relationships:
    transactions.product_id -> products.product_id
    transactions.session_id -> sessions.session_id
    sessions.customer_id -> customers.customer_id
If you want to view the underlying DataFrame, you can do the following:
[3]:
es['transactions'].head()
If you want to view the columns and types for the “transactions” DataFrame, you can do the following:
[4]:
es['transactions'].ww
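If you only need the column names themselves, the underlying pandas attributes work as well. A minimal sketch:

# es['transactions'] is a regular DataFrame, so standard pandas attributes apply
list(es['transactions'].columns)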
The function normalize_dataframe creates a new DataFrame and a relationship from unique values of an existing DataFrame. It takes 2 similar arguments:
additional_columns removes columns from the base DataFrame and moves them to the new DataFrame.
copy_columns keeps the given columns in the base DataFrame, but also copies them to the new DataFrame.
[5]:
data = ft.demo.load_mock_customer()

transactions_df = data["transactions"].merge(data["sessions"]).merge(data["customers"])
products_df = data["products"]

es = ft.EntitySet(id="customer_data")

es = es.add_dataframe(dataframe_name="transactions",
                      dataframe=transactions_df,
                      index="transaction_id",
                      time_index="transaction_time")

es = es.add_dataframe(dataframe_name="products",
                      dataframe=products_df,
                      index="product_id")

es = es.add_relationship("products", "product_id", "transactions", "product_id")
Before we normalize to create a new DataFrame, let’s look at the base DataFrame.
[6]:
Notice the columns session_id, session_start, join_date, device, customer_id, and zip_code.
[7]:
es = es.normalize_dataframe(base_dataframe_name="transactions",
                            new_dataframe_name="sessions",
                            index="session_id",
                            make_time_index="session_start",
                            additional_columns=["join_date"],
                            copy_columns=["device", "customer_id", "zip_code", "session_start"])
Above, we normalized the columns to create a new DataFrame.
For additional_columns, the column ['join_date'] will be removed from the transactions DataFrame and moved to the new sessions DataFrame.
For copy_columns, the following columns ['device', 'customer_id', 'zip_code','session_start'] will be copied from the transactions DataFrame to the new sessions DataFrame.
Let’s see this in the actual EntitySet.
[8]:
Notice above how ['device', 'customer_id', 'zip_code', 'session_start'] are still in the transactions DataFrame, while ['join_date'] is not. All of them, however, now appear in the sessions DataFrame, as seen below.
[9]:
es['sessions'].head()
During the creation of your EntitySet, you might be wondering why the semantic tags in your columns change.
[10]:
data = ft.demo.load_mock_customer()

transactions_df = data["transactions"].merge(data["sessions"]).merge(data["customers"])
products_df = data["products"]

es = ft.EntitySet(id="customer_data")

es = es.add_dataframe(dataframe_name="transactions",
                      dataframe=transactions_df,
                      index="transaction_id",
                      time_index="transaction_time")

es.plot()
If a column contains semantic tags, they will appear on the right side of a semicolon in the plot above. Notice how session_id and session_start do not have any semantic tags currently associated with them.
Now, let’s normalize the transactions DataFrame to create a new DataFrame.
[11]:
es = es.normalize_dataframe(base_dataframe_name="transactions",
                            new_dataframe_name="sessions",
                            index="session_id",
                            make_time_index="session_start",
                            additional_columns=["session_start"])

es.plot()
The session_id now has the semantic tag foreign_key in the transactions DataFrame, and index in the new DataFrame, sessions. This is the case because when we normalize the DataFrame, we create a new relationship between transactions and sessions. There is a one-to-many relationship between the parent DataFrame, sessions, and the child DataFrame, transactions.
Therefore, session_id has the semantic tag foreign_key in transactions because it represents an index in another DataFrame. There would be a similar effect if we added another DataFrame using add_dataframe and add_relationship.
In addition, when we created the new DataFrame, we set session_start as the time_index. This added the semantic tag time_index to the session_start column in the new sessions DataFrame because it now represents a time_index.
You can directly update the description or metadata attributes of the column schema. However, you must specifically use the column schema returned by DataFrame.ww.columns['col_name'], not DataFrame.ww['col_name'].ww.schema. The column schema from DataFrame.ww.columns['col_name'] is still associated with the EntitySet and propagates any attribute updates, whereas the other does not. As an example, this is how you can update a column’s description or metadata:
column_schema = df.ww.columns['col_name']
column_schema.description = 'my description'
column_schema.metadata.update(key='value')
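To confirm that the update propagated, you can read the attributes back through the same accessor (a small sketch):

# both reads go through the column schema that is still attached to the EntitySet
df.ww.columns['col_name'].description
df.ww.columns['col_name'].metadata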
You might want to create features that are conditioned on multiple values before they are calculated. This would require the use of interesting_values. However, since we are trying to create the feature with multiple conditions, we will need to modify the DataFrame before we create the EntitySet.
Let’s look at how you might accomplish this.
First, let’s create our DataFrames.
[12]:
data = ft.demo.load_mock_customer()

transactions_df = data["transactions"].merge(data["sessions"]).merge(data["customers"])
products_df = data["products"]
[13]:
transactions_df.head()
[14]:
products_df.head()
Now, let’s modify our transactions DataFrame to create the additional column that represents multiple conditions for our feature.
[15]:
transactions_df['product_id_device'] = transactions_df['product_id'].astype(str) + ' and ' + transactions_df['device']
Here, we created a new column called product_id_device, which simply combines the product_id column and the device column.
Now let’s create our EntitySet.
[16]:
es = ft.EntitySet(id="customer_data") es = es.add_dataframe(dataframe_name="transactions", dataframe=transactions_df, index="transaction_id", time_index="transaction_time", logical_types={"product_id": ww.logical_types.Categorical, "product_id_device": ww.logical_types.Categorical, "zip_code": ww.logical_types.PostalCode}) es = es.add_dataframe(dataframe_name="products", dataframe=products_df, index="product_id") es = es.normalize_dataframe(base_dataframe_name="transactions", new_dataframe_name="sessions", index="session_id", additional_columns=["device", "product_id_device", "customer_id"]) es = es.normalize_dataframe(base_dataframe_name="sessions", new_dataframe_name="customers", index="customer_id") es
Entityset: customer_data
  DataFrames:
    transactions [Rows: 500, Columns: 9]
    products [Rows: 5, Columns: 2]
    sessions [Rows: 35, Columns: 5]
    customers [Rows: 5, Columns: 2]
  Relationships:
    transactions.session_id -> sessions.session_id
    sessions.customer_id -> customers.customer_id
Now, we are ready to add our interesting values.
First, let’s view our options for what the interesting values could be.
[17]:
interesting_values = transactions_df['product_id_device'].unique().tolist()
interesting_values
['5 and desktop', '2 and desktop', '3 and desktop', '4 and desktop', '1 and desktop', '4 and mobile', '5 and mobile', '1 and mobile', '3 and mobile', '2 and mobile', '4 and tablet', '3 and tablet', '2 and tablet', '1 and tablet', '5 and tablet']
If you wanted to, you could pick a subset of these, and the where features created would only use those conditions. In our example, we will use all the possible interesting values.
Here, we set all of these values as our interesting values for this specific DataFrame and column. If we wanted to, we could make interesting values in the same way for more than one column, but we will just stick with this one for this example.
[18]:
values = {'product_id_device': interesting_values}
es.add_interesting_values(dataframe_name='sessions', values=values)
Now we can run DFS.
[19]:
feature_matrix, feature_defs = ft.dfs(entityset=es,
                                      target_dataframe_name="customers",
                                      agg_primitives=["count"],
                                      where_primitives=["count"],
                                      trans_primitives=[])

feature_matrix.head()
5 rows × 32 columns
To better understand the where clause features, let’s examine one of them. The feature COUNT(sessions WHERE product_id_device = 5 and tablet) tells us in how many sessions the customer purchased product_id 5 while on a tablet. Notice how the feature depends on multiple conditions (product_id = 5 and device = tablet).
[20]:
feature_matrix[["COUNT(sessions WHERE product_id_device = 5 and tablet)"]]
Support for Dask EntitySets and Koalas EntitySets is still in Beta - if you encounter any errors using either of these approaches, please let us know by creating a new issue on Github.
Yes! Featuretools supports creating an EntitySet from Dask dataframes or from Koalas dataframes. You can simply follow the same process you would when creating an EntitySet from pandas dataframes.
There are some limitations to be aware of when using Dask or Koalas dataframes. When creating a DataFrame, type inference can significantly slow down the runtime compared to pandas DataFrames, so users are encouraged to specify logical types for all columns during creation. Also, other quality checks are not performed, such as checking for unique index values. An EntitySet must be created entirely of one type of DataFrame (Dask, Koalas, or pandas) - you cannot mix pandas DataFrames, Dask DataFrames, and Koalas DataFrames with each other in the same EntitySet.
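For example, building an EntitySet from a Dask DataFrame follows the same pattern as with pandas; the main difference is that you should supply logical types up front. A minimal sketch, assuming the transactions_df DataFrame from the earlier cells (in practice you would list a logical type for every column):

import dask.dataframe as dd
import woodwork as ww

# convert the pandas DataFrame into a Dask DataFrame
transactions_dask = dd.from_pandas(transactions_df, npartitions=2)

es_dask = ft.EntitySet(id="customer_data_dask")
es_dask = es_dask.add_dataframe(dataframe_name="transactions",
                                dataframe=transactions_dask,
                                index="transaction_id",
                                time_index="transaction_time",
                                # specifying logical types avoids slow type inference on Dask data
                                logical_types={"product_id": ww.logical_types.Categorical,
                                               "zip_code": ww.logical_types.PostalCode})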
For more information on creating an EntitySet from Dask dataframes or from Koalas dataframes, see the Using Dask EntitySets and the Using Koalas EntitySets guides.
You may have created your EntitySet, and then applied DFS to create features. However, you may be puzzled as to why no aggregation features were created.
This is most likely because you have a single DataFrame in your EntitySet, and DFS is not capable of creating aggregation features with fewer than 2 DataFrames. Featuretools looks for a relationship, and aggregates based on that relationship.
Let’s look at a simple example.
[21]:
data = ft.demo.load_mock_customer()

transactions_df = data["transactions"].merge(data["sessions"]).merge(data["customers"])

es = ft.EntitySet(id="customer_data")

es = es.add_dataframe(dataframe_name="transactions",
                      dataframe=transactions_df,
                      index="transaction_id")

es
Entityset: customer_data
  DataFrames:
    transactions [Rows: 500, Columns: 11]
  Relationships:
    No relationships
Notice how we only have 1 DataFrame in our EntitySet. If we try to create aggregation features on this EntitySet, it will not be possible because DFS needs 2 DataFrames to generate aggregation features.
[22]:
feature_matrix, feature_defs = ft.dfs(entityset=es,
                                      target_dataframe_name="transactions")
feature_defs
/home/docs/checkouts/readthedocs.org/user_builds/feature-labs-inc-featuretools/envs/woodwork-integration/lib/python3.7/site-packages/featuretools/synthesis/deep_feature_synthesis.py:156: UserWarning: Only one dataframe in entityset, changing max_depth to 1 since deeper features cannot be created
  warnings.warn("Only one dataframe in entityset, changing max_depth to "
[<Feature: session_id>, <Feature: product_id>, <Feature: amount>, <Feature: customer_id>, <Feature: device>, <Feature: zip_code>, <Feature: DAY(date_of_birth)>, <Feature: DAY(join_date)>, <Feature: DAY(session_start)>, <Feature: DAY(transaction_time)>, <Feature: MONTH(date_of_birth)>, <Feature: MONTH(join_date)>, <Feature: MONTH(session_start)>, <Feature: MONTH(transaction_time)>, <Feature: WEEKDAY(date_of_birth)>, <Feature: WEEKDAY(join_date)>, <Feature: WEEKDAY(session_start)>, <Feature: WEEKDAY(transaction_time)>, <Feature: YEAR(date_of_birth)>, <Feature: YEAR(join_date)>, <Feature: YEAR(session_start)>, <Feature: YEAR(transaction_time)>]
None of the above features are aggregation features. To fix this issue, you can add another DataFrame to your EntitySet.
Solution #1 - You can add a new DataFrame if you have additional data.
[23]:
products_df = data["products"] es = es.add_dataframe(dataframe_name="products", dataframe=products_df, index="product_id") es
Entityset: customer_data
  DataFrames:
    transactions [Rows: 500, Columns: 11]
    products [Rows: 5, Columns: 2]
  Relationships:
    No relationships
Notice how we now have an additional DataFrame in our EntitySet, called products.
Solution #2 - You can normalize an existing DataFrame.
[24]:
es = es.normalize_dataframe(base_dataframe_name="transactions",
                            new_dataframe_name="sessions",
                            index="session_id",
                            make_time_index="session_start",
                            additional_columns=["device", "customer_id", "zip_code", "join_date"],
                            copy_columns=["session_start"])

es
Entityset: customer_data
  DataFrames:
    transactions [Rows: 500, Columns: 7]
    products [Rows: 5, Columns: 2]
    sessions [Rows: 35, Columns: 6]
  Relationships:
    transactions.session_id -> sessions.session_id
Notice how we now have an additional DataFrame in our EntitySet, called sessions. Here, the normalization created a relationship between transactions and sessions. However, we could have specified a relationship between transactions and products if we had only used Solution #1.
Now, we can generate aggregation features.
[25]:
feature_matrix, feature_defs = ft.dfs(entityset=es,
                                      target_dataframe_name="transactions")
feature_defs[:-10]
[<Feature: session_id>, <Feature: product_id>, <Feature: amount>, <Feature: DAY(date_of_birth)>, <Feature: DAY(session_start)>, <Feature: DAY(transaction_time)>, <Feature: MONTH(date_of_birth)>, <Feature: MONTH(session_start)>, <Feature: MONTH(transaction_time)>, <Feature: WEEKDAY(date_of_birth)>, <Feature: WEEKDAY(session_start)>, <Feature: WEEKDAY(transaction_time)>, <Feature: YEAR(date_of_birth)>, <Feature: YEAR(session_start)>, <Feature: YEAR(transaction_time)>, <Feature: sessions.device>, <Feature: sessions.customer_id>, <Feature: sessions.zip_code>, <Feature: sessions.COUNT(transactions)>, <Feature: sessions.MAX(transactions.amount)>, <Feature: sessions.MEAN(transactions.amount)>, <Feature: sessions.MIN(transactions.amount)>, <Feature: sessions.MODE(transactions.product_id)>, <Feature: sessions.NUM_UNIQUE(transactions.product_id)>, <Feature: sessions.SKEW(transactions.amount)>]
A few of the aggregation features are:
<Feature: sessions.MAX(transactions.amount)>
<Feature: sessions.SKEW(transactions.amount)>
<Feature: sessions.MIN(transactions.amount)>
<Feature: sessions.MEAN(transactions.amount)>
<Feature: sessions.COUNT(transactions)>
One issue you may encounter while running ft.dfs is slow performance. While Featuretools has generally sensible default settings for calculating features, you may want to speed things up when you are calculating a large number of features.
One quick way to speed up performance is by adjusting the n_jobs settings of ft.dfs or ft.calculate_feature_matrix.
# setting n_jobs to -1 will use all cores
feature_matrix, feature_defs = ft.dfs(entityset=es,
                                      target_dataframe_name="customers",
                                      n_jobs=-1)

feature_matrix = ft.calculate_feature_matrix(entityset=es,
                                             features=feature_defs,
                                             n_jobs=-1)
For more ways to speed up performance, please visit:
Improving Computational Performance
When using DFS to generate features, you may wish to include only certain features. There are multiple ways you can do this:
Use ignore_columns to specify columns in a DataFrame that should not be used to create features. It is a dictionary mapping dataframe names to a list of column names to ignore.
Use drop_contains to drop features that contain any of the strings listed in this parameter.
Use drop_exact to drop features that exactly match any of the strings listed in this parameter.
Here is an example of using all three parameters:
[26]:
es = ft.demo.load_mock_customer(return_entityset=True)

feature_matrix, feature_defs = ft.dfs(entityset=es,
                                      target_dataframe_name="customers",
                                      ignore_columns={"transactions": ["amount"],
                                                      "customers": ["age", "gender", "date_of_birth"]},  # ignore these columns
                                      drop_contains=["customers.SUM("],  # drop features that contain these strings
                                      drop_exact=["STD(transactions.quantity)"])  # drop features that exactly match
When using DFS to generate features, you may wish to use only certain features or DataFrames for specific primitives. This can be done through the primitive_options parameter. The primitive_options parameter is a dictionary that maps a primitive or a tuple of primitives to a dictionary containing options for the primitive(s). A primitive or tuple of primitives can also be mapped to a list of option dictionaries if the primitive(s) takes multiple inputs. The primitive keys can be the string names of the primitive, the primitive class, or specific instances of the primitive. Each dictionary supplies options for their respective input column. There are multiple ways to control how primitives get applied through these options:
Use ignore_dataframes to specify DataFrames that should not be used to create features for that primitive. It is a list of DataFrame names to ignore.
Use include_dataframes to specify the only DataFrames to be included to create features for that primitive. It is a list of DataFrame names to include.
Use ignore_columns to specify columns in a DataFrame that should not be used to create features for that primitive. It is a dictionary mapping a DataFrame name to a list of column names to ignore.
Use include_columns to specify the only columns in a DataFrame that should be used to create features for that primitive. It is a dictionary mapping a DataFrame name to a list of column names to include.
You can also use primitive_options to specify which DataFrames or columns you wish to use as groupbys for groupby transformation primitives:
Use ignore_groupby_dataframes to specify DataFrames that should not be used to get groupbys for that primitive. It is a list of DataFrame names to ignore.
Use include_groupby_dataframes to specify the only DataFrames that should be used to get groupbys for that primitive. It is a list of DataFrame names to include.
Use ignore_groupby_columns to specify columns in a DataFrame that should not be used as groupbys for that primitive. It is a dictionary mapping a DataFrame name to a list of column names to ignore.
Use include_groupby_columns to specify the only columns in a DataFrame that should be used as groupbys for that primitive. It is a dictionary mapping a DataFrame name to a list of column names to include.
Here is an example of using some of these options:
[27]:
es = ft.demo.load_mock_customer(return_entityset=True)

feature_matrix, feature_defs = ft.dfs(entityset=es,
                                      target_dataframe_name="customers",
                                      primitive_options={
                                          # for mode, ignore the "sessions" DataFrame, the "brand" column in
                                          # "products", and the "product_id" column in "transactions"
                                          "mode": {"ignore_dataframes": ["sessions"],
                                                   "ignore_columns": {"products": ["brand"],
                                                                      "transactions": ["product_id"]}},
                                          # for count and mean, only include the "sessions" and "transactions" DataFrames
                                          ("count", "mean"): {"include_dataframes": ["sessions", "transactions"]}
                                      })
Note that if options are given for a specific instance of a primitive and for the primitive generally (either by string name or class), the instances with their own options will not use the generic options. For example, in this case:
from featuretools.primitives import Mean

special_mean = Mean()
options = {
    special_mean: {'include_dataframes': ['customers']},
    'mean': {'include_dataframes': ['sessions']},
}
the primitive special_mean will not use the DataFrame sessions because its options have it only include customers. Every other instance of the Mean primitive will use the 'mean' options.
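To illustrate how these options would be used, here is a sketch of a DFS call that passes both the instance and the string name (the choice of customers as the target DataFrame is assumed here):

feature_matrix, feature_defs = ft.dfs(entityset=es,
                                      target_dataframe_name="customers",
                                      # special_mean uses its own options; the string "mean" uses the generic options
                                      agg_primitives=[special_mean, "mean"],
                                      primitive_options=options)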
For more examples of specifying options for DFS, please visit:
Specifying Primitive Options
The cutoff time will be set to the current time using cutoff_time = datetime.now().
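In other words, omitting the cutoff time is roughly equivalent to passing the current time explicitly (a sketch):

from datetime import datetime

# a single cutoff time applies to every row of the target DataFrame
feature_matrix, feature_defs = ft.dfs(entityset=es,
                                      target_dataframe_name="customers",
                                      cutoff_time=datetime.now())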
You may encounter a situation where you wish to make predictions using only a certain amount of historical data. You can accomplish this using the training_window parameter in ft.dfs. When you use training_window, Featuretools will use the historical data between the cutoff_time and cutoff_time - training_window.
In order to make the calculation, Featuretools will check the time in the time_index column of the target_dataframe.
[28]:
es = ft.demo.load_mock_customer(return_entityset=True)
es['customers'].ww.time_index
'join_date'
Our target_dataframe has a time_index, which is needed for the training_window calculation. Here, we are creating a cutoff time DataFrame so that we can have a unique training window for each customer.
[29]:
cutoff_times = pd.DataFrame()
cutoff_times['customer_id'] = [1, 2, 3, 1]
cutoff_times['time'] = pd.to_datetime(['2014-1-1 04:00',
                                       '2014-1-1 05:00',
                                       '2014-1-1 06:00',
                                       '2014-1-1 08:00'])
cutoff_times['label'] = [True, True, False, True]

feature_matrix, feature_defs = ft.dfs(entityset=es,
                                      target_dataframe_name="customers",
                                      cutoff_time=cutoff_times,
                                      cutoff_time_in_index=True,
                                      training_window="1 hour")

feature_matrix.head()
4 rows × 76 columns
Above, we ran DFS with a training_window of 1 hour to create features that only used customer data collected in the last hour (from the cutoff time we provided).
You can run DFS on a single table. Featuretools will be able to generate features for your data, but only transform features.
For example:
[30]:
transactions_df = ft.demo.load_mock_customer(return_single_table=True)

es = ft.EntitySet(id="customer_data")

es = es.add_dataframe(dataframe_name="transactions",
                      dataframe=transactions_df,
                      index="transaction_id",
                      time_index="transaction_time")

feature_matrix, feature_defs = ft.dfs(entityset=es,
                                      target_dataframe_name="transactions",
                                      trans_primitives=['time_since', 'day', 'is_weekend', 'cum_min',
                                                        'minute', 'weekday', 'percentile', 'year',
                                                        'week', 'cum_mean'])
Before we examine the output, let’s look at our original single table.
[31]:
Now we can look at the transformations that Featuretools was able to apply to this single DataFrame to create a feature matrix.
[32]:
feature_matrix.head()
5 rows × 44 columns
One concern you might have with using DFS is about label leakage. You want to make sure that labels in your data aren’t used incorrectly to create features and the feature matrix.
Featuretools is particularly focused on helping users avoid label leakage.
There are two ways to prevent label leakage depending on if your data has timestamps or not.
In the case where you do not have timestamps, you can create one EntitySet using only the training data and then run ft.dfs. This will create a feature matrix using only the training data, but also return a list of feature definitions. Next, you can create an EntitySet using the test data and recalculate the same features by calling ft.calculate_feature_matrix with the list of feature definitions from before.
Here is what that flow would look like:
First, let’s create our training data.
[33]:
train_data = pd.DataFrame({"customer_id": [1, 2, 3, 4, 5],
                           "age": [40, 50, 10, 20, 30],
                           "gender": ["m", "f", "m", "f", "f"],
                           "signup_date": pd.date_range('2014-01-01 01:41:50', periods=5, freq='25min'),
                           "labels": [True, False, True, False, True]})

train_data.head()
Now, we can create an EntitySet for our training data.
[34]:
es_train_data = ft.EntitySet(id="customer_train_data")

es_train_data = es_train_data.add_dataframe(dataframe_name="customers",
                                            dataframe=train_data,
                                            index="customer_id")

es_train_data
Entityset: customer_train_data
  DataFrames:
    customers [Rows: 5, Columns: 5]
  Relationships:
    No relationships
Next, we are ready to create our features, and feature matrix for the training data. We don’t want Featuretools to use the labels column to build new features, so we will use the ignore_columns option to exclude it. This would also remove the labels column from the feature matrix, so we will tell DFS to include it as a seed feature.
[35]:
labels_feature = ft.Feature(es_train_data['customers'].ww['labels'])

feature_matrix_train, feature_defs = ft.dfs(entityset=es_train_data,
                                            target_dataframe_name="customers",
                                            ignore_columns={"customers": ["labels"]},
                                            seed_features=[labels_feature])

feature_matrix_train
We will also encode our feature matrix to make the features compatible with machine learning algorithms.
[36]:
feature_matrix_train_enc, features_enc = ft.encode_features(feature_matrix_train, feature_defs)
feature_matrix_train_enc.head()
Notice how the whole feature matrix only includes numeric and boolean values now.
Now we can use the feature definitions to calculate our feature matrix for the test data, and avoid label leakage.
[37]:
test_train = pd.DataFrame({"customer_id": [6, 7, 8, 9, 10],
                           "age": [20, 25, 55, 22, 35],
                           "gender": ["f", "m", "m", "m", "m"],
                           "signup_date": pd.date_range('2014-01-01 01:41:50', periods=5, freq='25min'),
                           "labels": [True, False, False, True, True]})

es_test_data = ft.EntitySet(id="customer_test_data")

es_test_data = es_test_data.add_dataframe(dataframe_name="customers",
                                          dataframe=test_train,
                                          index="customer_id",
                                          time_index="signup_date")

# Use the feature definitions from earlier
feature_matrix_enc_test = ft.calculate_feature_matrix(features=features_enc,
                                                      entityset=es_test_data)

feature_matrix_enc_test.head()
Check out the Modeling section for an example of using the encoded matrix with sklearn.
If your data has timestamps, the best way to prevent label leakage is to use a list of cutoff times, which specify the last point in time data is allowed to be used for each row in the resulting feature matrix. To use cutoff times, you need to set a time index for each time sensitive DataFrame in your entity set.
Tip: Even if your data doesn’t have timestamps, you could add a column with dummy timestamps that Featuretools can use as a time index. A sketch of that idea follows.
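A minimal sketch of that tip, using a hypothetical DataFrame and an arbitrary placeholder date:

# a hypothetical table with no real timestamps
my_df = pd.DataFrame({"id": [1, 2, 3], "value": [10, 20, 30]})

# add a constant placeholder timestamp so the column can serve as a time index
my_df["dummy_time"] = pd.Timestamp("2020-01-01")

es_dummy = ft.EntitySet(id="data_without_timestamps")
es_dummy = es_dummy.add_dataframe(dataframe_name="observations",
                                  dataframe=my_df,
                                  index="id",
                                  time_index="dummy_time")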
When you call ft.dfs, you can provide a DataFrame of cutoff times like this:
[38]:
cutoff_times = pd.DataFrame({"customer_id": [1, 2, 3, 4, 5],
                             "time": pd.date_range('2014-01-01 01:41:50', periods=5, freq='25min')})

cutoff_times.head()
[39]:
train_test_data = pd.DataFrame({"customer_id": [1, 2, 3, 4, 5],
                                "age": [20, 25, 55, 22, 35],
                                "gender": ["f", "m", "m", "m", "m"],
                                "signup_date": pd.date_range('2010-01-01 01:41:50', periods=5, freq='25min')})

es_train_test_data = ft.EntitySet(id="customer_train_test_data")

es_train_test_data = es_train_test_data.add_dataframe(dataframe_name="customers",
                                                      dataframe=train_test_data,
                                                      index="customer_id",
                                                      time_index="signup_date")

feature_matrix_train_test, features = ft.dfs(entityset=es_train_test_data,
                                             target_dataframe_name="customers",
                                             cutoff_time=cutoff_times,
                                             cutoff_time_in_index=True)

feature_matrix_train_test.head()
Above, we have created a feature matrix that uses cutoff times to avoid label leakage. We could also encode this feature matrix using ft.encode_features.
There are 2 ways to pass primitives to DFS: as the primitive object, or as the string name of the primitive.
We will use the Transform primitive called TimeSincePrevious to illustrate the differences.
First, let’s use the string name of the primitive.
[40]:
es = ft.demo.load_mock_customer(return_entityset=True)
[41]:
feature_matrix, feature_defs = ft.dfs(entityset=es,
                                      target_dataframe_name="customers",
                                      agg_primitives=[],
                                      trans_primitives=["time_since_previous"])

feature_matrix
Now, let’s use the primitive object.
[42]:
from featuretools.primitives import TimeSincePrevious

feature_matrix, feature_defs = ft.dfs(entityset=es,
                                      target_dataframe_name="customers",
                                      agg_primitives=[],
                                      trans_primitives=[TimeSincePrevious])

feature_matrix
As we can see above, the feature matrix is the same.
However, if we need to modify controllable parameters in the primitive, we should use the primitive object. For instance, let’s make TimeSincePrevious return units of hours (the default is in seconds).
[43]:
from featuretools.primitives import TimeSincePrevious

time_since_previous_in_hours = TimeSincePrevious(unit='hours')

feature_matrix, feature_defs = ft.dfs(entityset=es,
                                      target_dataframe_name="customers",
                                      agg_primitives=[],
                                      trans_primitives=[time_since_previous_in_hours])

feature_matrix
You may wish to select a subset of your features based on some attributes.
Let’s say you wanted to select features that have the string amount in their names. You can check for this by using the get_name function on the feature definitions.
[44]:
es = ft.demo.load_mock_customer(return_entityset=True)

feature_defs = ft.dfs(entityset=es,
                      target_dataframe_name="customers",
                      features_only=True)

features_with_amount = []
for x in feature_defs:
    if 'amount' in x.get_name():
        features_with_amount.append(x)

features_with_amount[0:5]
[<Feature: MAX(transactions.amount)>, <Feature: MEAN(transactions.amount)>, <Feature: MIN(transactions.amount)>, <Feature: SKEW(transactions.amount)>, <Feature: STD(transactions.amount)>]
You might also want to only select features that are aggregation features.
[45]:
from featuretools import AggregationFeature

features_only_aggregations = []
for x in feature_defs:
    if type(x) == AggregationFeature:
        features_only_aggregations.append(x)

features_only_aggregations[0:5]
[<Feature: COUNT(sessions)>, <Feature: MODE(sessions.device)>, <Feature: NUM_UNIQUE(sessions.device)>, <Feature: COUNT(transactions)>, <Feature: MAX(transactions.amount)>]
Also, you might only want to select features that are calculated at a certain depth. You can do this by using the get_depth function.
[46]:
features_only_depth_2 = []
for x in feature_defs:
    if x.get_depth() == 2:
        features_only_depth_2.append(x)

features_only_depth_2[0:5]
[<Feature: MAX(sessions.COUNT(transactions))>, <Feature: MAX(sessions.MEAN(transactions.amount))>, <Feature: MAX(sessions.MIN(transactions.amount))>, <Feature: MAX(sessions.NUM_UNIQUE(transactions.product_id))>, <Feature: MAX(sessions.SKEW(transactions.amount))>]
Finally, you might only want features that return a certain type. You can do this by using the column_schema attribute. For more information on working with column schemas, take a look at Transitioning from Variables to Woodwork.
[47]:
features_only_numeric = []
for x in feature_defs:
    if 'numeric' in x.column_schema.semantic_tags:
        features_only_numeric.append(x)

features_only_numeric[0:5]
[<Feature: COUNT(sessions)>, <Feature: NUM_UNIQUE(sessions.device)>, <Feature: COUNT(transactions)>, <Feature: MAX(transactions.amount)>, <Feature: MEAN(transactions.amount)>]
Once you have your specific feature list, you can use ft.calculate_feature_matrix to generate a feature matrix for only those features.
For our example, let’s use the features that have the string amount in their names.
[48]:
feature_matrix = ft.calculate_feature_matrix(entityset=es,
                                             features=features_with_amount)  # change to your specific feature list

feature_matrix.head()
5 rows × 37 columns
Above, notice how all the column names for our feature matrix contain the string amount.
Sometimes, you might want to create features that are conditioned on a second value before they are calculated. This extra filter is called a “where clause”. You can create these features using the interesting_values of a column.
If you have categorical columns in your EntitySet, you can use add_interesting_values. This function will find interesting values for your categorical columns, which can then be used to generate “where” clauses.
First, let’s create our EntitySet.
[49]:
Now we can add the interesting values for the categorical column.
[50]:
es.add_interesting_values()
Now we can run DFS with the where_primitives argument to define which primitives to apply with where clauses. In this case, let’s use the primitive count. For this to work, the primitive count must be present in both agg_primitives and where_primitives.
[51]:
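A call along the following lines (a sketch, assuming the mock customer EntitySet created above, with count supplied to both agg_primitives and where_primitives) produces the features discussed below:

feature_matrix, feature_defs = ft.dfs(entityset=es,
                                      target_dataframe_name="customers",
                                      agg_primitives=["count"],
                                      where_primitives=["count"],
                                      trans_primitives=[])
feature_matrix.head()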
We have now created some useful features. One example of a useful feature is the COUNT(sessions WHERE device = tablet). This feature tells us how many sessions a customer completed on a tablet.
[52]:
feature_matrix[["COUNT(sessions WHERE device = tablet)"]]
You might be curious to know the difference between the primitive groups. Let’s review the differences between transform, groupby transform, and aggregation primitives.
First, let’s create a simple EntitySet.
[53]:
import pandas as pd
import featuretools as ft

df = pd.DataFrame({
    "id": [1, 2, 3, 4, 5, 6],
    "time_index": pd.date_range("1/1/2019", periods=6, freq="D"),
    "group": ["a", "a", "a", "a", "a", "a"],
    "val": [5, 1, 10, 20, 6, 23],
})

es = ft.EntitySet()

es = es.add_dataframe(dataframe_name="observations",
                      dataframe=df,
                      index="id",
                      time_index="time_index")

es = es.normalize_dataframe(base_dataframe_name="observations",
                            new_dataframe_name="groups",
                            index="group")

es.plot()
After calling normalize_dataframe, the column “group” has the semantic tag “foreign_key” because it identifies another DataFrame. Alternatively, it could be set using the semantic_tags parameter when we first call es.add_dataframe().
The cum_sum primitive calculates the running sum of a list of numbers.
[54]:
from featuretools.primitives import CumSum

cum_sum = CumSum()
cum_sum([1, 2, 3, 4, 5]).tolist()
[1, 3, 6, 10, 15]
If we apply it using the trans_primitives argument, it will be calculated over the entire observations DataFrame, like this:
[55]:
feature_matrix, feature_defs = ft.dfs(target_dataframe_name="observations",
                                      entityset=es,
                                      agg_primitives=[],
                                      trans_primitives=["cum_sum"],
                                      groupby_trans_primitives=[])

feature_matrix
If we apply it using groupby_trans_primitives, then DFS will first group by any foreign key columns before applying the transform primitive. As a result, we get the cumulative sum by group.
[56]:
feature_matrix, feature_defs = ft.dfs(target_dataframe_name="observations",
                                      entityset=es,
                                      agg_primitives=[],
                                      trans_primitives=[],
                                      groupby_trans_primitives=["cum_sum"])

feature_matrix
Finally, there is also the aggregation primitive “sum”. If we use sum, it will calculate the sum for the group at the cutoff time for each row. Because we didn’t specify a cutoff time it will use all the data for each group for each row.
[57]:
feature_matrix, feature_defs = ft.dfs(target_dataframe_name="observations",
                                      entityset=es,
                                      agg_primitives=["sum"],
                                      trans_primitives=[],
                                      cutoff_time_in_index=True,
                                      groupby_trans_primitives=[])

feature_matrix
If we set the cutoff time of each row to be the time index, then use sum as an aggregation primitive, the result is the same as cum_sum. (Though the order is different in the displayed dataframe).
[58]:
cutoff_time = df[["id", "time_index"]] cutoff_time
[59]:
feature_matrix, feature_defs = ft.dfs(target_dataframe_name="observations",
                                      entityset=es,
                                      agg_primitives=["sum"],
                                      trans_primitives=[],
                                      groupby_trans_primitives=[],
                                      cutoff_time_in_index=True,
                                      cutoff_time=cutoff_time)

feature_matrix
You can call featuretools.list_primitives() to get all the primitives in Featuretools. It will return a DataFrame with the names, types, and descriptions of the primitives, and whether each primitive can be used with EntitySets created from Dask dataframes. You can also visit primitives.featurelabs.com to obtain a list of all available primitives.
[60]:
df_primitives = ft.list_primitives()
df_primitives.head()
[61]:
df_primitives.tail()
Support for Dask EntitySets is still in Beta - if you encounter any errors using this approach, please let us know by creating a new issue on Github.
When creating a feature matrix from a Dask EntitySet, only certain primitives can be used. Computation of certain features is quite expensive in a distributed environment, and as a result only a subset of Featuretools primitives are currently supported when using a Dask EntitySet.
The table returned by featuretools.list_primitives() will contain a column labeled dask_compatible. Any primitive that has a value of True in this column can be used safely when computing a feature matrix from a Dask EntitySet.
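For example, you could filter the primitives table down to the Dask-safe ones (a small sketch using the df_primitives DataFrame from the cells above):

# keep only the primitives that can be used with a Dask EntitySet
dask_safe = df_primitives[df_primitives['dask_compatible']]
dask_safe[['name', 'type']].head()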
There are a few primitives in Featuretools that make time-based calculations. These include TimeSince, TimeSincePrevious, TimeSinceLast, and TimeSinceFirst.
You can change the units from the default seconds to any valid time unit, by doing the following:
[62]:
from featuretools.primitives import TimeSince, TimeSincePrevious, TimeSinceLast, TimeSinceFirst

time_since = TimeSince(unit="minutes")
time_since_previous = TimeSincePrevious(unit="hours")
time_since_last = TimeSinceLast(unit="days")
time_since_first = TimeSinceFirst(unit="years")

es = ft.demo.load_mock_customer(return_entityset=True)

feature_matrix, feature_defs = ft.dfs(entityset=es,
                                      target_dataframe_name="customers",
                                      agg_primitives=[time_since_last, time_since_first],
                                      trans_primitives=[time_since, time_since_previous])
Above, we changed the units to the following:
minutes for TimeSince
hours for TimeSincePrevious
days for TimeSinceLast
years for TimeSinceFirst
Now we can see that our feature matrix contains multiple features where the units for the TimeSince primitives are changed.
[63]:
There are now features where the time unit is different from the default of seconds, such as TIME_SINCE_LAST(sessions.session_start, unit=days) and TIME_SINCE_FIRST(sessions.session_start, unit=years).
You might be wondering how to properly use your train & test data with Featuretools, and sklearn’s train_test_split. There are a few things you must do to ensure accuracy with this workflow.
Let’s imagine we have a DataFrame for our train data, with the labels.
[64]:
train_data = pd.DataFrame({"customer_id": [1, 2, 3, 4, 5],
                           "age": [20, 25, 55, 22, 35],
                           "gender": ["f", "m", "m", "m", "m"],
                           "signup_date": pd.date_range('2010-01-01 01:41:50', periods=5, freq='25min'),
                           "labels": [False, True, True, False, False]})

train_data.head()
Now we can create our EntitySet for the train data, and create our features. To prevent label leakage, we will use cutoff times (see earlier question).
[65]:
es_train_data = ft.EntitySet(id="customer_data")

es_train_data = es_train_data.add_dataframe(dataframe_name="customers",
                                            dataframe=train_data,
                                            index="customer_id")

cutoff_times = pd.DataFrame({"customer_id": [1, 2, 3, 4, 5],
                             "time": pd.date_range('2014-01-01 01:41:50', periods=5, freq='25min')})

feature_matrix_train, features = ft.dfs(entityset=es_train_data,
                                        target_dataframe_name="customers",
                                        cutoff_time=cutoff_times,
                                        cutoff_time_in_index=True)

feature_matrix_train.head()
We will also encode our feature matrix to make it compatible with machine learning algorithms.
[66]:
feature_matrix_train_enc, feature_enc = ft.encode_features(feature_matrix_train, features)
feature_matrix_train_enc.head()
[67]:
from sklearn.model_selection import train_test_split

X = feature_matrix_train_enc.drop(['labels'], axis=1)
y = feature_matrix_train_enc['labels']

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25)
Now you can use the encoded feature matrix with sklearn’s train_test_split. This will allow you to train your model, and tune your parameters.
You might be wondering what happens when categorical columns are encoded with your training and testing data. You might be curious to know what happens if the train data has a categorical value that is not present in the testing data.
Let’s explore a simple example to see what happens during the encoding process.
[68]:
train_data = pd.DataFrame({
    "customer_id": [1, 2, 3, 4, 5],
    "product_purchased": ["coke zero", "car", "toothpaste", "coke zero", "car"],
})

es_train = ft.EntitySet(id="customer_data")

es_train = es_train.add_dataframe(
    dataframe_name="customers",
    dataframe=train_data,
    index="customer_id",
    logical_types={'product_purchased': ww.logical_types.Categorical},
)

feature_matrix_train, features = ft.dfs(entityset=es_train, target_dataframe_name='customers')
feature_matrix_train
We will use ft.encode_features to properly encode the product_purchased column.
[69]:
feature_matrix_train_encoded, features_encoded = ft.encode_features(feature_matrix_train, features)
feature_matrix_train_encoded.head()
Now let’s imagine we have some test data that doesn’t contain one of the categorical values (toothpaste). Also, the test data has a value that wasn’t present in the train data (water).
[70]:
test_data = pd.DataFrame({"customer_id": [6, 7, 8, 9, 10],
                          "product_purchased": ["coke zero", "car", "coke zero", "coke zero", "water"]})

es_test = ft.EntitySet(id="customer_data")

es_test = es_test.add_dataframe(dataframe_name="customers",
                                dataframe=test_data,
                                index="customer_id")

feature_matrix_test = ft.calculate_feature_matrix(entityset=es_test,
                                                  features=features_encoded)

feature_matrix_test.head()
As seen above, we were able to successfully handle the encoding and deal with the following complications:
toothpaste was present in the training data but not present in the testing data
water was present in the test data but not present in the training data
You may be trying to create your EntitySet and run into this error:
IndexError: Index column must be unique
This is because each dataframe in your EntitySet needs a unique index.
[71]:
product_df = pd.DataFrame({'id': [1, 2, 3, 4, 4],
                           'rating': [3.5, 4.0, 4.5, 1.5, 5.0]})

product_df
Notice how the id column contains a duplicate value of 4. If you try to add this DataFrame to the EntitySet, you will run into the following error.
es = ft.EntitySet(id="product_data") es = es.add_dataframe(dataframe_name="products", dataframe=product_df, index="id")
---------------------------------------------------------------------------
IndexError                                Traceback (most recent call last)
<ipython-input-78-854fbaf207f8> in <module>
      1 es = ft.EntitySet(id="product_data")
----> 2 es = es.add_dataframe(dataframe_name="products",
      3                       dataframe=product_df,
      4                       index="id")

~/Code/featuretools/featuretools/entityset/entityset.py in add_dataframe(self, dataframe, dataframe_name, index, logical_types, semantic_tags, make_index, time_index, secondary_time_index, already_sorted)
    625         index_was_created, index, dataframe = _get_or_create_index(index, make_index, dataframe)
    626
--> 627         dataframe.ww.init(name=dataframe_name,
    628                           index=index,
    629                           time_index=time_index,

/usr/local/Caskroom/miniconda/base/envs/featuretools/lib/python3.8/site-packages/woodwork/table_accessor.py in init(self, index, time_index, logical_types, already_sorted, schema, validate, use_standard_tags, **kwargs)
     94         """
     95         if validate:
---> 96             _validate_accessor_params(self._dataframe, index, time_index, logical_types, schema, use_standard_tags)
     97         if schema is not None:
     98             self._schema = schema

/usr/local/Caskroom/miniconda/base/envs/featuretools/lib/python3.8/site-packages/woodwork/table_accessor.py in _validate_accessor_params(dataframe, index, time_index, logical_types, schema, use_standard_tags)
    877     # We ignore these parameters if a schema is passed
    878     if index is not None:
--> 879         _check_index(dataframe, index)
    880     if logical_types:
    881         _check_logical_types(dataframe.columns, logical_types)

/usr/local/Caskroom/miniconda/base/envs/featuretools/lib/python3.8/site-packages/woodwork/table_accessor.py in _check_index(dataframe, index)
    903     # User specifies an index that is in the dataframe but not unique
    904     # Does not check for Dask as Dask does not support is_unique
--> 905         raise IndexError('Index column must be unique')
    906
    907

IndexError: Index column must be unique
To fix the above error, you can do one of the following solutions:
Solution #1 - You can create a unique index on your DataFrame.
[72]:
product_df = pd.DataFrame({'id': [1, 2, 3, 4, 5],
                           'rating': [3.5, 4.0, 4.5, 1.5, 5.0]})

product_df
Notice how we now have a unique index column called id.
[73]:
es = es.add_dataframe(dataframe_name="products", dataframe=product_df, index="id") es
Entityset: transactions
  DataFrames:
    transactions [Rows: 500, Columns: 6]
    products [Rows: 5, Columns: 2]
    sessions [Rows: 35, Columns: 5]
    customers [Rows: 5, Columns: 5]
  Relationships:
    transactions.product_id -> products.product_id
    transactions.session_id -> sessions.session_id
    sessions.customer_id -> customers.customer_id
As seen above, we can now add the DataFrame to our EntitySet without an error by creating a unique index in our DataFrame.
Solution #2 - Set make_index to True in your call to add_dataframe to create a new index on that data. make_index creates a unique index for each row based on its position relative to all the other rows.
[74]:
product_df = pd.DataFrame({'id': [1, 2, 3, 4, 4],
                           'rating': [3.5, 4.0, 4.5, 1.5, 5.0]})

es = ft.EntitySet(id="product_data")

es = es.add_dataframe(dataframe_name="products",
                      dataframe=product_df,
                      index="product_id",
                      make_index=True)

es['products']
As seen above, we created our DataFrame for our EntitySet without an error using the make_index argument.
If you are using a training window, and you haven’t set a last_time_index for your dataframe, you will get this warning. The training window attribute in Featuretools limits the amount of past data that can be used while calculating a particular feature vector.
You can add the last_time_index to all dataframes automatically by calling your_entityset.add_last_time_indexes() after you create your EntitySet. This will remove the warning.
[75]:
es = ft.demo.load_mock_customer(return_entityset=True)
es.add_last_time_indexes()
Now we can run DFS without getting the warning.
[76]:
cutoff_times = pd.DataFrame()
cutoff_times['customer_id'] = [1, 2, 3, 1]
cutoff_times['time'] = pd.to_datetime(['2014-1-1 04:00',
                                       '2014-1-1 05:00',
                                       '2014-1-1 06:00',
                                       '2014-1-1 08:00'])
cutoff_times['label'] = [True, True, False, True]

feature_matrix, feature_defs = ft.dfs(entityset=es,
                                      target_dataframe_name="customers",
                                      cutoff_time=cutoff_times,
                                      cutoff_time_in_index=True,
                                      training_window="1 hour")
The time_index is when the instance was first known.
The last_time_index is when the instance appears for the last time.
For example, a customer’s session has multiple transactions which can happen at different points in time. If we are trying to count the number of sessions a user has in a given time period, we often want to count all the sessions that had any transaction during the training window. To accomplish this, we need to not only know when a session starts (time_index), but also when it ends (last_time_index). The last time that an instance appears in the data is stored as the last_time_index of a dataframe.
Once the last_time_index has been set, Featuretools will check to see if the last_time_index is after the start of the training window. That, combined with the cutoff time, allows DFS to discover which data is relevant for a given training window.
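As a plain-pandas illustration of the idea (not how Featuretools computes it internally), each session's last time index would be the timestamp of its most recent transaction:

data = ft.demo.load_mock_customer()
transactions_df = data["transactions"]

# the latest transaction time per session corresponds to that session's last_time_index
transactions_df.groupby("session_id")["transaction_time"].max().head()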
Google Colab, by default, has Featuretools 0.4.1 installed. You may run into issues following our newest guides or latest documentation while using an older version of Featuretools. Therefore, we suggest you upgrade to the latest Featuretools version by running the following in your Google Colab notebook:
!pip install -U featuretools
You may need to restart the runtime via Runtime -> Restart Runtime. You can check the installed Featuretools version by doing the following:
import featuretools as ft
print(ft.__version__)
You should see a version greater than 0.4.1.