NOTICE
The upcoming release of Featuretools 1.0.0 contains several breaking changes. Users are encouraged to test this version prior to release by installing from GitHub:
pip install https://github.com/alteryx/featuretools/archive/woodwork-integration.zip
For details on migrating to the new version, refer to Transitioning to Featuretools Version 1.0. Please report any issues in the Featuretools GitHub repo or by messaging in Alteryx Open Source Slack.
Feature primitives are the building blocks of Featuretools. They define individual computations that can be applied to raw datasets to create new features. Because primitives only constrain the input and output data types, they can be applied across datasets and stacked to create new calculations.
The space of potential functions that humans use to create a feature is expansive. By breaking common feature engineering calculations down into primitive components, we are able to capture the underlying structure of the features humans create today.
Because a primitive constrains only the input and output data types, calculations known in one domain can be transferred to another. Consider a feature often calculated by data scientists for transactional or event log data: the average time between events. This feature is incredibly valuable in predicting fraudulent behavior or future customer engagement.
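The underlying calculation can be sketched directly in pandas (a minimal, hypothetical event log; this is illustrative and not Featuretools code):

```python
import pandas as pd

# Hypothetical event log: timestamps of one customer's transactions.
events = pd.Series(pd.to_datetime([
    "2021-01-01 10:00:00",
    "2021-01-01 10:05:00",
    "2021-01-01 10:15:00",
]))

# "time_since_previous": seconds elapsed since the prior event
# (the first event has no predecessor, so its gap is NaN).
gaps = events.diff().dt.total_seconds()

# "mean": aggregate the gaps into a single value.
avg_time_between = gaps.mean()
print(avg_time_between)  # 450.0 (gaps of 300s and 600s)
```

Stacking the two steps reproduces "average time between events" without writing a bespoke function for it.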
DFS achieves the same feature by stacking two primitives: "time_since_previous" and "mean".
[1]:
import featuretools as ft

es = ft.demo.load_mock_customer(return_entityset=True)

feature_defs = ft.dfs(
    entityset=es,
    target_entity="customers",
    agg_primitives=["mean"],
    trans_primitives=["time_since_previous"],
    features_only=True,
)
feature_defs
[<Feature: zip_code>,
 <Feature: MEAN(transactions.amount)>,
 <Feature: TIME_SINCE_PREVIOUS(join_date)>,
 <Feature: MEAN(sessions.MEAN(transactions.amount))>,
 <Feature: MEAN(sessions.TIME_SINCE_PREVIOUS(session_start))>]
Note
When dfs is called with features_only=True, only feature definitions are returned as output. By default this parameter is set to False. It can be used to quickly inspect the feature definitions before spending time calculating the feature matrix.
A second advantage of primitives is that they can be used to quickly enumerate many interesting features in a parameterized way. This is used by Deep Feature Synthesis to get several different ways of summarizing the time since the previous event.
[2]:
feature_matrix, feature_defs = ft.dfs(
    entityset=es,
    target_entity="customers",
    agg_primitives=["mean", "max", "min", "std", "skew"],
    trans_primitives=["time_since_previous"],
)
feature_matrix[[
    "MEAN(sessions.TIME_SINCE_PREVIOUS(session_start))",
    "MAX(sessions.TIME_SINCE_PREVIOUS(session_start))",
    "MIN(sessions.TIME_SINCE_PREVIOUS(session_start))",
    "STD(sessions.TIME_SINCE_PREVIOUS(session_start))",
    "SKEW(sessions.TIME_SINCE_PREVIOUS(session_start))",
]]
In the example above, we use two types of primitives.
Aggregation primitives: These primitives take related instances as an input and output a single value. They are applied across a parent-child relationship in an entity set, e.g. "count", "sum", "avg_time_between".
Transform primitives: These primitives take one or more variables from an entity as an input and output a new variable for that entity. They are applied to a single entity, e.g. "hour", "time_since_previous", "absolute".
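The distinction can be sketched in plain pandas (hypothetical sessions/transactions data; the column names below imitate Featuretools naming for illustration only):

```python
import pandas as pd

# Hypothetical child table: transactions, each belonging to a session.
transactions = pd.DataFrame({
    "session_id": [1, 1, 2],
    "amount": [10.0, 20.0, 5.0],
    "time": pd.to_datetime([
        "2021-01-01 09:30:00",
        "2021-01-01 14:45:00",
        "2021-01-02 08:10:00",
    ]),
})

# Transform primitive ("hour"): one new value per row of the same table.
transactions["HOUR(time)"] = transactions["time"].dt.hour

# Aggregation primitive ("sum"): one value per parent instance, computed
# across the parent-child (session-transaction) relationship.
sums = transactions.groupby("session_id")["amount"].sum()
print(transactions["HOUR(time)"].tolist())  # [9, 14, 8]
print(sums.tolist())                        # [30.0, 5.0]
```

A transform keeps the shape of the child table; an aggregation collapses the children of each parent into a single value.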
The above graphs were generated using the graph_feature function. These feature lineage graphs help to visually show how primitives were stacked to generate a feature.
For a DataFrame that lists and describes each built-in primitive in Featuretools, call ft.list_primitives(). In addition, a list of all available primitives can be obtained by visiting primitives.featurelabs.com.
[3]:
ft.list_primitives().head(5)
The library of primitives in Featuretools is constantly expanding. Users can define their own primitive using the APIs below. To define a primitive, a user will:

Specify the type of primitive: Aggregation or Transform
Define the input and output data types
Write a function in Python to do the calculation
Annotate with attributes to constrain how it is applied
Once a primitive is defined, it can stack with existing primitives to generate complex patterns. This enables primitives known to be important for one domain to be automatically transferred to another.
[4]:
from featuretools.primitives import AggregationPrimitive, TransformPrimitive
from featuretools.tests.testing_utils import make_ecommerce_entityset
from featuretools.variable_types import Datetime, NaturalLanguage, Numeric

import pandas as pd
[5]:
class Absolute(TransformPrimitive):
    name = 'absolute'
    input_types = [Numeric]
    return_type = Numeric

    def get_function(self):
        def absolute(column):
            return abs(column)
        return absolute
Above, we created a new transform primitive that can be used with Deep Feature Synthesis by deriving a new primitive class using TransformPrimitive as a base and overriding get_function() to return a function that calculates the feature. Additionally, we set the input data types that the primitive applies to and the return data type.
Similarly, we can make a new aggregation primitive using AggregationPrimitive.
[6]:
class Maximum(AggregationPrimitive):
    name = 'maximum'
    input_types = [Numeric]
    return_type = Numeric

    def get_function(self):
        def maximum(column):
            return max(column)
        return maximum
Because we defined an aggregation primitive, the function takes in a list of values but only returns one.
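The contrast between the two kinds of primitive functions can be sketched in plain Python (standalone versions of the two functions above, outside any primitive class):

```python
# Standalone sketches of the two primitive functions defined above.
def absolute(column):
    # Transform: one output value per input value.
    return [abs(v) for v in column]

def maximum(column):
    # Aggregation: many input values reduced to a single output value.
    return max(column)

values = [-3, 1, -7, 5]
print(absolute(values))  # element-wise: [3, 1, 7, 5]
print(maximum(values))   # single value: 5
```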
Now that we’ve defined two primitives, we can use them with the dfs function as if they were built-in primitives.
[7]:
feature_matrix, feature_defs = ft.dfs(
    entityset=es,
    target_entity="sessions",
    agg_primitives=[Maximum],
    trans_primitives=[Absolute],
    max_depth=2,
)
feature_matrix.head(5)[[
    "customers.MAXIMUM(transactions.amount)",
    "MAXIMUM(transactions.ABSOLUTE(amount))",
]]
Here we define a transform primitive, WordCount, which counts the number of words in each row of an input and returns a list of the counts.
[8]:
class WordCount(TransformPrimitive):
    '''Counts the number of words in each row of the column.

    Returns a list of the counts for each row.
    '''
    name = 'word_count'
    input_types = [NaturalLanguage]
    return_type = Numeric

    def get_function(self):
        def word_count(column):
            word_counts = []
            for value in column:
                words = value.split(None)
                word_counts.append(len(words))
            return word_counts
        return word_count
[9]:
es = make_ecommerce_entityset()

feature_matrix, features = ft.dfs(
    entityset=es,
    target_entity="sessions",
    agg_primitives=["sum", "mean", "std"],
    trans_primitives=[WordCount],
)
feature_matrix[[
    "customers.WORD_COUNT(favorite_quote)",
    "STD(log.WORD_COUNT(comments))",
    "SUM(log.WORD_COUNT(comments))",
    "MEAN(log.WORD_COUNT(comments))",
]]
By adding some aggregation primitives as well, Deep Feature Synthesis was able to make four new features from one new primitive.
If a primitive requires multiple features as input, input_types has multiple elements; e.g., [Numeric, Numeric] would mean the primitive requires two Numeric features as input. Below is an example of a primitive that has multiple input features.
[10]:
class MeanSunday(AggregationPrimitive):
    '''Finds the mean of non-null values of a feature that occurred on Sundays.'''
    name = 'mean_sunday'
    input_types = [Numeric, Datetime]
    return_type = Numeric

    def get_function(self):
        def mean_sunday(numeric, datetime):
            days = pd.DatetimeIndex(datetime).weekday.values
            df = pd.DataFrame({'numeric': numeric, 'time': days})
            return df[df['time'] == 6]['numeric'].mean()
        return mean_sunday
[11]:
feature_matrix, features = ft.dfs(
    entityset=es,
    target_entity="sessions",
    agg_primitives=[MeanSunday],
    trans_primitives=[],
    max_depth=1,
)
feature_matrix[[
    "MEAN_SUNDAY(log.value, datetime)",
    "MEAN_SUNDAY(log.value_2, datetime)",
]]