NOTICE
The upcoming release of Featuretools 1.0.0 contains several breaking changes. Users are encouraged to test this version prior to release by installing from GitHub:
pip install https://github.com/alteryx/featuretools/archive/woodwork-integration.zip
For details on migrating to the new version, refer to Transitioning to Featuretools Version 1.0. Please report any issues in the Featuretools GitHub repo or by messaging in Alteryx Open Source Slack.
Featuretools relies on consistent typing across the creation of EntitySets, Primitives, Features, and feature matrices. Previously, Featuretools used its own type system built around objects called Variables. Going forward, Featuretools will use an external data typing library for its typing: Woodwork.
Understanding the Woodwork types that exist and how Featuretools uses Woodwork’s type system will allow users to:

- build EntitySets that best represent their data
- understand the possible input and return types for Featuretools’ Primitives
- understand what features will get generated from a given set of data and primitives
Read the Understanding Woodwork Logical Types and Semantic Tags guide for an in-depth walkthrough of the available Woodwork types that are outlined below.
For users that are familiar with the old Variable objects, the Transitioning to Featuretools Version 1.0 guide will be useful for converting Variable types to Woodwork types.
Physical types define how the data in a Woodwork DataFrame is stored on disk or in memory. You might also see the physical type for a column referred to as the column’s dtype.
Knowing a Woodwork DataFrame’s physical types is important because Pandas, Dask, and Koalas rely on these types when performing DataFrame operations. Each Woodwork LogicalType class has a single physical type associated with it.
Logical types add additional information about how data should be interpreted or parsed beyond what can be contained in a physical type. In fact, multiple logical types have the same physical type, each imparting a different meaning that’s not contained in the physical type alone.
In Featuretools, a column’s logical type informs how data is read into an EntitySet and how it gets used down the line in Deep Feature Synthesis.
Woodwork provides many different logical types, which can be seen with the list_logical_types function.
[1]:
import featuretools as ft
ft.list_logical_types()
Featuretools will perform type inference to assign logical types to the data in EntitySets if none are provided, but it is also possible to specify which logical types should be set for any column (provided that the data in that column is compatible with the logical type).
To learn more about how logical types are used in EntitySets, see the Creating EntitySets guide.
To learn more about setting logical types directly on a DataFrame, see the Woodwork guide on working with Logical Types.
Semantic tags provide additional information to columns about the meaning or potential uses of data. Columns can have many or no semantic tags. Some tags are added by Woodwork, some are added by Featuretools, and users can add additional tags as they see fit.
To learn more about setting semantic tags directly on a DataFrame, see the Woodwork guide on working with Semantic Tags.
Woodwork will add certain semantic tags to columns at initialization. These can be standard tags associated with particular logical types, or index tags. There are also tags that users can add to suggest a meaning for columns in Woodwork.
To get a list of these tags, you can use the list_semantic_tags function.
[2]:
ft.list_semantic_tags()
Above we see the semantic tags that are defined within Woodwork. These tags inform how Featuretools is able to interpret data, an example of which can be seen in the Age primitive, which requires that the date_of_birth semantic tag be present on a column.
The date_of_birth tag will not get automatically added by Woodwork, so in order for Featuretools to be able to use the Age primitive, the date_of_birth tag must be manually added to any columns to which it applies.
Just like Woodwork specifies semantic tags internally, Featuretools also defines a few tags of its own that allow the full set of Features to be generated. These tags have specific meanings when they are present on a column.
'last_time_index' - added by Featuretools to the last time index column of a DataFrame. Indicates that this column has been created by Featuretools.
'foreign_key' - used to indicate that this column is the child column of a relationship, meaning that this column is related to a corresponding index column of another dataframe in the EntitySet.
Now that we’ve described the elements that make up Woodwork’s type system, let’s see them in action in Featuretools.
For more information on building EntitySets using Woodwork, see the EntitySet guide.
Let’s look at the Woodwork typing information as it’s stored in a demo EntitySet of retail data:
[3]:
es = ft.demo.load_retail()
es
Entityset: demo_retail_data
  DataFrames:
    order_products [Rows: 401604, Columns: 8]
    products [Rows: 3684, Columns: 4]
    orders [Rows: 22190, Columns: 6]
    customers [Rows: 4372, Columns: 3]
  Relationships:
    order_products.product_id -> products.product_id
    order_products.order_id -> orders.order_id
    orders.customer_name -> customers.customer_name
Woodwork typing information is not stored in the EntitySet object, but rather is stored in the individual DataFrames that make up the EntitySet. To look at the Woodwork typing information, we first select a single DataFrame from the EntitySet, and then access the Woodwork information via the ww namespace:
[4]:
df = es['products']
df.head()
[5]:
df.ww
Notice how the three columns showing this DataFrame’s typing information are the three elements of typing information outlined at the beginning of this guide. To reiterate: By defining physical types, logical types, and semantic tags for each column in a DataFrame, we’ve defined a DataFrame’s Woodwork schema, and with it, we can gain an understanding of the contents of each column.
This column-specific typing information that exists for every column in every DataFrame in an EntitySet is an integral part of Deep Feature Synthesis’ ability to generate features for an EntitySet.
As the units of computation in Featuretools, Primitives need to be able to specify the input types that they allow as well as have a predictable return type. For an in-depth explanation of Primitives in Featuretools, see the Feature Primitives guide. Here, we’ll look at how the Woodwork types come together into a ColumnSchema object to describe Primitive input and return types.
Below is a Woodwork ColumnSchema that we’ve obtained from the 'product_id' column in the products DataFrame in the retail EntitySet.
[6]:
products_df = es['products']
product_ids_series = products_df.ww['product_id']
column_schema = product_ids_series.ww.schema
column_schema
<ColumnSchema (Logical Type = Categorical) (Semantic Tags = ['index'])>
This combination of logical type and semantic tag typing information is a ColumnSchema. In the case above, the ColumnSchema describes the type definition for a single column of data.
Notice that there is no physical type in a ColumnSchema. This is because a ColumnSchema is a collection of Woodwork types that doesn’t have any data tied to it and therefore has no physical representation. Because a ColumnSchema object is not tied to any data, it can also be used to describe a type space into which other columns may or may not fall.
This flexibility of the ColumnSchema class allows ColumnSchema objects to be used both as type definitions for every column in an EntitySet as well as input and return type spaces for every Primitive in Featuretools.
Let’s look at a different column in a different DataFrame to see how this works:
[7]:
order_products_df = es['order_products']
order_products_df.head()
[8]:
quantity_series = order_products_df.ww['quantity']
column_schema = quantity_series.ww.schema
column_schema
<ColumnSchema (Logical Type = Integer) (Semantic Tags = ['numeric'])>
The ColumnSchema above has been pulled from the 'quantity' column in the order_products DataFrame in the retail EntitySet. This is a type definition.
If we look at the Woodwork typing information for the order_products DataFrame, we can see that there are several columns that will have similar ColumnSchema type definitions. If we wanted to describe subsets of those columns, we could define several ColumnSchema type spaces.
[9]:
es['order_products'].ww
Below are several ColumnSchemas that all would include our quantity column, but each of them describes a different type space. These ColumnSchemas get more restrictive as we go down:
No restrictions have been placed; any column falls into this definition. This would include the whole DataFrame.
[10]:
from woodwork.column_schema import ColumnSchema
ColumnSchema()
<ColumnSchema>
An example of a Primitive with this ColumnSchema as its input type is the IsNull transform primitive.
Only columns with the numeric tag apply. This can include columns with the Double, Integer, and Age logical types. It will not include the index column, which, despite containing integers, has had its standard tags replaced by the 'index' tag.
[11]:
ColumnSchema(semantic_tags={'numeric'})
<ColumnSchema (Semantic Tags = ['numeric'])>
[12]:
df = es['order_products'].ww.select(include='numeric')
df.ww
An example of a Primitive with this ColumnSchema as its input type is the Mean aggregation primitive.
Only columns with the logical type Integer are included in this definition. It does not require the numeric tag, so an index column (which has its standard tags removed) would still apply.
[13]:
from woodwork.logical_types import Integer
ColumnSchema(logical_type=Integer)
<ColumnSchema (Logical Type = Integer)>
[14]:
df = es['order_products'].ww.select(include='Integer')
df.ww
The column must have logical type Integer and have the numeric semantic tag, excluding index columns.
[15]:
ColumnSchema(logical_type=Integer, semantic_tags={'numeric'})
[16]:
df = es['order_products'].ww.select(include='numeric')
df = df.ww.select(include='Integer')
df.ww
In this way, a ColumnSchema can define a type space under which columns in a Woodwork DataFrame can fall. This is how Featuretools determines which columns in a DataFrame are valid for a Primitive in building Features during DFS.
Each Primitive has input_types and a return_type that are described by a Woodwork ColumnSchema. Every DataFrame in an EntitySet has Woodwork initialized on it. This means that when an EntitySet is passed into DFS, Featuretools can select the relevant columns in the DataFrame that are valid for the Primitive’s input_types. We then get a Feature that has a column_schema property that indicates what that Feature’s typing definition is in a way that lets DFS stack features on top of one another.
In this way, Featuretools is able to leverage the base unit of Woodwork typing information, the ColumnSchema, and use it in concert with an EntitySet of Woodwork DataFrames in order to build Features with Deep Feature Synthesis.