{ "cells": [ { "cell_type": "markdown", "id": "87378ca2", "metadata": {}, "source": [ "# Using Koalas EntitySets (BETA)" ] }, { "cell_type": "raw", "id": "ac77ea82", "metadata": { "raw_mimetype": "text/restructuredtext" }, "source": [ ".. note::\n", " Support for Koalas EntitySets is still in Beta. While the key functionality has been implemented, development is ongoing to add the remaining functionality.\n", " \n", " All planned improvements to the Featuretools/Koalas integration are `documented on Github `_. If you see an open issue that is important for your application, please let us know by upvoting or commenting on the issue. If you encounter any errors using Koalas dataframes in EntitySets, or find missing functionality that does not yet have an open issue, please create a `new issue on Github `_." ] }, { "cell_type": "markdown", "id": "59241b3b", "metadata": {}, "source": [ "Creating a feature matrix from a very large dataset can be problematic if the underlying pandas dataframes that make up the EntitySet cannot easily fit in memory. To help get around this issue, Featuretools supports creating ``EntitySet`` objects from Koalas dataframes. A Koalas ``EntitySet`` can then be passed to ``featuretools.dfs`` or ``featuretools.calculate_feature_matrix`` to create a feature matrix, which will be returned as a Koalas dataframe. In addition to working on larger than memory datasets, this approach also allows users to take advantage of the parallel and distributed processing capabilities offered by Koalas and Spark.\n", "\n", "This guide will provide an overview of how to create a Koalas ``EntitySet`` and then generate a feature matrix from it. If you are already familiar with creating a feature matrix starting from pandas dataframes, this process will seem quite familiar, as there are no differences in the process. There are, however, some limitations when using Koalas dataframes, and those limitations are reviewed in more detail below.\n", "\n", "## Creating EntitySets\n", "\n", "Koalas ``EntitySets`` require Koalas and PySpark. Both can be installed directly with ``pip install featuretools[koalas]``. Java is also required for PySpark and may need to be installed, see [the Spark documentation](https://spark.apache.org/docs/latest/index.html) for more details. We will create a very small Koalas dataframe for this example. Koalas dataframes can also be created from pandas dataframes, Spark dataframes, or read in directly from a file." ] }, { "cell_type": "code", "execution_count": null, "id": "20acbac9", "metadata": { "nbsphinx": "hidden" }, "outputs": [], "source": [ "import pyspark.sql as sql\n", "spark = sql.SparkSession.builder.master('local[2]').config(\"spark.driver.extraJavaOptions\", \"-Dio.netty.tryReflectionSetAccessible=True\").config(\"spark.sql.shuffle.partitions\", \"2\").config(\"spark.driver.bindAddress\", \"127.0.0.1\").getOrCreate() " ] }, { "cell_type": "code", "execution_count": null, "id": "af6545db", "metadata": {}, "outputs": [], "source": [ "import featuretools as ft\n", "import databricks.koalas as ks\n", "id = [0, 1, 2, 3, 4]\n", "values = [12, -35, 14, 103, -51]\n", "koalas_df = ks.DataFrame({\"id\": id, \"values\": values})\n", "koalas_df" ] }, { "cell_type": "markdown", "id": "3c27b229", "metadata": {}, "source": [ "Now that we have our Koalas dataframe, we can start to create the ``EntitySet``. Inferring Woodwork logical types for the columns in a Koalas dataframe can be computationally expensive. 
{ "cell_type": "markdown", "id": "3c27b229", "metadata": {}, "source": [ "Now that we have our Koalas dataframe, we can start to create the ``EntitySet``. Inferring Woodwork logical types for the columns in a Koalas dataframe can be computationally expensive. To avoid this expense, logical type inference can be skipped by supplying a dictionary of logical types using the `logical_types` parameter when calling `es.add_dataframe()`. Logical types can be specified as Woodwork LogicalType classes, or their equivalent string representation. For more information on using Woodwork types, refer to the [Woodwork Typing in Featuretools](../getting_started/woodwork_types.ipynb) guide.\n", "\n", "Aside from supplying the logical types, the rest of the process of creating an `EntitySet` is the same as if we were using pandas DataFrames." ] },
{ "cell_type": "code", "execution_count": null, "id": "b5ca0be3", "metadata": {}, "outputs": [], "source": [ "from woodwork.logical_types import Double, Integer\n", "\n", "es = ft.EntitySet(id=\"koalas_es\")\n", "es = es.add_dataframe(\n", "    dataframe_name=\"koalas_input_df\",\n", "    dataframe=koalas_df,\n", "    index=\"id\",\n", "    logical_types={\"id\": Integer, \"values\": Double})\n", "\n", "es" ] },
{ "cell_type": "markdown", "id": "9d1b8525", "metadata": {}, "source": [ "## Running DFS\n", "\n", "We can pass the ``EntitySet`` we created above to ``featuretools.dfs`` in order to create a feature matrix. If the ``EntitySet`` we pass to ``dfs`` is made of Koalas dataframes, the feature matrix we get back will be a Koalas dataframe." ] },
{ "cell_type": "code", "execution_count": null, "id": "bc48d985", "metadata": {}, "outputs": [], "source": [ "feature_matrix, features = ft.dfs(entityset=es,\n", "                                  target_dataframe_name=\"koalas_input_df\",\n", "                                  trans_primitives=[\"negate\"],\n", "                                  max_depth=1)\n", "feature_matrix" ] },
{ "cell_type": "markdown", "id": "a4f8ccfd", "metadata": {}, "source": [ "This feature matrix can be saved to disk or converted to a pandas dataframe and brought into memory, using the appropriate Koalas dataframe methods." ] },
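{ "cell_type": "markdown", "id": "5a6b7c8d", "metadata": {}, "source": [ "For example, ``to_pandas()`` collects the feature matrix into memory as a pandas dataframe, while ``to_csv()`` writes it out through Spark as a directory of partitioned files. This is a minimal sketch; the output path below is a placeholder." ] },
{ "cell_type": "code", "execution_count": null, "id": "8d7c6b5a", "metadata": {}, "outputs": [], "source": [ "# Collect the Koalas feature matrix into memory as a pandas dataframe\n", "feature_matrix_pd = feature_matrix.to_pandas()\n", "\n", "# Alternatively, write the matrix to disk without collecting it;\n", "# \"/tmp/koalas_feature_matrix\" is a placeholder path\n", "# feature_matrix.to_csv(\"/tmp/koalas_feature_matrix\")\n", "\n", "feature_matrix_pd.head()" ] },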
{ "cell_type": "markdown", "id": "a4f8ccfd2", "metadata": {}, "source": [ "While this is a simple example to illustrate the process of using Koalas dataframes with Featuretools, this process will also work with an ``EntitySet`` containing multiple dataframes, as well as with aggregation primitives.\n", "\n", "## Limitations\n", "\n", "The key functionality of Featuretools is available for use with a Koalas ``EntitySet``, and work is ongoing to add the remaining functionality that is available when using a pandas ``EntitySet``. There are, however, some limitations to be aware of when creating a Koalas ``EntitySet`` and then using it to generate a feature matrix. The most significant limitations are reviewed in more detail in this section." ] },
{ "cell_type": "raw", "id": "71efc4dc", "metadata": { "raw_mimetype": "text/restructuredtext" }, "source": [ ".. note::\n", "    If the limitations of using a Koalas ``EntitySet`` are problematic for your problem, you may still be able to compute a larger-than-memory feature matrix by partitioning your data as described in :doc:`performance`." ] },
{ "cell_type": "markdown", "id": "854c0156", "metadata": {}, "source": [ "### Supported Primitives\n", "\n", "When creating a feature matrix from a Koalas ``EntitySet``, only certain primitives can be used. Primitives that rely on the order of the entire dataframe or require an entire column for computation are currently not supported when using a Koalas ``EntitySet``. Multivariable and time-dependent aggregation primitives are also not currently supported.\n", "\n", "To obtain a list of the primitives that can be used with a Koalas ``EntitySet``, you can call ``featuretools.list_primitives()``. This will return a table of all primitives. Any primitive that can be used with a Koalas ``EntitySet`` will have a value of ``True`` in the ``koalas_compatible`` column." ] },
{ "cell_type": "code", "execution_count": null, "id": "bed6a8c8", "metadata": {}, "outputs": [], "source": [ "primitives_df = ft.list_primitives()\n", "# Filter to the primitives that are compatible with Koalas EntitySets\n", "koalas_compatible_df = primitives_df[primitives_df[\"koalas_compatible\"]]\n", "koalas_compatible_df.head()" ] },
{ "cell_type": "code", "execution_count": null, "id": "1e5baee7", "metadata": {}, "outputs": [], "source": [ "koalas_compatible_df.tail()" ] },
{ "cell_type": "markdown", "id": "8abc5442", "metadata": {}, "source": [ "### Primitive Limitations\n", "\n", "At this time, custom primitives created with ``featuretools.primitives.make_trans_primitive()`` or ``featuretools.primitives.make_agg_primitive()`` cannot be used for running deep feature synthesis on a Koalas ``EntitySet``. While it is possible to create custom primitives for use with a Koalas ``EntitySet`` by extending the proper primitive class, there are several potential problems in doing so, and those issues are beyond the scope of this guide.\n", "\n", "### DataFrame Limitations\n", "\n", "Featuretools stores the DataFrames that make up an EntitySet as Woodwork DataFrames, which include additional typing information about the columns that are in the DataFrame. When adding a DataFrame to an `EntitySet`, Woodwork will attempt to infer the logical types for any columns that do not have a logical type defined. This inference process can be quite expensive for Koalas DataFrames. In order to skip type inference and speed up the process of adding a Koalas DataFrame to an `EntitySet`, users can specify the logical type to use for each column in the DataFrame. A list of available logical types can be obtained by running ``featuretools.list_logical_types()``. To learn more about the limitations of a Koalas dataframe with Woodwork typing, see the [Woodwork guide on Koalas dataframes](https://woodwork.alteryx.com/en/stable/guides/using_woodwork_with_dask_and_koalas.html#Koalas-DataFrame-Example).\n", "\n", "By default, Woodwork checks that pandas dataframes have unique index values. Because performing this same check with Koalas could be computationally expensive, this check is not performed when adding a Koalas dataframe to an `EntitySet`. When using Koalas dataframes, users must ensure that the supplied index values are unique, as in the quick check shown below." ] },
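{ "cell_type": "markdown", "id": "9c8d7e6f", "metadata": {}, "source": [ "One lightweight way to confirm uniqueness is to compare the number of distinct index values with the row count. This is a minimal sketch, not the only approach; note that it triggers a Spark computation, so on large data it is best run once, up front." ] },
{ "cell_type": "code", "execution_count": null, "id": "6f7e8d9c", "metadata": {}, "outputs": [], "source": [ "# Verify that the index column contains no duplicate values\n", "# before relying on it as an EntitySet index\n", "assert koalas_df[\"id\"].nunique() == len(koalas_df)" ] },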
{ "cell_type": "markdown", "id": "8abc5442b", "metadata": {}, "source": [ "When using pandas DataFrames, the ordering of the underlying DataFrame rows is maintained by Featuretools. For a Koalas DataFrame, the ordering of the DataFrame rows is not guaranteed, and Featuretools does not attempt to maintain row order. If ordering is important, close attention must be paid to any output to avoid issues.\n", "\n", "### EntitySet Limitations\n", "\n", "When creating a Featuretools ``EntitySet`` that will be made of Koalas dataframes, all of the dataframes used to create the ``EntitySet`` must be of the same type, either all Koalas dataframes, all Dask dataframes, or all pandas dataframes. Featuretools does not support creating an ``EntitySet`` containing a mix of Koalas, Dask, and pandas dataframes.\n", "\n", "Additionally, ``EntitySet.add_interesting_values()`` cannot be used with Koalas EntitySets to find interesting values; however, it can be used to set a column's interesting values with the `values` parameter." ] },
{ "cell_type": "code", "execution_count": null, "id": "6e13e5ef", "metadata": {}, "outputs": [], "source": [ "# Manually set the interesting values for the \"values\" column\n", "values_dict = {'values': [12, 103]}\n", "es.add_interesting_values(dataframe_name='koalas_input_df', values=values_dict)\n", "\n", "es['koalas_input_df'].ww.columns['values'].metadata" ] },
{ "cell_type": "markdown", "id": "80f6b033", "metadata": {}, "source": [ "### DFS Limitations\n", "\n", "There are a few key limitations when generating a feature matrix from a Koalas ``EntitySet``.\n", "\n", "If a ``cutoff_time`` parameter is passed to ``featuretools.dfs()``, it should be a single cutoff time value or a pandas dataframe. The current implementation will still work if a Koalas dataframe is supplied for cutoff times, but a ``.to_pandas()`` call will be made on the dataframe to convert it into a pandas dataframe. This conversion will result in a warning, and the process could take a considerable amount of time to complete depending on the size of the supplied dataframe.\n", "\n", "Additionally, Featuretools does not currently support the use of the ``approximate`` or ``training_window`` parameters when working with Koalas EntitySets, though support is planned for future releases.\n", "\n", "Finally, if the output feature matrix contains a boolean column that includes ``NaN`` values, that column may have a different datatype than the same feature matrix generated from a pandas ``EntitySet``. If feature matrix column data types are critical, the feature matrix should be inspected to confirm that the columns have the proper types, and columns should be recast as necessary.\n", "\n", "### Other Limitations\n", "\n", "Currently, ``featuretools.encode_features()`` does not work with a Koalas dataframe as input. This will hopefully be resolved in a future release of Featuretools.\n", "\n", "The utility function ``featuretools.make_temporal_cutoffs()`` will not work properly with Koalas inputs for ``instance_ids`` or ``cutoffs``. However, as noted above, any ``cutoff_time`` dataframe supplied to ``dfs`` should be a pandas dataframe, and this can be generated by supplying pandas inputs to ``make_temporal_cutoffs()``.\n", "\n", "``featuretools.remove_low_information_features()`` cannot currently be used with a Koalas feature matrix.\n", "\n", "When manually defining a ``Feature``, the ``use_previous`` parameter cannot be used if the feature will be applied to calculate a feature matrix from a Koalas ``EntitySet``." ] } ], "metadata": { "celltoolbar": "Raw Cell Format", "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.8.2" } }, "nbformat": 4, "nbformat_minor": 5 }