
Op type not registered 'Addons>ParseTime' #274

@pritamdodeja

Description

Issue

The pipeline stored to disk does not construct the entire transformation graph unless the Lambda layer is manually invoked.

Steps to reproduce:

Clone the repo:

git clone https://github.com/pritamdodeja/tft_tasks
cd tft_tasks
git checkout 732b0

Run the entire lifecycle of supported tasks to create the various artifacts:

python tft_tasks.py --task clean_directory --task write_raw_tfrecords --task view_original_sample_data --task transform_tfrecords --task view_transformed_sample_data --task train_non_embedding_model --task train_embedding_model

Comment out lines 434 to 437 in tft_tasks.py containing the following:

434     layers.Lambda(
435         fn_seconds_since_1970,
436         name="seconds_since_1970")(
437         transformed_inputs['pickup_datetime'])
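
For reference, fn_seconds_since_1970 is roughly along these lines: it calls tfa.text.parse_time, which is the source of the 'Addons>ParseTime' custom op named in the error below. A simplified sketch of such a wrapper (the time format here is a placeholder, not necessarily what the actual data uses):

import tensorflow as tf
import tensorflow_addons as tfa

def fn_seconds_since_1970(ts_in):
    # Simplified sketch: parse the timestamp string with tfa.text.parse_time
    # (implemented by the custom op 'Addons>ParseTime') and return seconds
    # since the Unix epoch. The time format below is a placeholder.
    seconds = tfa.text.parse_time(
        time_string=ts_in,
        time_format="%Y-%m-%dT%H:%M:%E*S%Ez",
        output_unit="SECOND",
    )
    return tf.cast(seconds, tf.float32)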

Run the following:

python tft_tasks.py --task view_transformed_sample_data

Get the following error:

FileNotFoundError: Op type not registered 'Addons>ParseTime' in binary running on fedora. Make sure the Op and Kernel are registered in the binary running in this process. Note that if you are loading a saved graph which used ops from tf.contrib, accessing (e.g.) `tf.contrib.resampler` should be done before importing the graph, as contrib ops are lazily registered when the module is first accessed.
You may be trying to load on a different device from the computational device. Consider setting the `experimental_io_device` option in `tf.saved_model.LoadOptions` to the io_device such as '/job:localhost'.

If you un-comment those lines and re-run the same command, it runs fine. My understanding of tft was that if you write the preprocessing_fn following the various rules, it will capture the entire set of transformations for you. Why isn't the Lambda layer captured, and what is the right thing to do in a situation like this?
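
For what it's worth, the note in the error message about lazily registered contrib ops suggests a workaround along the same lines for Addons: make sure the 'Addons>ParseTime' op is registered in the process before the saved graph is loaded, for example by exercising tfa.text.parse_time once. An untested sketch of that idea (the saved-model path is a placeholder):

import tensorflow as tf
import tensorflow_addons as tfa

# Untested sketch: calling tfa.text.parse_time once forces the Addons custom-op
# library to load, registering 'Addons>ParseTime' before the saved transform
# graph is deserialized. The example string/format is taken from the tfa docs.
tfa.text.parse_time(
    time_string="2019-05-17T23:56:09.05Z",
    time_format="%Y-%m-%dT%H:%M:%E*S%Ez",
    output_unit="SECOND",
)

loaded = tf.saved_model.load("path/to/transform_fn")  # placeholder path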

Thank you!
