Hi Alex, thank you so much for the code release.
I am trying to run inference at a larger image size, e.g. 128x128, while the model was trained on 64x64 images, but the model cannot be restored.
I have modified `scripts/generate.py` with one more args flag, `infer_read_pics`, which changes the placeholder shape to 128x128 for inference; the data fed into the placeholder is also resized to 128x128 (a rough sketch of this change is shown below). I have also commented out some code inside `video_prediction/models/savp_model.py` to make sure the model architecture stays the same as the model trained on 64x64 images.
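Roughly, the `generate.py` change looks like this. It is a simplified sketch with illustrative shapes and helper names (not the exact code I used), assuming the repo's TF1-style graph building:

```python
# Simplified sketch: with --infer_read_pics set, build the feed placeholder at
# 128x128 instead of the training resolution, and resize input frames to match.
import numpy as np
import tensorflow as tf
from skimage.transform import resize

infer_height, infer_width = 128, 128   # inference resolution
sequence_length, channels = 12, 3      # illustrative values

images_ph = tf.placeholder(
    tf.float32,
    shape=[None, sequence_length, infer_height, infer_width, channels],
    name='images')

def resize_sequences(frames):
    """frames: [batch, time, 64, 64, 3] floats in [0, 1] -> [batch, time, 128, 128, 3]."""
    batch, time = frames.shape[:2]
    out = np.empty((batch, time, infer_height, infer_width, channels), dtype=np.float32)
    for b in range(batch):
        for t in range(time):
            out[b, t] = resize(frames[b, t], (infer_height, infer_width))
    return out
```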
However, the model cannot be restored, failing with the error below:
According to the error, I have traced it to this line in `video_prediction/ops.py`:

```python
kernel_shape = [input_shape[1], units]
```
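If I am reading `ops.py` correctly, the problem is that the dense kernel's first dimension is tied to the flattened input size, which changes with the image resolution, so the variable no longer matches the shape saved in the 64x64 checkpoint. A toy sketch of what I mean (the feature sizes here are made up):

```python
# Illustration only: the first dimension of the dense kernel comes from the
# flattened input, which grows with image resolution, so the variable shape
# no longer matches the 64x64 checkpoint and the saver refuses to restore it.
import tensorflow as tf

def dense_kernel_shape(inputs, units):
    input_shape = inputs.get_shape().as_list()
    return [input_shape[1], units]   # the line traced in video_prediction/ops.py

units = 100
feat_64 = tf.zeros([1, 8 * 8 * 64])     # e.g. flattened features from 64x64 frames
feat_128 = tf.zeros([1, 16 * 16 * 64])  # the same layer when fed 128x128 frames

print(dense_kernel_shape(feat_64, units))    # [4096, 100]  -> shape stored in the checkpoint
print(dense_kernel_shape(feat_128, units))   # [16384, 100] -> mismatch at restore time
```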
Do you have any suggestions on how to make this work for inference on arbitrary image sizes?
Thanks for reading my question!