Benchmarking resnet50 on tensorflow with cpu as backend and iree with cpu as backend #8870
HemaSowjanyaMamidi asked this question in Q&A (unanswered)
Replies: 2 comments 1 reply
-
Please try with these flags: https://github.com/powderluv/transformer-benchmarks/blob/e6abc1304956a059d530ce8eea29a01d491d73e8/benchmark.py#L295
-
How are you measuring this? What are the units?
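This question about measurement and units is worth making concrete: single-shot timings are dominated by one-time setup and JIT/compile cost, so a fair number needs warm-up iterations and an explicit unit. A minimal sketch in pure Python (`run_inference` is a placeholder for either the TensorFlow or the IREE call, not code from the notebook):

```python
import time

def benchmark(run_inference, warmup=5, iters=50):
    """Time a zero-argument inference callable; return mean latency in milliseconds."""
    for _ in range(warmup):
        # Warm-up runs: exclude one-time graph building / compilation / cache fills.
        run_inference()
    start = time.perf_counter()
    for _ in range(iters):
        run_inference()
    elapsed = time.perf_counter() - start
    return elapsed / iters * 1000.0  # mean per-call latency, in ms

# Usage with a stand-in workload (replace with the real model call):
mean_ms = benchmark(lambda: sum(range(10_000)))
print(f"mean latency: {mean_ms:.3f} ms")
```

Using `time.perf_counter()` rather than `time.time()` matters here: it is a monotonic, high-resolution clock intended for interval measurement.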
-
Hi All,
@powderluv, @benvanik, @stellaraccident
I am new to the IREE compiler, so I compared inference speeds between IREE and TensorFlow, both running on the CPU. I observed that TensorFlow on the CPU runs faster than IREE. Can anyone explain why inference through IREE is slower?
Any resources explaining why IREE is running slower here would be really helpful.
Can the code be optimized to make it run faster? If so, can you please guide me?
Code: https://github.com/google/iree/blob/main/colab/resnet.ipynb
I used the same code from the IREE Github.
I am sharing the table here.
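One way to make such a comparison table unambiguous is to record a list of per-run latencies for each backend and report robust statistics with explicit units. A sketch in pure Python; the numbers below are placeholders, and real values would come from timing the TensorFlow and IREE calls:

```python
import statistics

def summarize(name, latencies_ms):
    """Summarize one backend's per-run latencies (ms) as median and p90."""
    ordered = sorted(latencies_ms)
    return {
        "backend": name,
        "median_ms": statistics.median(ordered),
        "p90_ms": ordered[int(0.9 * (len(ordered) - 1))],  # nearest-rank p90
    }

# Placeholder measurements in milliseconds (not real results):
tf_ms = [52.1, 50.8, 51.5, 53.0, 50.9]
iree_ms = [88.4, 87.9, 90.2, 89.1, 88.0]

tf_row = summarize("tensorflow-cpu", tf_ms)
iree_row = summarize("iree-cpu", iree_ms)
ratio = iree_row["median_ms"] / tf_row["median_ms"]
print(tf_row)
print(iree_row)
print(f"latency ratio (IREE / TF, by median): {ratio:.2f}x")
```

Median and p90 are less sensitive than the mean to occasional slow runs caused by OS scheduling noise, which makes backend-to-backend comparisons more trustworthy.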
Thanks,
Hema