x86_64 and CUDA workflow migration steps:
Make sure steps 1 and 6 are ready to merge (passing presubmit) in advance. Steps 1 to 6 can be done in under 30 minutes.
As we are almost ready to switch our current x86_64 and CUDA CI benchmarking from Buildkite to GitHub CI, here are the changes that will happen:
Changes
Changes on the PR benchmark system
The labels `buildkite:benchmark-x86_64`, `buildkite:benchmark-cuda`, and `buildkite:benchmark-riscv` will stop working. The new way to trigger benchmark CI is by adding a line `benchmarks: ...` at the bottom of your PR description. The supported values are `all`, `x86_64`, `cuda`, and `comp-stats`.
`buildkite:benchmark-android` is still required to trigger the Android benchmarks until we migrate it. To collect compilation statistics, add `benchmarks: comp-stats`
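For example, a PR description could end like this (an illustrative sketch; the description text and the comma-separated value list are made up here to show the shape):

```
Tile the convolution ops along the batch dimension.

benchmarks: x86_64,cuda
```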
Changes on the dashboard
Changes on the benchmark suites
Changes on running benchmarks locally
For x86_64 and CUDA benchmarks, you can also use the steps below to run the new benchmark suites:
First follow https://github.com/iree-org/iree/tree/main/benchmarks to install the import tools `iree-import-tflite` and `iree-import-tf`. Then build the new benchmark suites and tools, generate the benchmark config, and run the benchmarks. Note that `${CPU_UARCH}` currently only supports `CascadeLake`.
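The three local steps (build, generate config, run) might look roughly like the sketch below. Every target, script, and flag name here is an assumption rather than a verified command; consult the benchmarks doc linked above for the authoritative instructions:

```shell
# HEDGED SKETCH: target/script/flag names below are assumptions, not verified
# against the IREE repo; see the linked benchmarks doc for the real commands.

# 1. Build the new benchmark suites and tools (target name assumed):
cmake --build "${IREE_BUILD_DIR}" --target iree-e2e-test-artifacts

# 2. Generate the benchmark config (script path and flags assumed):
python build_tools/benchmarks/export_benchmark_config.py \
  --output="${IREE_BUILD_DIR}/benchmark_config.json"

# 3. Run the benchmarks; ${CPU_UARCH} currently only supports CascadeLake:
python build_tools/benchmarks/run_benchmarks_on_linux.py \
  --e2e_test_artifacts_dir="${IREE_BUILD_DIR}/e2e_test_artifacts" \
  --execution_benchmark_config="${IREE_BUILD_DIR}/benchmark_config.json" \
  --output="benchmark_results.json"
```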
Changes on the performance numbers
For the existing pull requests
Rebase your PRs after we finish the migration to pick up the changes. The benchmarks might not work without rebasing, since we are disabling some pipelines globally.
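The rebase itself is the usual flow; a hedged sketch, assuming your iree-org/iree remote is named `upstream` and your PR branch is currently checked out (adjust the names to your setup):

```shell
# Pick up the post-migration changes from upstream main, then update the PR.
git fetch upstream
git rebase upstream/main
git push --force-with-lease
```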
How to fetch the benchmarks from CI and reproduce locally
As the new benchmark pipelines use artificial IDs in many places, it is hard for users to navigate and find the CI-built artifacts they want. We are improving this for the new framework (#12215).
For now, here are the steps from "finding a regression on https://perf.iree.dev or PR benchmark summary" to "fetch the imported MLIR and benchmark artifact".
1. Identify the benchmark ID
In the URL of the https://perf.iree.dev series page, you can find the benchmark ID (referred to as `${BENCHMARK_ID}` below) right after `/serie?IREE?`. For example, https://perf.iree.dev/serie?IREE?e379badd0273b6e66220653fdcb6384c4ef400863e488e2878bd93061a74e8bc shows the benchmark ID `e379badd0273b6e66220653fdcb6384c4ef400863e488e2878bd93061a74e8bc`
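The extraction can be done in plain bash, since the ID is simply everything after the last `?` (a small sketch using the example URL from the text):

```shell
# Extract the benchmark ID from a perf.iree.dev series URL.
URL="https://perf.iree.dev/serie?IREE?e379badd0273b6e66220653fdcb6384c4ef400863e488e2878bd93061a74e8bc"
BENCHMARK_ID="${URL##*\?}"          # drop everything up to the last '?'
BENCHMARK_ID="${BENCHMARK_ID%%-*}"  # for compilation benchmarks, drop the -<metric ID> suffix
echo "${BENCHMARK_ID}"
```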
For compilation benchmarks, the format will be `/serie?IREE?<benchmark ID>-<metric ID>`. Only use the `<benchmark ID>` in the following steps.
2. Find the corresponding CI workflow run
On the commit of your benchmark run, you can find the list of the workflow jobs by clicking the green check mark. Find the job `build_e2e_test_artifacts` and click `Details`.
On the detail page, expand the step `Uploading e2e test artifacts`; you will see a bunch of lines listing uploaded GCS paths. Copy the `gs://iree-github-actions-...-artifacts/.../.../` prefix (e.g. `gs://iree-github-actions-postsubmit-artifacts/4181832549/1/`) for the following steps (referred to as `${ARTIFACTS_GCS_DIR}`
below). If you only need to fetch some artifacts and already know their paths (e.g. `iree_.../module.vmfb`), you can use `gcloud` to download the file.
3. Fetch benchmark artifacts and explain
You can get a temporary helper tool `benchmark_user_tool.py` to fetch and explain the benchmark.
With the `${ARTIFACTS_GCS_DIR}` and `${BENCHMARK_ID}` you obtained in the previous steps, run the tool. It will fetch the related artifacts locally (by default to `./iree-github-actions-xxx-artifacts-xxx-x`) and show some benchmark information. Note that currently some information is retrieved from the local repo, so it's better to run the tool on the commit you want to investigate for correctness.
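If you prefer to fetch the artifacts manually rather than through the helper tool, a hedged `gcloud` sketch (the example value comes from the steps above; the `gcloud storage` subcommand assumes a reasonably recent Cloud SDK):

```shell
# Example value from the steps above; substitute your own run's artifacts dir.
ARTIFACTS_GCS_DIR="gs://iree-github-actions-postsubmit-artifacts/4181832549/1/"

# Download everything under the artifacts dir (includes all imported models; can be large):
gcloud storage cp -r "${ARTIFACTS_GCS_DIR}" ./artifacts/

# Or fetch a single artifact whose path you already know (path placeholder from the text):
# gcloud storage cp "${ARTIFACTS_GCS_DIR}iree_.../module.vmfb" ./
```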
Download all imported models
You can download all imported models with the `gcloud` tool.
Migration progress