# Proposal: Integrate mlflow tracking capability into mlbox to track hyper-parameter tuning results #85
A key requirement is to provide backward compatibility. Additional details on the proposal:

- The signature for the …
- The new parameter …
- If the mlflow function is desired, then the call to …
- mlflow-related parameters: …

These are example outputs from a working prototype of integrating mlflow with mlbox:

- Example of the mlflow UI for the results of the hyper-parameter tuning
- Example of details captured for a single run
- Sample chart generated from the hyperparameter tuning
- CSV extract of the hyperparameter tuning results
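Since the prototype code itself is not shown above, here is a minimal sketch of what a backward-compatible entry point could look like. The `Optimiser` class and `optimise(space, df)` follow mlbox's public API, but the `mlflow_tracking` and `mlflow_experiment_name` parameters are illustrative names of my own, not the ones actually proposed:

```python
# Minimal sketch, not the actual prototype. `mlflow_tracking` and
# `mlflow_experiment_name` are assumed parameter names; tracking is off
# by default, so existing callers are unaffected (backward compatible).
class Optimiser:
    def __init__(self, scoring=None, n_folds=2,
                 mlflow_tracking=False,
                 mlflow_experiment_name="mlbox"):
        self.scoring = scoring
        self.n_folds = n_folds
        self.mlflow_tracking = mlflow_tracking
        self.mlflow_experiment_name = mlflow_experiment_name

    def optimise(self, space, df, max_evals=40):
        # Existing behaviour when mlflow_tracking is False; when True,
        # each hyperparameter evaluation would also be logged to mlflow.
        ...
```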
---

Hello @jimthompson5802,
---

@AxeldeRomblay Thank you for the feedback. I understand that you'd like to hide mlflow as much as possible from the user. This should be doable. One implication of this approach is that, instead of using the mlflow web UI (which requires some knowledge of how mlflow works) to view results of the …

I'll proceed along these lines. If you'd like to see an early preview of this work, it is in my fork of your repo, in the branch …

Re: Python requirements for mlflow. mlflow runs under Python 2.7 and Python 3.x. These are the package dependencies from mlflow's …
---

This is an initial attempt at hiding the mlflow integration within mlbox. This code fragment from the pytest module testing the mlflow integration shows the sequence of API calls:
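A minimal sketch of this sequence, assuming the hypothetical `mlflow_tracking` flag and `get_tuning_results` helper from above (illustrative names, not confirmed mlbox API); the mlbox search-space format and the `optimise(space, df)` call are real:

```python
from mlbox.optimisation import Optimiser

# Sketch of a pytest-style check: run the tuning with tracking enabled,
# then assert that results were captured. `mlflow_tracking` and
# `get_tuning_results` are assumed names, not confirmed mlbox API.
def test_mlflow_tracking(df):
    # df is the usual mlbox dict: {"train": <DataFrame>, "target": <Series>}
    space = {
        "est__strategy": {"search": "choice", "space": ["LightGBM", "RandomForest"]},
        "est__max_depth": {"search": "choice", "space": [3, 5, 7]},
    }

    opt = Optimiser(scoring="accuracy", n_folds=3, mlflow_tracking=True)
    opt.optimise(space, df)              # each evaluation is logged as a run

    results = opt.get_tuning_results()   # hypothetical: runs as a DataFrame
    assert len(results) > 0
```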
This is the Excel workbook created from the CSV data extracted from the pandas DataFrame of the mlflow-captured data from the …
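For reference, a runs-to-DataFrame-to-CSV extract of this kind can be produced with mlflow's standard query API (real mlflow API, though not necessarily what the prototype used; the experiment name is illustrative):

```python
import mlflow

# Pull every run of an experiment into a pandas DataFrame, then export
# it to CSV. mlflow.search_runs returns a DataFrame with one row per run
# and columns for each logged parameter and metric.
experiment = mlflow.get_experiment_by_name("mlbox-tuning")   # illustrative name
runs = mlflow.search_runs(experiment_ids=[experiment.experiment_id])
runs.to_csv("hyperparameter_tuning_results.csv", index=False)
```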
---

Here is a status update on the work. I understand a key requirement is to hide the mlflow API from someone using mlbox. While this is possible, I believe it will be useful to provide visibility into the mlflow concepts of experiments and runs. Briefly, an 'experiment' is a collection of related 'runs', where a 'run' represents a single instance of running an ML algorithm with specific settings of its hyperparameters. In this sense …

To support this design objective, I've modified the …

A new method was added to the …
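In mlflow terms, the mapping described above could look like the following sketch: one tuning session becomes an experiment and each hyperparameter evaluation becomes a run. The experiment name, the toy candidate list, and the stand-in score are illustrative assumptions; the mlflow calls themselves are standard tracking API:

```python
import mlflow

# One tuning session -> one mlflow experiment; one hyperparameter
# evaluation -> one mlflow run. The candidates and score below are
# stand-ins for mlbox's real search and cross-validation.
mlflow.set_experiment("mlbox_optimise_example")

candidates = [{"est__strategy": "LightGBM", "est__max_depth": d} for d in (3, 5, 7)]

for params in candidates:
    with mlflow.start_run():
        for name, value in params.items():
            mlflow.log_param(name, value)
        cv_score = 0.80 + 0.01 * params["est__max_depth"]  # stand-in CV score
        mlflow.log_metric("cv_score", cv_score)
```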
This attached zip file …
---

Integrate mlflow metric tracking capabilities with mlbox to track hyperparameter tuning results. Specifically, use `mlflow` to record results from `opt.optimise(space, df)`, such as these: …

By using mlflow to track these kinds of results, we are able to use the mlflow web tracking UI to review and report on those results. This capability is demonstrated in this mlflow talk.

The current proposal is to record each `### testing hyper-parameters...###` block as mlflow experiments and runs that record the model algorithm, hyperparameter settings, and resulting metric.

Does this sound like a reasonable addition to mlbox capabilities?