MLflow and KerasTuner integration


I am trying to integrate KerasTuner with MLflow. I want to log the loss at every epoch of every Keras Tuner trial.

My approach is:


import mlflow
import tensorflow as tf

class MlflowCallback(tf.keras.callbacks.Callback):

    # This method is called after each epoch.
    def on_epoch_end(self, epoch, logs=None):
        if not logs:
            return
        # Log the loss from Keras to MLflow.
        mlflow.log_metric("loss", logs["loss"], step=epoch)

from kerastuner.tuners import RandomSearch

with mlflow.start_run(run_name="myrun", nested=True) as run:

    tuner = RandomSearch(
        train_fn,
        objective='loss',
        max_trials=25,
    )
    tuner.search(train,
                 validation_data=validation,
                 validation_steps=validation_steps,
                 steps_per_epoch=steps_per_epoch,
                 epochs=5,
                 callbacks=[MlflowCallback()]
    )

However, the loss values are all reported (sequentially) within a single run. Is there a way to log each trial independently?

tensorflow keras callback mlflow keras-tuner
2 Answers
0 votes

The line

with mlflow.start_run(run_name="myrun", nested=True) as run:

causes every training run to be stored in the same MLflow run. Don't use it; MLflow will then automatically create a separate run for each training that tuner.search executes.

0 votes

The answer is quite simple: you need to subclass HyperModel from KerasTuner, like this:

import keras_tuner
import mlflow

# Create a HyperModel subclass
class SGNNHyperModel(keras_tuner.HyperModel):

    def build(self, hp):
        # Build your model; pick hyper-parameters from `hp` here
        model = SomeModel()
        return model

    def fit(self, hp, model, *args, **kwargs):
        # Open a dedicated MLflow run for this trial
        with mlflow.start_run():
            mlflow.log_params(hp.values)
            mlflow.tensorflow.autolog()
            return model.fit(*args, **kwargs)

and use this class like so:

tuner = keras_tuner.BayesianOptimization(
    SGNNHyperModel(),
    objective="val_loss",
    max_trials=20,
    # Do not resume the previous search in the same directory.
    overwrite=True,
    # Set a directory to store the intermediate results.
    directory="/tmp/tb",
)

Reference:
https://medium.com/@m.nusret.ozates/using-mlflow-with-keras-tuner-f6df5dd634bc
