I am trying to reproduce the multimodal transformer tutorial shown here in a Colab notebook. However, it is a relatively old script, and lightning.pytorch has changed significantly since it was written. I have adapted it to the new Lightning, and it runs when I remove the callbacks argument from the Trainer, but when I add the callbacks it throws the following error:
/usr/local/lib/python3.9/dist-packages/lightning/pytorch/utilities/model_helpers.py in is_overridden(method_name, instance, parent)
32 parent = pl.Callback
33 if parent is None:
---> 34 raise ValueError("Expected a parent")
35
36 from lightning_utilities.core.overrides import is_overridden as _is_overridden
ValueError: Expected a parent
Here is the fit method that invokes the trainer:
def fit(self):
    # print(self)
    self._set_seed(self.hparams.get("random_state", 42))
    # self.trainer = pl.Trainer()
    self.trainer = pl.Trainer(callbacks=self.trainer_params)
and the _get_trainer_params method that builds the callback list:
def _get_trainer_params(self):
    checkpoint_callback = pl.callbacks.ModelCheckpoint(
        dirpath=self.output_path,
        # filepath=self.output_path,
        monitor=self.hparams.get("checkpoint_monitor", "avg_val_loss"),
        mode=self.hparams.get("checkpoint_monitor_mode", "min"),
        verbose=self.hparams.get("verbose", True),
    )
    early_stop_callback = pl.callbacks.EarlyStopping(
        monitor=self.hparams.get("early_stop_monitor", "avg_val_loss"),
        min_delta=self.hparams.get("early_stop_min_delta", 0.001),
        patience=self.hparams.get("early_stop_patience", 3),
        verbose=self.hparams.get("verbose", True),
    )
    trainer_params = [
        checkpoint_callback,
        early_stop_callback,
        self.output_path,
        self.hparams.get("accumulate_grad_batches", 1),
        self.hparams.get("n_gpu", 1),
        self.hparams.get("max_epochs", 100),
        self.hparams.get("gradient_clip_value", 1),
    ]
Again, when I run the trainer without any callbacks, i.e. self.trainer = pl.Trainer(), the model runs fine.