How do I configure a conditional parameter spec for the code below, so that when `penalty` is "l2" the solver is "sag", and when `penalty` is "l1" the solver is "saga"?
from google.cloud import aiplatform
from google.cloud.aiplatform import hyperparameter_tuning as hpt

worker_pool_specs = [
    {
        "machine_spec": {
            "machine_type": "n1-standard-4",
            "accelerator_type": "NVIDIA_TESLA_K80",
            "accelerator_count": 1,
        },
        "replica_count": 1,
        "container_spec": {
            "image_uri": container_image_uri,
            "command": [],
            "args": [],
        },
    }
]

custom_job = aiplatform.CustomJob(
    display_name='my_job',
    worker_pool_specs=worker_pool_specs,
    labels={'my_key': 'my_value'},
)

hp_job = aiplatform.HyperparameterTuningJob(
    display_name='hp-test',
    custom_job=custom_job,
    metric_spec={
        'loss': 'minimize',
    },
    parameter_spec={
        'C': hpt.DoubleParameterSpec(min=0.001, max=0.1, scale='log'),
        'max_iter': hpt.IntegerParameterSpec(min=4, max=128, scale='linear'),
        'penalty': hpt.CategoricalParameterSpec(values=['l1', 'l2']),
        'solver': hpt.CategoricalParameterSpec(values=['sag', 'saga']),
    },
    max_trial_count=128,
    parallel_trial_count=8,
    labels={'my_key': 'my_value'},
)

hp_job.run()
print(hp_job.trials)
I looked through the documentation and also tried asking GPT for help, but I couldn't figure it out. Can you help?
There are a few things in your code worth troubleshooting first. You should initialize the project and location using the following format, as documented here:
def create_hyperparameter_tuning_job_sample(
    project: str,
    location: str,
    staging_bucket: str,
    display_name: str,
    container_uri: str,
):
    aiplatform.init(project=project, location=location, staging_bucket=staging_bucket)
Also check that `container_image_uri` in your code is set to a valid Docker image. For a deeper investigation, you can open a report in the public issue tracker describing your problem and vote it up [+1], and the engineering team will take a look.
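As for the conditional search space itself: recent versions of the google-cloud-aiplatform SDK let a parameter spec carry a `conditional_parameter_spec` dict whose child specs declare `parent_values`, so a child parameter is only sampled when the parent takes one of those values. Below is a sketch under that assumption; the `solver_l1`/`solver_l2` key names are my own choice (a dict cannot hold two entries named `solver`), and you should verify the exact argument names against the signature in your installed SDK version.

```python
from google.cloud.aiplatform import hyperparameter_tuning as hpt

# Each conditional child is only active when the parent parameter
# ('penalty') takes one of the values listed in `parent_values`.
parameter_spec = {
    'C': hpt.DoubleParameterSpec(min=0.001, max=0.1, scale='log'),
    'max_iter': hpt.IntegerParameterSpec(min=4, max=128, scale='linear'),
    'penalty': hpt.CategoricalParameterSpec(
        values=['l1', 'l2'],
        conditional_parameter_spec={
            # penalty == 'l1'  ->  solver is restricted to 'saga'
            'solver_l1': hpt.CategoricalParameterSpec(
                values=['saga'], parent_values=['l1'],
            ),
            # penalty == 'l2'  ->  solver is restricted to 'sag'
            'solver_l2': hpt.CategoricalParameterSpec(
                values=['sag'], parent_values=['l2'],
            ),
        },
    ),
}
```

You would then pass this `parameter_spec` to `HyperparameterTuningJob` in place of the flat dict and drop the unconditional `'solver'` entry; your training code has to read whichever solver argument is actually passed for a given trial.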