I have a run_command_list.txt with one command per line:
time python3 train.py --dataroot ./datasets/maps --name maps_pix2pix --model pix2pix --direction AtoB --checkpoints_dir maps_pix2pix_a_to_b_bs_1 --batch_size 1 > bs_1.log
time python3 train.py --dataroot ./datasets/maps --name maps_pix2pix --model pix2pix --direction AtoB --checkpoints_dir maps_pix2pix_a_to_b_bs_2 --batch_size 2 > bs_2.log
time python3 train.py --dataroot ./datasets/maps --name maps_pix2pix --model pix2pix --direction AtoB --checkpoints_dir maps_pix2pix_a_to_b_bs_4 --batch_size 4 > bs_4.log
...
I want to run no more than 2 jobs in parallel, and I want to set CUDA_VISIBLE_DEVICES=0 or CUDA_VISIBLE_DEVICES=1 depending on which GPU is currently free. How can I do this with parallel or xargs?
That is, something along the lines of: cat run_command_list.txt | xargs -n 1 -P 2
GNU parallel's {= =} construct evaluates Perl code, and slot() returns the job slot number (1 through the number of job slots). With -j2, slot()-1 therefore yields 0 or 1, pinning each of the two slots to its own GPU:

seq 1000 | parallel -j2 CUDA_VISIBLE_DEVICES='{=1 $_=slot()-1 =}' time python3 train.py --dataroot ./datasets/maps --name maps_pix2pix --model pix2pix --direction AtoB --checkpoints_dir maps_pix2pix_a_to_b_bs_{} --batch_size {} '>' bs_{}.log
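Since the question also asks about xargs: GNU xargs (findutils) has a --process-slot-var option that plays the same role as parallel's slot(). It sets the named environment variable to the worker slot index, which is 0 or 1 when running with -P 2, so the slot index can double as the GPU id. A minimal sketch, assuming run_command_list.txt contains one shell command per line:

```shell
# Run at most 2 commands at a time; each child gets
# CUDA_VISIBLE_DEVICES set to its worker slot index (0 or 1).
# -d '\n' makes each line one unit; -I{} substitutes the line
# into sh -c, so the per-line redirections (> bs_1.log) still work.
xargs --process-slot-var=CUDA_VISIBLE_DEVICES -P 2 -d '\n' -I{} sh -c '{}' \
    < run_command_list.txt
```

Note that the slot index only tells you which worker slot a job occupies, not which GPU is actually idle; it simply guarantees that two concurrent jobs never share a GPU.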