
grad_norm becomes NaN during DPO training #923

Open · rtz1998 opened this issue May 13, 2024 · 5 comments

@rtz1998 commented May 13, 2024

When training Qwen1.5-7B-Chat with DPO, grad_norm becomes NaN and the model stops updating.

  1. I tried switching dtype to fp32, but the same thing still happens.

[screenshot of training logs]
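If this reproduces, a first debugging step is to find which op produces the first non-finite value. A minimal, debug-only sketch (install_nan_hooks is a hypothetical helper name; both tools add significant overhead, so use them only on a short repro run):

import torch

def install_nan_hooks(model: torch.nn.Module):
    """Register forward hooks that raise at the first module whose output
    contains NaN/inf, naming the offending module."""
    def make_hook(name):
        def hook(module, inputs, output):
            if isinstance(output, torch.Tensor) and not torch.isfinite(output).all():
                raise RuntimeError(f"non-finite output in module {name}")
        return hook
    for name, module in model.named_modules():
        module.register_forward_hook(make_hook(name))

# Also trace backward-pass NaNs to the op that created them (debug-only, slow).
torch.autograd.set_detect_anomaly(True)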

@tastelikefeet (Collaborator)

The training diverged. What learning rate did you set?

@pengwork commented Jul 15, 2024

Mine diverged too: qwen-14b on 8x A100, full-parameter DPO fine-tuning, tested with a general-purpose dataset.

I have already tried adjusting the precision. Any other recommended fixes? Thanks, everyone.

Logs:

Train:  12%|█▎        | 172/1376 [30:54:33<188:42:15, 564.23s/it]
{'loss': 1.00900269, 'grad_norm': 7.53907044362896, 'learning_rate': 1.1904761904761904e-06, 'memory(GiB)': 70.41, 'train_speed(iter/s)': 0.001342, 'rewards/chosen': 0.0, 'rewards/rejected': 0.0, 'rewards/accuracies': 0.0, 'rewards/margins': 0.0, 'logps/rejected': -126.5, 'logps/chosen': -94.0, 'logps/ref_rejected': -126.5, 'logps/ref_chosen': -94.0, 'logits/rejected': -2.203125, 'logits/chosen': -2.28125, 'epoch': 0.005788712011577424, 'step': 1}
{'loss': 0.98021274, 'grad_norm': 5.976240092344901, 'learning_rate': 1.1904761904761905e-05, 'memory(GiB)': 70.41, 'train_speed(iter/s)': 0.001405, 'rewards/chosen': 0.01080322265625, 'rewards/rejected': -0.158203125, 'rewards/accuracies': 0.4930555522441864, 'rewards/margins': 0.1689453125, 'logps/rejected': -140.0, 'logps/chosen': -105.5, 'logps/ref_rejected': -138.0, 'logps/ref_chosen': -105.5, 'logits/rejected': -2.234375, 'logits/chosen': -2.296875, 'epoch': 0.05788712011577424, 'step': 10}
{'loss': 0.92940826, 'grad_norm': 6.5935847275999455, 'learning_rate': 2.380952380952381e-05, 'memory(GiB)': 70.41, 'train_speed(iter/s)': 0.001404, 'rewards/chosen': 0.5546875, 'rewards/rejected': 0.23046875, 'rewards/accuracies': 0.606249988079071, 'rewards/margins': 0.322265625, 'logps/rejected': -120.5, 'logps/chosen': -102.5, 'logps/ref_rejected': -123.0, 'logps/ref_chosen': -108.0, 'logits/rejected': -2.328125, 'logits/chosen': -2.34375, 'epoch': 0.11577424023154848, 'step': 20}
{'loss': 0.92114563, 'grad_norm': 5.420887947787812, 'learning_rate': 3.571428571428572e-05, 'memory(GiB)': 70.41, 'train_speed(iter/s)': 0.001412, 'rewards/chosen': 0.255859375, 'rewards/rejected': -0.220703125, 'rewards/accuracies': 0.6156250238418579, 'rewards/margins': 0.4765625, 'logps/rejected': -136.0, 'logps/chosen': -102.0, 'logps/ref_rejected': -133.0, 'logps/ref_chosen': -104.5, 'logits/rejected': -2.171875, 'logits/chosen': -2.21875, 'epoch': 0.1736613603473227, 'step': 30}
{'loss': 0.95710449, 'grad_norm': 5.779980665056525, 'learning_rate': 4.761904761904762e-05, 'memory(GiB)': 70.41, 'train_speed(iter/s)': 0.001402, 'rewards/chosen': 0.1767578125, 'rewards/rejected': -0.3515625, 'rewards/accuracies': 0.6156250238418579, 'rewards/margins': 0.52734375, 'logps/rejected': -117.5, 'logps/chosen': -109.0, 'logps/ref_rejected': -114.0, 'logps/ref_chosen': -111.0, 'logits/rejected': -1.9921875, 'logits/chosen': -2.015625, 'epoch': 0.23154848046309695, 'step': 40}
{'loss': 0.95733261, 'grad_norm': 5.125722329220833, 'learning_rate': 4.970014992503748e-05, 'memory(GiB)': 70.41, 'train_speed(iter/s)': 0.001395, 'rewards/chosen': -0.031982421875, 'rewards/rejected': -0.79296875, 'rewards/accuracies': 0.671875, 'rewards/margins': 0.76171875, 'logps/rejected': -131.0, 'logps/chosen': -100.5, 'logps/ref_rejected': -123.0, 'logps/ref_chosen': -100.0, 'logits/rejected': -2.109375, 'logits/chosen': -2.15625, 'epoch': 0.2894356005788712, 'step': 50}
{'loss': 0.96949158, 'grad_norm': 4.608084507852282, 'learning_rate': 4.932533733133434e-05, 'memory(GiB)': 70.41, 'train_speed(iter/s)': 0.001389, 'rewards/chosen': -0.12158203125, 'rewards/rejected': -0.8828125, 'rewards/accuracies': 0.6343749761581421, 'rewards/margins': 0.7578125, 'logps/rejected': -141.0, 'logps/chosen': -103.0, 'logps/ref_rejected': -132.0, 'logps/ref_chosen': -102.0, 'logits/rejected': -2.140625, 'logits/chosen': -2.203125, 'epoch': 0.3473227206946454, 'step': 60}
{'loss': 1.02169342, 'grad_norm': 3.904740588497704, 'learning_rate': 4.8950524737631185e-05, 'memory(GiB)': 70.41, 'train_speed(iter/s)': 0.001384, 'rewards/chosen': -0.3828125, 'rewards/rejected': -1.1171875, 'rewards/accuracies': 0.65625, 'rewards/margins': 0.73046875, 'logps/rejected': -135.0, 'logps/chosen': -99.5, 'logps/ref_rejected': -124.0, 'logps/ref_chosen': -95.5, 'logits/rejected': -2.0625, 'logits/chosen': -2.109375, 'epoch': 0.40520984081041966, 'step': 70}
{'loss': 0.99144592, 'grad_norm': 5.000990374137424, 'learning_rate': 4.857571214392804e-05, 'memory(GiB)': 70.41, 'train_speed(iter/s)': 0.001385, 'rewards/chosen': -0.2470703125, 'rewards/rejected': -0.9296875, 'rewards/accuracies': 0.625, 'rewards/margins': 0.68359375, 'logps/rejected': -127.0, 'logps/chosen': -106.5, 'logps/ref_rejected': -117.5, 'logps/ref_chosen': -104.0, 'logits/rejected': -2.171875, 'logits/chosen': -2.21875, 'epoch': 0.4630969609261939, 'step': 80}
{'loss': 1.02521057, 'grad_norm': 10.307696086068754, 'learning_rate': 4.820089955022489e-05, 'memory(GiB)': 70.41, 'train_speed(iter/s)': 0.001417, 'rewards/chosen': -0.11572265625, 'rewards/rejected': -1.078125, 'rewards/accuracies': 0.6781250238418579, 'rewards/margins': 0.96484375, 'logps/rejected': -135.0, 'logps/chosen': -100.5, 'logps/ref_rejected': -124.5, 'logps/ref_chosen': -99.0, 'logits/rejected': -2.390625, 'logits/chosen': -2.453125, 'epoch': 0.5209840810419681, 'step': 90}
{'loss': 1.05500183, 'grad_norm': 6.985147449500718, 'learning_rate': 4.782608695652174e-05, 'memory(GiB)': 70.41, 'train_speed(iter/s)': 0.001449, 'rewards/chosen': -0.2236328125, 'rewards/rejected': -1.03125, 'rewards/accuracies': 0.6156250238418579, 'rewards/margins': 0.8125, 'logps/rejected': -144.0, 'logps/chosen': -93.5, 'logps/ref_rejected': -134.0, 'logps/ref_chosen': -91.0, 'logits/rejected': -2.5, 'logits/chosen': -2.5625, 'epoch': 0.5788712011577424, 'step': 100}
{'loss': 124.85273437, 'grad_norm': nan, 'learning_rate': 4.7451274362818594e-05, 'memory(GiB)': 70.41, 'train_speed(iter/s)': 0.001478, 'rewards/chosen': nan, 'rewards/rejected': nan, 'rewards/accuracies': 0.4437499940395355, 'rewards/margins': nan, 'logps/rejected': nan, 'logps/chosen': nan, 'logps/ref_rejected': -109.5, 'logps/ref_chosen': -95.0, 'logits/rejected': nan, 'logits/chosen': nan, 'epoch': 0.6367583212735166, 'step': 110}
{'loss': 0.0, 'grad_norm': nan, 'learning_rate': 4.7076461769115446e-05, 'memory(GiB)': 70.41, 'train_speed(iter/s)': 0.0015, 'rewards/chosen': nan, 'rewards/rejected': nan, 'rewards/accuracies': 0.0, 'rewards/margins': nan, 'logps/rejected': nan, 'logps/chosen': nan, 'logps/ref_rejected': -120.0, 'logps/ref_chosen': -99.5, 'logits/rejected': nan, 'logits/chosen': nan, 'epoch': 0.6946454413892909, 'step': 120}
{'loss': 0.0, 'grad_norm': nan, 'learning_rate': 4.67016491754123e-05, 'memory(GiB)': 70.41, 'train_speed(iter/s)': 0.001517, 'rewards/chosen': nan, 'rewards/rejected': nan, 'rewards/accuracies': 0.0, 'rewards/margins': nan, 'logps/rejected': nan, 'logps/chosen': nan, 'logps/ref_rejected': -116.0, 'logps/ref_chosen': -102.5, 'logits/rejected': nan, 'logits/chosen': nan, 'epoch': 0.7525325615050651, 'step': 130}
{'loss': 0.0, 'grad_norm': nan, 'learning_rate': 4.632683658170915e-05, 'memory(GiB)': 70.41, 'train_speed(iter/s)': 0.00153, 'rewards/chosen': nan, 'rewards/rejected': nan, 'rewards/accuracies': 0.0, 'rewards/margins': nan, 'logps/rejected': nan, 'logps/chosen': nan, 'logps/ref_rejected': -121.5, 'logps/ref_chosen': -106.0, 'logits/rejected': nan, 'logits/chosen': nan, 'epoch': 0.8104196816208393, 'step': 140}
{'loss': 0.0, 'grad_norm': nan, 'learning_rate': 4.5952023988006e-05, 'memory(GiB)': 70.41, 'train_speed(iter/s)': 0.001537, 'rewards/chosen': nan, 'rewards/rejected': nan, 'rewards/accuracies': 0.0, 'rewards/margins': nan, 'logps/rejected': nan, 'logps/chosen': nan, 'logps/ref_rejected': -136.0, 'logps/ref_chosen': -108.0, 'logits/rejected': nan, 'logits/chosen': nan, 'epoch': 0.8683068017366136, 'step': 150}
{'loss': 0.0, 'grad_norm': nan, 'learning_rate': 4.557721139430285e-05, 'memory(GiB)': 70.41, 'train_speed(iter/s)': 0.001533, 'rewards/chosen': nan, 'rewards/rejected': nan, 'rewards/accuracies': 0.0, 'rewards/margins': nan, 'logps/rejected': nan, 'logps/chosen': nan, 'logps/ref_rejected': -136.0, 'logps/ref_chosen': -96.5, 'logits/rejected': nan, 'logits/chosen': nan, 'epoch': 0.9261939218523878, 'step': 160}
{'loss': 0.0, 'grad_norm': nan, 'learning_rate': 4.5202398800599706e-05, 'memory(GiB)': 70.41, 'train_speed(iter/s)': 0.001543, 'rewards/chosen': nan, 'rewards/rejected': nan, 'rewards/accuracies': 0.0, 'rewards/margins': nan, 'logps/rejected': nan, 'logps/chosen': nan, 'logps/ref_rejected': -131.0, 'logps/ref_chosen': -105.0, 'logits/rejected': nan, 'logits/chosen': nan, 'epoch': 0.984081041968162, 'step': 170}
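The run goes non-finite between the step-100 and step-110 log lines: loss spikes to 124.85, and everything downstream of the policy model (logps, logits, rewards) is NaN, while the reference-model logps stay finite. To stop right at the first bad step and keep the last good state for inspection, a small callback can watch the logged metrics. This is a minimal sketch, assuming the swift trainer accepts standard transformers callbacks (the class name is illustrative, and 'grad_norm' may not appear in logs on every version):

import math
from transformers import TrainerCallback

class NanGuardCallback(TrainerCallback):
    """Stop training as soon as a logged loss or grad_norm turns non-finite,
    preserving the offending global step for debugging."""

    def on_log(self, args, state, control, logs=None, **kwargs):
        for key in ("loss", "grad_norm"):
            value = (logs or {}).get(key)
            if isinstance(value, float) and not math.isfinite(value):
                print(f"{key}={value} at global step {state.global_step}; stopping.")
                control.should_training_stop = True
        return control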

Launch script:

python3 pytorch/llm/llm_rlhf.py \
    --rlhf_type dpo \
    --model_type  qwen-14b-chat \
    --model_id_or_path $MODEL  \
    --ref_model_type  qwen-14b-chat \
    --ref_model_id_or_path $MODEL  \
    --sft_type  full \
    --tuner_backend  swift \
    --ddp_backend nccl \
    --dtype  bf16  \
    --output_dir  $OUTPUT_DIR  \
    --dataset  hh-rlhf-cn:harmless_base_cn  \
    --num_train_epochs  8  \
    --max_length  2048  \
    --max_prompt_length  2048  \
    --check_dataset_strategy  none  \
    --lora_rank  8  \
    --lora_alpha  32  \
    --lora_dropout_p  0.05  \
    --lora_target_modules  ALL  \
    --gradient_checkpointing  true  \
    --batch_size  2  \
    --weight_decay  0.1  \
    --learning_rate  5e-5  \
    --gradient_accumulation_steps  16  \
    --max_grad_norm  1.0  \
    --lr_scheduler_type linear \
    --warmup_ratio  0.03  \
    --eval_steps  2000  \
    --save_steps  2000  \
    --save_total_limit  2  \
    --logging_steps  10 \
    --save_on_each_node false  \
    --deepspeed pytorch/llm/custom-zero3-A100.json \
    --report_to none
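One thing worth double-checking in the script above: --max_length and --max_prompt_length are both 2048, so a long prompt can consume the entire sequence budget and silently truncate the chosen/rejected completions, which is worth ruling out as a source of degenerate log-probs. A trivial guard (values mirror the script):

# DPO generally needs max_prompt_length strictly below max_length so that
# completion tokens survive truncation.
max_length = 2048
max_prompt_length = 2048
assert max_prompt_length < max_length, (
    "leave headroom for completion tokens; equal values can truncate "
    "responses to zero tokens"
)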

DeepSpeed config:

{
    "fp16": {
        "enabled": "auto",
        "loss_scale": 0,
        "loss_scale_window": 1000,
        "initial_scale_power": 16,
        "hysteresis": 2,
        "min_loss_scale": 1
    },

    "bf16": {
        "enabled": "auto"
    },

    "optimizer": {
        "type": "AdamW",
        "params": {
            "lr": "auto",
            "betas": "auto",
            "eps": "auto",
            "weight_decay": "auto"
        }
    },

    "zero_optimization": {
        "stage": 3,
        "offload_optimizer": {
            "device": "none",
            "pin_memory": true
        },
        "offload_param": {
            "device": "none",
            "pin_memory": true
        },
        "overlap_comm": true,
        "contiguous_gradients": true,
        "sub_group_size": 1e9,
        "reduce_bucket_size": "auto",
        "stage3_prefetch_bucket_size": "auto",
        "stage3_param_persistence_threshold": "auto",
        "stage3_max_live_parameters": 1e9,
        "stage3_max_reuse_distance": 1e9,
        "stage3_gather_16bit_weights_on_model_save": true
    },

    "gradient_accumulation_steps": "auto",
    "gradient_clipping": "auto",
    "steps_per_print": 2000,
    "train_batch_size": "auto",
    "train_micro_batch_size_per_gpu": "auto",
    "wall_clock_breakdown": false
}
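Both the fp16 and bf16 blocks above are set to "auto", so the active precision branch is resolved from the launch flags at runtime. To rule out a resolution mismatch as the NaN source, the flags can be pinned explicitly; a minimal sketch (the output filename is illustrative):

import json

# Pin the DeepSpeed precision branches instead of relying on "auto",
# matching --dtype bf16 in the launch script.
with open("pytorch/llm/custom-zero3-A100.json") as f:
    ds_cfg = json.load(f)

ds_cfg["fp16"]["enabled"] = False  # make sure the fp16 branch cannot activate
ds_cfg["bf16"]["enabled"] = True

with open("pytorch/llm/custom-zero3-A100-bf16.json", "w") as f:
    json.dump(ds_cfg, f, indent=4)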

@pengwork

Adjusting the lr still produces NaN. I changed

--learning_rate 5e-5

to

--learning_rate 5e-7 \

{'loss': 1.00900269, 'grad_norm': 7.53907044362896, 'learning_rate': 1.1904761904761903e-08, 'memory(GiB)': 70.41, 'train_speed(iter/s)': 0.001278, 'rewards/chosen': 0.0, 'rewards/rejected': 0.0, 'rewards/accuracies': 0.0, 'rewards/margins': 0.0, 'logps/rejected': -126.5, 'logps/chosen': -94.0, 'logps/ref_rejected': -126.5, 'logps/ref_chosen': -94.0, 'logits/rejected': -2.203125, 'logits/chosen': -2.28125, 'epoch': 0.005788712011577424, 'step': 1}
{'loss': 1.00108168, 'grad_norm': 7.545338994874301, 'learning_rate': 1.1904761904761903e-07, 'memory(GiB)': 70.41, 'train_speed(iter/s)': 0.001354, 'rewards/chosen': 0.0013885498046875, 'rewards/rejected': 0.00074005126953125, 'rewards/accuracies': 0.2395833283662796, 'rewards/margins': 0.00064849853515625, 'logps/rejected': -138.0, 'logps/chosen': -105.5, 'logps/ref_rejected': -138.0, 'logps/ref_chosen': -105.5, 'logits/rejected': -2.15625, 'logits/chosen': -2.21875, 'epoch': 0.05788712011577424, 'step': 10}
{'loss': 1.00213623, 'grad_norm': 7.108545247409468, 'learning_rate': 2.3809523809523806e-07, 'memory(GiB)': 70.41, 'train_speed(iter/s)': 0.00137, 'rewards/chosen': 0.00139617919921875, 'rewards/rejected': -0.0006103515625, 'rewards/accuracies': 0.3031249940395355, 'rewards/margins': 0.00201416015625, 'logps/rejected': -123.0, 'logps/chosen': -108.0, 'logps/ref_rejected': -123.0, 'logps/ref_chosen': -108.0, 'logits/rejected': -2.171875, 'logits/chosen': -2.1875, 'epoch': 0.11577424023154848, 'step': 20}
{'loss': 1.00210571, 'grad_norm': 8.304970351713742, 'learning_rate': 3.5714285714285716e-07, 'memory(GiB)': 70.41, 'train_speed(iter/s)': 0.001372, 'rewards/chosen': 0.0093994140625, 'rewards/rejected': 0.00090789794921875, 'rewards/accuracies': 0.3375000059604645, 'rewards/margins': 0.00848388671875, 'logps/rejected': -133.0, 'logps/chosen': -104.5, 'logps/ref_rejected': -133.0, 'logps/ref_chosen': -104.5, 'logits/rejected': -2.171875, 'logits/chosen': -2.21875, 'epoch': 0.1736613603473227, 'step': 30}
{'loss': 0.995401, 'grad_norm': 6.867666147083995, 'learning_rate': 4.761904761904761e-07, 'memory(GiB)': 70.41, 'train_speed(iter/s)': 0.001374, 'rewards/chosen': 0.0234375, 'rewards/rejected': 0.016357421875, 'rewards/accuracies': 0.42500001192092896, 'rewards/margins': 0.007080078125, 'logps/rejected': -113.5, 'logps/chosen': -110.5, 'logps/ref_rejected': -114.0, 'logps/ref_chosen': -111.0, 'logits/rejected': -2.1875, 'logits/chosen': -2.203125, 'epoch': 0.23154848046309695, 'step': 40}
{'loss': 0.97894287, 'grad_norm': 6.637425408844079, 'learning_rate': 4.970014992503748e-07, 'memory(GiB)': 70.41, 'train_speed(iter/s)': 0.001378, 'rewards/chosen': 0.0498046875, 'rewards/rejected': -0.00060272216796875, 'rewards/accuracies': 0.49687498807907104, 'rewards/margins': 0.050537109375, 'logps/rejected': -123.0, 'logps/chosen': -99.5, 'logps/ref_rejected': -123.0, 'logps/ref_chosen': -100.0, 'logits/rejected': -2.1875, 'logits/chosen': -2.234375, 'epoch': 0.2894356005788712, 'step': 50}
{'loss': 0.96344299, 'grad_norm': 6.994588656842213, 'learning_rate': 4.932533733133433e-07, 'memory(GiB)': 70.41, 'train_speed(iter/s)': 0.001379, 'rewards/chosen': 0.08251953125, 'rewards/rejected': -0.01080322265625, 'rewards/accuracies': 0.590624988079071, 'rewards/margins': 0.09326171875, 'logps/rejected': -132.0, 'logps/chosen': -101.0, 'logps/ref_rejected': -132.0, 'logps/ref_chosen': -102.0, 'logits/rejected': -2.171875, 'logits/chosen': -2.25, 'epoch': 0.3473227206946454, 'step': 60}
{'loss': 0.95614014, 'grad_norm': 6.577610388553457, 'learning_rate': 4.895052473763119e-07, 'memory(GiB)': 70.41, 'train_speed(iter/s)': 0.001379, 'rewards/chosen': 0.1376953125, 'rewards/rejected': 0.035888671875, 'rewards/accuracies': 0.5375000238418579, 'rewards/margins': 0.1015625, 'logps/rejected': -123.5, 'logps/chosen': -94.0, 'logps/ref_rejected': -124.0, 'logps/ref_chosen': -95.5, 'logits/rejected': -2.21875, 'logits/chosen': -2.265625, 'epoch': 0.40520984081041966, 'step': 70}
{'loss': 0.94986267, 'grad_norm': 6.366643716285433, 'learning_rate': 4.857571214392804e-07, 'memory(GiB)': 70.41, 'train_speed(iter/s)': 0.00138, 'rewards/chosen': 0.1904296875, 'rewards/rejected': 0.0751953125, 'rewards/accuracies': 0.5843750238418579, 'rewards/margins': 0.11572265625, 'logps/rejected': -117.0, 'logps/chosen': -102.0, 'logps/ref_rejected': -117.5, 'logps/ref_chosen': -104.0, 'logits/rejected': -2.25, 'logits/chosen': -2.28125, 'epoch': 0.4630969609261939, 'step': 80}
{'loss': 0.94568481, 'grad_norm': 6.969188850718788, 'learning_rate': 4.820089955022488e-07, 'memory(GiB)': 70.41, 'train_speed(iter/s)': 0.00138, 'rewards/chosen': 0.267578125, 'rewards/rejected': 0.10107421875, 'rewards/accuracies': 0.628125011920929, 'rewards/margins': 0.166015625, 'logps/rejected': -123.5, 'logps/chosen': -96.5, 'logps/ref_rejected': -124.5, 'logps/ref_chosen': -99.0, 'logits/rejected': -2.234375, 'logits/chosen': -2.296875, 'epoch': 0.5209840810419681, 'step': 90}
{'loss': 0.94615936, 'grad_norm': 6.46213116760357, 'learning_rate': 4.782608695652174e-07, 'memory(GiB)': 70.41, 'train_speed(iter/s)': 0.00138, 'rewards/chosen': 0.3359375, 'rewards/rejected': 0.158203125, 'rewards/accuracies': 0.6031249761581421, 'rewards/margins': 0.177734375, 'logps/rejected': -133.0, 'logps/chosen': -88.0, 'logps/ref_rejected': -134.0, 'logps/ref_chosen': -91.0, 'logits/rejected': -2.25, 'logits/chosen': -2.328125, 'epoch': 0.5788712011577424, 'step': 100}
{'loss': 108.31538086, 'grad_norm': nan, 'learning_rate': 4.7451274362818587e-07, 'memory(GiB)': 70.41, 'train_speed(iter/s)': 0.001381, 'rewards/chosen': nan, 'rewards/rejected': nan, 'rewards/accuracies': 0.4124999940395355, 'rewards/margins': nan, 'logps/rejected': nan, 'logps/chosen': nan, 'logps/ref_rejected': -109.5, 'logps/ref_chosen': -95.0, 'logits/rejected': nan, 'logits/chosen': nan, 'epoch': 0.6367583212735166, 'step': 110}
{'loss': 0.0, 'grad_norm': nan, 'learning_rate': 4.707646176911544e-07, 'memory(GiB)': 70.41, 'train_speed(iter/s)': 0.00138, 'rewards/chosen': nan, 'rewards/rejected': nan, 'rewards/accuracies': 0.0, 'rewards/margins': nan, 'logps/rejected': nan, 'logps/chosen': nan, 'logps/ref_rejected': -120.0, 'logps/ref_chosen': -99.5, 'logits/rejected': nan, 'logits/chosen': nan, 'epoch': 0.6946454413892909, 'step': 120}
{'loss': 0.0, 'grad_norm': nan, 'learning_rate': 4.6701649175412295e-07, 'memory(GiB)': 70.41, 'train_speed(iter/s)': 0.001379, 'rewards/chosen': nan, 'rewards/rejected': nan, 'rewards/accuracies': 0.0, 'rewards/margins': nan, 'logps/rejected': nan, 'logps/chosen': nan, 'logps/ref_rejected': -116.0, 'logps/ref_chosen': -102.5, 'logits/rejected': nan, 'logits/chosen': nan, 'epoch': 0.7525325615050651, 'step': 130}
{'loss': 0.0, 'grad_norm': nan, 'learning_rate': 4.6326836581709143e-07, 'memory(GiB)': 70.41, 'train_speed(iter/s)': 0.001377, 'rewards/chosen': nan, 'rewards/rejected': nan, 'rewards/accuracies': 0.0, 'rewards/margins': nan, 'logps/rejected': nan, 'logps/chosen': nan, 'logps/ref_rejected': -121.5, 'logps/ref_chosen': -106.0, 'logits/rejected': nan, 'logits/chosen': nan, 'epoch': 0.8104196816208393, 'step': 140}
{'loss': 0.0, 'grad_norm': nan, 'learning_rate': 4.595202398800599e-07, 'memory(GiB)': 70.41, 'train_speed(iter/s)': 0.001373, 'rewards/chosen': nan, 'rewards/rejected': nan, 'rewards/accuracies': 0.0, 'rewards/margins': nan, 'logps/rejected': nan, 'logps/chosen': nan, 'logps/ref_rejected': -136.0, 'logps/ref_chosen': -108.0, 'logits/rejected': nan, 'logits/chosen': nan, 'epoch': 0.8683068017366136, 'step': 150}
{'loss': 0.0, 'grad_norm': nan, 'learning_rate': 4.5577211394302846e-07, 'memory(GiB)': 70.41, 'train_speed(iter/s)': 0.001374, 'rewards/chosen': nan, 'rewards/rejected': nan, 'rewards/accuracies': 0.0, 'rewards/margins': nan, 'logps/rejected': nan, 'logps/chosen': nan, 'logps/ref_rejected': -136.0, 'logps/ref_chosen': -96.5, 'logits/rejected': nan, 'logits/chosen': nan, 'epoch': 0.9261939218523878, 'step': 160}
{'loss': 0.0, 'grad_norm': nan, 'learning_rate': 4.52023988005997e-07, 'memory(GiB)': 70.41, 'train_speed(iter/s)': 0.001375, 'rewards/chosen': nan, 'rewards/rejected': nan, 'rewards/accuracies': 0.0, 'rewards/margins': nan, 'logps/rejected': nan, 'logps/chosen': nan, 'logps/ref_rejected': -131.0, 'logps/ref_chosen': -105.0, 'logits/rejected': nan, 'logits/chosen': nan, 'epoch': 0.984081041968162, 'step': 170}
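Note that both runs go non-finite at exactly the same logging step (110), with identical logps/ref_rejected (-109.5) and logps/ref_chosen (-95.0), even though the learning rates differ by two orders of magnitude. That points at specific training samples rather than the optimizer. A framework-agnostic sketch for flagging preference pairs that commonly cause trouble (the thresholds are illustrative):

def find_degenerate_pairs(pairs):
    """Flag (chosen, rejected) response pairs that often behave badly in DPO:
    empty responses, identical pairs, or extreme length imbalance."""
    bad = []
    for i, (chosen, rejected) in enumerate(pairs):
        if not chosen.strip() or not rejected.strip():
            bad.append((i, "empty response"))
        elif chosen == rejected:
            bad.append((i, "chosen == rejected"))
        elif max(len(chosen), len(rejected)) > 50 * min(len(chosen), len(rejected)):
            bad.append((i, "extreme length imbalance"))
    return bad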

@tastelikefeet (Collaborator)

Does it help if you remove deepspeed?

@Jintao-Huang (Collaborator)

Has this been resolved? Please pull the latest main branch.
