This model was trained as part of a series of experiments comparing the performance of pure DPO, SFT, and ORPO, all run with Unsloth and Hugging Face TRL. Each model in the series has a complete description of its hyperparameters along with wandb reports.
**Note: This model is completely broken. Do not use it.**
## Benchmarks

| Benchmark  | Score |
|------------|-------|
| Average    | 59.52 |
| ARC        | 59.47 |
| HellaSwag  | 82.42 |
| MMLU       | 62.21 |
| TruthfulQA | 40.01 |
| Winogrande | 78.30 |
| GSM8K      | 34.72 |
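These appear to be Open LLM Leaderboard-style scores; the Average row is simply the arithmetic mean of the six benchmarks, which is easy to verify:

```python
# Check that "Average" is the arithmetic mean of the six benchmark scores.
scores = {"ARC": 59.47, "HellaSwag": 82.42, "MMLU": 62.21,
          "TruthfulQA": 40.01, "Winogrande": 78.30, "GSM8K": 34.72}
print(round(sum(scores.values()) / len(scores), 2))  # 59.52
```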
## Training Details

- Duration: ~10-12 hours on a single Kaggle T4 with Unsloth
- Base model: https://huggingface.co./unsloth/mistral-7b-v0.2-bnb-4bit
- Dataset: https://huggingface.co./datasets/argilla/dpo-mix-7k
- LoRA rank: 8
- LoRA alpha: 16
- Learning rate: 5e-6
- DPO beta: 0.1
- Batch size: 8
- Epochs: 1
- Learning rate scheduler: linear
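The card does not include the training script, but the hyperparameters above map directly onto an Unsloth + TRL DPO run. The sketch below is a hypothetical reconstruction under stated assumptions: `max_seq_length`, the per-device/accumulation split of the batch size of 8, the LoRA target modules, and the flattening of dpo-mix-7k's chat-format pairs are all guesses, and it targets the older TRL API in which `beta` is passed to `DPOTrainer` directly.

```python
from unsloth import FastLanguageModel
from datasets import load_dataset
from transformers import TrainingArguments
from trl import DPOTrainer  # older TRL API, where beta is a direct argument

max_seq_length = 2048  # assumption; not stated on the card

# Load the 4-bit base model from the card.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/mistral-7b-v0.2-bnb-4bit",
    max_seq_length=max_seq_length,
    load_in_4bit=True,
)

# Attach a LoRA adapter with the rank/alpha from the card.
model = FastLanguageModel.get_peft_model(
    model,
    r=8,
    lora_alpha=16,
    lora_dropout=0,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],  # assumed
)

def to_pairs(example):
    # Assumes single-turn examples; dpo-mix-7k stores chosen/rejected as
    # chat-message lists, so pull the user turn out as the prompt and
    # format it per the card's prompt template (BOS written by hand).
    user_message = example["chosen"][0]["content"]
    return {
        "prompt": f"You are a helpful assistant.<s>[INST] {user_message} [/INST]",
        # No "</s>" here: this TRL version appends EOS to responses itself.
        "chosen": example["chosen"][-1]["content"],
        "rejected": example["rejected"][-1]["content"],
    }

dataset = load_dataset("argilla/dpo-mix-7k", split="train")
dataset = dataset.map(to_pairs, remove_columns=dataset.column_names)

trainer = DPOTrainer(
    model=model,
    ref_model=None,  # PEFT path: reference logits via the disabled adapter
    beta=0.1,
    train_dataset=dataset,
    tokenizer=tokenizer,
    max_length=max_seq_length,
    max_prompt_length=max_seq_length // 2,
    args=TrainingArguments(
        per_device_train_batch_size=2,  # assumption: 2 x 4 accumulation = 8
        gradient_accumulation_steps=4,
        learning_rate=5e-6,
        lr_scheduler_type="linear",
        num_train_epochs=1,
        fp16=True,  # T4 has no bf16 support
        output_dir="outputs",
    ),
)
trainer.train()
```

Passing `ref_model=None` with a PEFT adapter makes TRL compute the reference log-probabilities by temporarily disabling the adapter, which avoids holding a second 7B model in T4 memory.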
**Prompt format:** `You are a helpful assistant.<s>[INST] PROMPT [/INST]RESPONSE</s>`

The start token `<s>` must be added manually; it is not inserted automatically.
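For inference, the practical consequence of that note is that automatic special-token insertion should be disabled when tokenizing, since the BOS token is already spelled out in the prompt string. A minimal sketch (the helper function and example prompt are illustrative, not from the card):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("G-reen/EXPERIMENT-DPO-m7b2-1-merged")

def build_prompt(user_message: str) -> str:
    # BOS (<s>) is written into the string by hand, per the note above.
    return f"You are a helpful assistant.<s>[INST] {user_message} [/INST]"

text = build_prompt("What is DPO?")
# Disable automatic special-token insertion so <s> is not added twice.
inputs = tokenizer(text, add_special_tokens=False, return_tensors="pt")
```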