| Model | AGIEval | GPT4All | TruthfulQA | Bigbench | Average |
|---|---|---|---|---|---|
| Mistral-7B-Instruct-v0.2 | 38.94 | 71.67 | 66.88 | 42.22 | 54.93 |
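The overall Average appears to be the unweighted mean of the four suite scores; a minimal sanity check in Python (the list below simply restates the row above):

```python
# Assumption: the summary Average is the plain mean of the four suite
# averages reported in the detail tables below.
suite_scores = [38.94, 71.67, 66.88, 42.22]  # AGIEval, GPT4All, TruthfulQA, Bigbench
print(round(sum(suite_scores) / len(suite_scores), 2))  # 54.93
```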
**AGIEval**
| Task | Version | Metric | Value |  | Stderr |
|---|---|---|---|---|---|
| agieval_aqua_rat | 0 | acc | 23.23 | ± | 2.65 |
| | | acc_norm | 22.83 | ± | 2.64 |
| agieval_logiqa_en | 0 | acc | 36.25 | ± | 1.89 |
| | | acc_norm | 37.17 | ± | 1.90 |
| agieval_lsat_ar | 0 | acc | 21.30 | ± | 2.71 |
| | | acc_norm | 20.43 | ± | 2.66 |
| agieval_lsat_lr | 0 | acc | 38.04 | ± | 2.15 |
| | | acc_norm | 38.04 | ± | 2.15 |
| agieval_lsat_rc | 0 | acc | 53.53 | ± | 3.05 |
| | | acc_norm | 50.19 | ± | 3.05 |
| agieval_sat_en | 0 | acc | 68.93 | ± | 3.23 |
| | | acc_norm | 67.96 | ± | 3.26 |
| agieval_sat_en_without_passage | 0 | acc | 42.23 | ± | 3.45 |
| | | acc_norm | 40.78 | ± | 3.43 |
| agieval_sat_math | 0 | acc | 36.36 | ± | 3.25 |
| | | acc_norm | 34.09 | ± | 3.20 |
Average: 38.94%
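The 38.94% figure appears to be the mean of the eight acc_norm rows above; a quick check (restating the table values):

```python
# Assumption: the AGIEval average uses acc_norm per subtask; averaging
# the plain acc values instead gives 39.98, which does not match.
from statistics import mean

acc_norm = [22.83, 37.17, 20.43, 38.04, 50.19, 67.96, 40.78, 34.09]
print(round(mean(acc_norm), 2))  # 38.94
```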
**GPT4All**
| Task | Version | Metric | Value |  | Stderr |
|---|---|---|---|---|---|
| arc_challenge | 0 | acc | 54.01 | ± | 1.46 |
| | | acc_norm | 55.97 | ± | 1.45 |
| arc_easy | 0 | acc | 81.27 | ± | 0.80 |
| | | acc_norm | 76.60 | ± | 0.87 |
| boolq | 1 | acc | 85.38 | ± | 0.62 |
| hellaswag | 0 | acc | 66.00 | ± | 0.47 |
| | | acc_norm | 83.69 | ± | 0.37 |
| openbookqa | 0 | acc | 35.60 | ± | 2.14 |
| | | acc_norm | 45.40 | ± | 2.23 |
| piqa | 0 | acc | 80.03 | ± | 0.93 |
| | | acc_norm | 80.69 | ± | 0.92 |
| winogrande | 0 | acc | 73.95 | ± | 1.23 |
Average: 71.67%
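Here the average appears to take acc_norm where it is reported and plain acc for the two tasks that report only acc:

```python
# Assumption: acc_norm where available (arc_challenge, arc_easy,
# hellaswag, openbookqa, piqa), plain acc for boolq and winogrande.
from statistics import mean

scores = [55.97, 76.60, 83.69, 45.40, 80.69, 85.38, 73.95]
print(round(mean(scores), 2))  # 71.67
```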
**TruthfulQA**
| Task | Version | Metric | Value |  | Stderr |
|---|---|---|---|---|---|
| truthfulqa_mc | 1 | mc1 | 52.51 | ± | 1.75 |
| | | mc2 | 66.88 | ± | 1.52 |
Average: 66.88%
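The 66.88% carried into the summary table appears to be the mc2 score alone; mc1 is reported above but does not enter the average.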
**Bigbench**
| Task | Version | Metric | Value |  | Stderr |
|---|---|---|---|---|---|
| bigbench_causal_judgement | 0 | multiple_choice_grade | 54.74 | ± | 3.62 |
| bigbench_date_understanding | 0 | multiple_choice_grade | 66.12 | ± | 2.47 |
| bigbench_disambiguation_qa | 0 | multiple_choice_grade | 39.53 | ± | 3.05 |
| bigbench_geometric_shapes | 0 | multiple_choice_grade | 21.45 | ± | 2.17 |
| | | exact_str_match | 9.47 | ± | 1.55 |
| bigbench_logical_deduction_five_objects | 0 | multiple_choice_grade | 29.40 | ± | 2.04 |
| bigbench_logical_deduction_seven_objects | 0 | multiple_choice_grade | 20.43 | ± | 1.52 |
| bigbench_logical_deduction_three_objects | 0 | multiple_choice_grade | 44.67 | ± | 2.88 |
| bigbench_movie_recommendation | 0 | multiple_choice_grade | 34.60 | ± | 2.13 |
| bigbench_navigate | 0 | multiple_choice_grade | 42.20 | ± | 1.56 |
| bigbench_reasoning_about_colored_objects | 0 | multiple_choice_grade | 60.80 | ± | 1.09 |
| bigbench_ruin_names | 0 | multiple_choice_grade | 55.36 | ± | 2.35 |
| bigbench_salient_translation_error_detection | 0 | multiple_choice_grade | 34.87 | ± | 1.51 |
| bigbench_snarks | 0 | multiple_choice_grade | 68.51 | ± | 3.46 |
| bigbench_sports_understanding | 0 | multiple_choice_grade | 65.62 | ± | 1.51 |
| bigbench_temporal_sequences | 0 | multiple_choice_grade | 36.90 | ± | 1.53 |
| bigbench_tracking_shuffled_objects_five_objects | 0 | multiple_choice_grade | 22.40 | ± | 1.18 |
| bigbench_tracking_shuffled_objects_seven_objects | 0 | multiple_choice_grade | 17.71 | ± | 0.91 |
| bigbench_tracking_shuffled_objects_three_objects | 0 | multiple_choice_grade | 44.67 | ± | 2.88 |
Average: 42.22%
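The 42.22% figure appears to be the mean of the 18 multiple_choice_grade rows; the exact_str_match row under bigbench_geometric_shapes is excluded (including it would give 40.50, which does not match):

```python
# Assumption: only the multiple_choice_grade rows enter the Bigbench
# average; the single exact_str_match row is left out.
from statistics import mean

mcg = [54.74, 66.12, 39.53, 21.45, 29.40, 20.43, 44.67, 34.60, 42.20,
       60.80, 55.36, 34.87, 68.51, 65.62, 36.90, 22.40, 17.71, 44.67]
print(round(mean(mcg), 2))  # 42.22
```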
Average score: 54.93%
Elapsed time: 09:24:57
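For reference, the Task/Version/Metric/Stderr layout above matches the output of EleutherAI's lm-evaluation-harness. A minimal reproduction sketch, assuming a harness version that exposes these task names (task availability and the exact simple_evaluate signature vary across versions, so this is an illustration rather than the exact command behind the numbers above):

```python
# Hypothetical sketch using EleutherAI's lm-evaluation-harness
# (pip install lm-eval); shown for the GPT4All group only.
from lm_eval import evaluator

results = evaluator.simple_evaluate(
    model="hf",  # Hugging Face causal-LM backend
    model_args="pretrained=mistralai/Mistral-7B-Instruct-v0.2",
    tasks=["arc_challenge", "arc_easy", "boolq", "hellaswag",
           "openbookqa", "piqa", "winogrande"],
    num_fewshot=0,
)
print(results["results"])
```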