| Model | AGIEval | GPT4All | TruthfulQA | BigBench |
|---|---|---|---|---|
| Llama-3.2-3B | 25.76 | Error: File does not exist | 39.22 | 34.61 |
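The per-task tables below follow the lm-evaluation-harness layout (Task / Version / Metric / Value / Stderr; `truthfulqa_mc` with mc1/mc2 matches the pre-0.4 harness). The exact command behind this card is not recorded here, so the following is only a hedged reproduction sketch: the model path, task list (AGIEval subset shown), and batch size are assumptions.

```python
# Assumed reproduction sketch using the pre-0.4 lm-evaluation-harness Python API.
# Not the verified command for this card; model path and batch size are guesses.
from lm_eval import evaluator

results = evaluator.simple_evaluate(
    model="hf-causal",
    model_args="pretrained=meta-llama/Llama-3.2-3B",
    tasks=[
        "agieval_aqua_rat", "agieval_logiqa_en", "agieval_lsat_ar",
        "agieval_lsat_lr", "agieval_lsat_rc", "agieval_sat_en",
        "agieval_sat_en_without_passage", "agieval_sat_math",
    ],
    num_fewshot=0,
    batch_size=8,
)
# Prints markdown tables in the same Task/Version/Metric/Value/Stderr format as below.
print(evaluator.make_table(results))
```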
| Task | Version | Metric | Value | | Stderr |
|---|---|---|---|---|---|
| agieval_aqua_rat | 0 | acc | 20.87 | ± | 2.55 |
| | | acc_norm | 23.23 | ± | 2.65 |
| agieval_logiqa_en | 0 | acc | 23.96 | ± | 1.67 |
| | | acc_norm | 32.57 | ± | 1.84 |
| agieval_lsat_ar | 0 | acc | 17.83 | ± | 2.53 |
| | | acc_norm | 17.39 | ± | 2.50 |
| agieval_lsat_lr | 0 | acc | 24.12 | ± | 1.90 |
| | | acc_norm | 26.27 | ± | 1.95 |
| agieval_lsat_rc | 0 | acc | 25.28 | ± | 2.65 |
| | | acc_norm | 22.68 | ± | 2.56 |
| agieval_sat_en | 0 | acc | 48.06 | ± | 3.49 |
| | | acc_norm | 36.89 | ± | 3.37 |
| agieval_sat_en_without_passage | 0 | acc | 32.52 | ± | 3.27 |
| | | acc_norm | 24.76 | ± | 3.01 |
| agieval_sat_math | 0 | acc | 23.64 | ± | 2.87 |
| | | acc_norm | 22.27 | ± | 2.81 |
AGIEval average: 25.76%

GPT4All average: not available (run failed with "Error: File does not exist")
| Task | Version | Metric | Value | | Stderr |
|---|---|---|---|---|---|
| truthfulqa_mc | 1 | mc1 | 24.97 | ± | 1.52 |
| | | mc2 | 39.22 | ± | 1.42 |
TruthfulQA average: 39.22%
| Task | Version | Metric | Value | | Stderr |
|---|---|---|---|---|---|
| bigbench_causal_judgement | 0 | multiple_choice_grade | 52.11 | ± | 3.63 |
| bigbench_date_understanding | 0 | multiple_choice_grade | 62.60 | ± | 2.52 |
| bigbench_disambiguation_qa | 0 | multiple_choice_grade | 32.95 | ± | 2.93 |
| bigbench_geometric_shapes | 0 | multiple_choice_grade | 10.03 | ± | 1.59 |
| | | exact_str_match | 0.00 | ± | 0.00 |
| bigbench_logical_deduction_five_objects | 0 | multiple_choice_grade | 24.80 | ± | 1.93 |
| bigbench_logical_deduction_seven_objects | 0 | multiple_choice_grade | 17.43 | ± | 1.43 |
| bigbench_logical_deduction_three_objects | 0 | multiple_choice_grade | 47.33 | ± | 2.89 |
| bigbench_movie_recommendation | 0 | multiple_choice_grade | 34.40 | ± | 2.13 |
| bigbench_navigate | 0 | multiple_choice_grade | 50.00 | ± | 1.58 |
| bigbench_reasoning_about_colored_objects | 0 | multiple_choice_grade | 47.25 | ± | 1.12 |
| bigbench_ruin_names | 0 | multiple_choice_grade | 22.99 | ± | 1.99 |
| bigbench_salient_translation_error_detection | 0 | multiple_choice_grade | 25.95 | ± | 1.39 |
| bigbench_snarks | 0 | multiple_choice_grade | 44.20 | ± | 3.70 |
| bigbench_sports_understanding | 0 | multiple_choice_grade | 49.39 | ± | 1.59 |
| bigbench_temporal_sequences | 0 | multiple_choice_grade | 18.00 | ± | 1.22 |
| bigbench_tracking_shuffled_objects_five_objects | 0 | multiple_choice_grade | 20.72 | ± | 1.15 |
| bigbench_tracking_shuffled_objects_seven_objects | 0 | multiple_choice_grade | 15.49 | ± | 0.87 |
| bigbench_tracking_shuffled_objects_three_objects | 0 | multiple_choice_grade | 47.33 | ± | 2.89 |
BigBench average: 34.61%

Overall average: not available (the GPT4All run failed, see above)
Elapsed time: 02:45:40
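The suite averages appear to be simple means over the per-task tables: AGIEval averages the acc_norm rows, TruthfulQA reports mc2, and BigBench averages the multiple_choice_grade rows. A minimal sanity check, with values copied from the tables above:

```python
# Hedged sanity check, assuming the suite averages are plain means over the
# tables above: AGIEval uses the acc_norm rows, TruthfulQA reports mc2, and
# BigBench uses the multiple_choice_grade rows (exact_str_match excluded).
agieval_acc_norm = [23.23, 32.57, 17.39, 26.27, 22.68, 36.89, 24.76, 22.27]
truthfulqa_mc2 = 39.22
bigbench_mcg = [
    52.11, 62.60, 32.95, 10.03, 24.80, 17.43, 47.33, 34.40, 50.00,
    47.25, 22.99, 25.95, 44.20, 49.39, 18.00, 20.72, 15.49, 47.33,
]

print(f"AGIEval:    {sum(agieval_acc_norm) / len(agieval_acc_norm):.2f}")  # ~25.76
print(f"TruthfulQA: {truthfulqa_mc2:.2f}")                                 # 39.22
print(f"BigBench:   {sum(bigbench_mcg) / len(bigbench_mcg):.2f}")          # ~34.61
```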