# :book: folktexts
This package is the basis for our NeurIPS’24 paper titled “Evaluating language models as risk scores”.
Folktexts is a suite of Q&A datasets with natural outcome uncertainty, aimed at evaluating LLMs’ calibration on unrealizable tasks.
The `folktexts` Python package enables computing and evaluating classification risk scores for tabular prediction tasks using LLMs.
Several benchmark tasks are provided based on data from the American Community Survey. Namely, each tabular prediction task from the popular folktables package is made available as a natural-language Q&A task.
Parsed and ready-to-use versions of each folktexts dataset can be found on Hugging Face.
Package documentation can be found here.
Table of contents:
- [Getting started](#getting-started)
- [Ready-to-use datasets](#ready-to-use-datasets)
- [Example usage](#example-usage)
- [Benchmark features and options](#benchmark-features-and-options)
- [Evaluating feature importance](#evaluating-feature-importance)
- [FAQ](#faq)
- [Citation](#citation)
- [License and terms of use](#license-and-terms-of-use)
## Getting started
### Installing
Install package from PyPI:

```
pip install folktexts
```
### Basic setup
Go through the following steps to run the benchmark tasks. Alternatively, if you only want ready-to-use datasets, see the [Ready-to-use datasets](#ready-to-use-datasets) section.
Create a conda environment:

```
conda create -n folktexts python=3.11
conda activate folktexts
```

Install the folktexts package:

```
pip install folktexts
```

Create models, data, and results folders:

```
mkdir results
mkdir models
mkdir data
```

Download a transformers model and tokenizer:

```
download_models --model 'google/gemma-2b' --save-dir models
```

Run the benchmark on a given task:

```
run_acs_benchmark --results-dir results --data-dir data --task 'ACSIncome' --model models/google--gemma-2b
```
Run `run_acs_benchmark --help` to get a list of all available benchmark flags.
## Ready-to-use datasets
Ready-to-use Q&A datasets generated from the 2018 American Community Survey are available on Hugging Face via the `datasets` library:
```python
import datasets

acs_task_qa = datasets.load_dataset(
    path="acruz/folktexts",
    name="ACSIncome",   # Choose which task you want to load
    split="test",       # Choose split according to your intended use case
)
```
## Example usage
The following code snippet loads a pre-trained model, collects and parses Q&A data for the income-prediction task, and computes risk scores on the test split:
```python
# Load transformers model
from folktexts.llm_utils import load_model_tokenizer
model, tokenizer = load_model_tokenizer("gpt2")   # using tiny model as an example

from folktexts.acs import ACSDataset
acs_task_name = "ACSIncome"   # Name of the benchmark ACS task to use

# Create an object that classifies data using an LLM
from folktexts import TransformersLLMClassifier
clf = TransformersLLMClassifier(
    model=model,
    tokenizer=tokenizer,
    task=acs_task_name,
)
# NOTE: You can also use a web-hosted model like GPT4 using the `WebAPILLMClassifier` class

# Use a dataset or feed in your own data
dataset = ACSDataset.make_from_task(acs_task_name)   # use `.subsample(0.01)` to get faster approximate results

# You can compute risk score predictions using an sklearn-style interface
X_test, y_test = dataset.get_test()
test_scores = clf.predict_proba(X_test)
```
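Since `predict_proba` returns real-valued risk scores rather than hard labels, you can evaluate them with standard score-based metrics. Here is a small sketch using sklearn metrics; it assumes `y_test` holds binary labels and hedges on whether the scores come back as a 1-D array or a 2-D array of class probabilities:

```python
import numpy as np
from sklearn.metrics import brier_score_loss, roc_auc_score

# Normalize the scores to a 1-D array of positive-class probabilities,
# whether `predict_proba` returned a 1-D or a 2-D (sklearn-style) array.
scores = np.asarray(test_scores)
if scores.ndim == 2:
    scores = scores[:, -1]

print("ROC AUC:     ", roc_auc_score(y_test, scores))
print("Brier score: ", brier_score_loss(y_test, scores))
```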
If you only care about the overall benchmark results and not individual predictions, you can simply run the following code instead of using `.predict_proba()` directly:
```python
from folktexts.benchmark import Benchmark, BenchmarkConfig

bench = Benchmark.make_benchmark(
    task=acs_task_name, dataset=dataset,    # These vars are defined in the snippet above
    model=model, tokenizer=tokenizer,
    numeric_risk_prompting=True,            # See the full list of configs below in the README
)
bench_results = bench.run(results_root_dir="results")
```
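`bench.run` saves result files under the given `results_root_dir`; the returned object can also be inspected directly. A trivial sketch, without assuming its exact structure:

```python
import pprint

# Pretty-print whatever the benchmark run returned (e.g., a mapping of
# metric names to values); works for nested structures as well.
pprint.pprint(bench_results)
```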
The following snippet shows how to fit the binarization threshold on a few training samples (note that this is not fine-tuning), and how to obtain discretized predictions using `.predict()`:
```python
# Optionally, you can fit the threshold based on a few samples...
clf.fit(*dataset[0:100])   # (`dataset[...]` will access training data)

# ...in order to get more accurate binary predictions with `.predict`
test_preds = clf.predict(X_test)
```
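The discretized predictions can then be scored with the usual classification metrics. A short sketch using sklearn, assuming binary labels in `y_test`:

```python
from sklearn.metrics import accuracy_score, confusion_matrix

# Compare thresholded predictions against the test labels
print("Accuracy:", accuracy_score(y_test, test_preds))
print("Confusion matrix:\n", confusion_matrix(y_test, test_preds))
```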
## Benchmark features and options
Here’s a summary list of the most important benchmark options/flags, used in conjunction with the `run_acs_benchmark` command-line script or with the `Benchmark` class.
| Option | Description | Examples |
|---|---|---|
| `--model` | Name of the model on huggingface transformers, or local path to a folder with a pretrained model and tokenizer. Can also use web-hosted models together with `--use-web-api-model`. | `--model 'google/gemma-2b'`, `--model models/google--gemma-2b` |
| `--task` | Name of the ACS task to run the benchmark on. | `--task 'ACSIncome'` |
| `--results-dir` | Path to directory under which benchmark results will be saved. | `--results-dir results` |
| `--data-dir` | Root folder to find datasets in (or download ACS data to). | `--data-dir data` |
| `--numeric-risk-prompting` | Whether to use verbalized numeric risk prompting, i.e., directly query the model for a probability estimate. By default, uses standard multiple-choice Q&A and extracts risk scores from internal token probabilities. | Boolean flag (off by default) |
| `--use-web-api-model` | Whether the given `--model` is hosted on a web API instead of a local model. | Boolean flag (off by default) |
| `--subsampling` | Which fraction of the dataset to use for the benchmark. By default, uses the whole test set. | `--subsampling 0.1` |
| `--fit-threshold` | Whether to use the given number of samples to fit the binarization threshold. By default, uses a fixed $t=0.5$ threshold instead of fitting on data. | `--fit-threshold 100` |
| `--batch-size` | The number of samples to process in each inference batch. Choose according to your available VRAM. | `--batch-size 10` |
Full list of options:
```
usage: run_acs_benchmark [-h] --model MODEL --results-dir RESULTS_DIR --data-dir DATA_DIR [--task TASK] [--few-shot FEW_SHOT] [--batch-size BATCH_SIZE] [--context-size CONTEXT_SIZE] [--fit-threshold FIT_THRESHOLD] [--subsampling SUBSAMPLING] [--seed SEED] [--use-web-api-model] [--dont-correct-order-bias] [--numeric-risk-prompting] [--reuse-few-shot-examples] [--use-feature-subset USE_FEATURE_SUBSET]
                         [--use-population-filter USE_POPULATION_FILTER] [--logger-level {DEBUG,INFO,WARNING,ERROR,CRITICAL}]

Benchmark risk scores produced by a language model on ACS data.

options:
  -h, --help            show this help message and exit
  --model MODEL         [str] Model name or path to model saved on disk
  --results-dir RESULTS_DIR
                        [str] Directory under which this experiment's results will be saved
  --data-dir DATA_DIR   [str] Root folder to find datasets on
  --task TASK           [str] Name of the ACS task to run the experiment on
  --few-shot FEW_SHOT   [int] Use few-shot prompting with the given number of shots
  --batch-size BATCH_SIZE
                        [int] The batch size to use for inference
  --context-size CONTEXT_SIZE
                        [int] The maximum context size when prompting the LLM
  --fit-threshold FIT_THRESHOLD
                        [int] Whether to fit the prediction threshold, and on how many samples
  --subsampling SUBSAMPLING
                        [float] Which fraction of the dataset to use (if omitted will use all data)
  --seed SEED           [int] Random seed -- to set for reproducibility
  --use-web-api-model   [bool] Whether use a model hosted on a web API (instead of a local model)
  --dont-correct-order-bias
                        [bool] Whether to avoid correcting ordering bias, by default will correct it
  --numeric-risk-prompting
                        [bool] Whether to prompt for numeric risk-estimates instead of multiple-choice Q&A
  --reuse-few-shot-examples
                        [bool] Whether to reuse the same samples for few-shot prompting (or sample new ones every time)
  --use-feature-subset USE_FEATURE_SUBSET
                        [str] Optional subset of features to use for prediction, comma separated
  --use-population-filter USE_POPULATION_FILTER
                        [str] Optional population filter for this benchmark; must follow the format 'column_name=value' to filter the dataset by a specific value.
  --logger-level {DEBUG,INFO,WARNING,ERROR,CRITICAL}
                        [str] The logging level to use for the experiment
```
## Evaluating feature importance
By evaluating LLMs on tabular classification tasks, we can use standard feature importance methods to assess which features the model uses to compute risk scores.
You can do so yourself by calling `folktexts.cli.eval_feature_importance` (add `--help` for a full list of options).
Here’s an example for the Llama3-70B-Instruct model on the ACSIncome task (warning: takes 24h on an Nvidia H100):
```
python -m folktexts.cli.eval_feature_importance --model 'meta-llama/Meta-Llama-3-70B-Instruct' --task ACSIncome --subsampling 0.1
```
This script uses sklearn’s `permutation_importance` to assess which features contribute the most to the ROC AUC metric (other metrics can be assessed using the `--scorer [scorer]` parameter).
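For intuition, permutation importance measures how much a chosen metric degrades when a single feature's values are randomly shuffled, which breaks that feature's association with the outcome. The `eval_feature_importance` script handles this end-to-end; the snippet below is only an illustrative sketch of the idea, reusing `clf`, `X_test`, and `y_test` from the example above and assuming `X_test` is a pandas DataFrame (note that each shuffled column triggers a full LLM inference pass, so subsample first):

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def positive_scores(estimator, X):
    """Return 1-D positive-class scores regardless of predict_proba's output shape."""
    scores = np.asarray(estimator.predict_proba(X))
    return scores[:, -1] if scores.ndim == 2 else scores

rng = np.random.default_rng(42)
baseline_auc = roc_auc_score(y_test, positive_scores(clf, X_test))

importances = {}
for column in X_test.columns:
    X_shuffled = X_test.copy()
    # Shuffle one column to break its association with the outcome
    X_shuffled[column] = rng.permutation(X_shuffled[column].values)
    shuffled_auc = roc_auc_score(y_test, positive_scores(clf, X_shuffled))
    importances[column] = baseline_auc - shuffled_auc   # larger drop => more important

# Features sorted from most to least important
print(sorted(importances.items(), key=lambda item: -item[1]))
```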
## FAQ
**Q:** Can I use `folktexts` with a different dataset?

**A:** Yes! Folktexts provides the whole ML pipeline needed to produce risk scores using LLMs, together with a few example ACS datasets. You can easily apply these same utilities to a different dataset following the example jupyter notebook.

**Q:** How do I create a custom prediction task based on American Community Survey data?

**A:** Simply create a new `TaskMetadata` object with the parameters you want. Follow the example jupyter notebook for more details.

**Q:** Can I use `folktexts` with closed-source models?

**A:** Yes! We provide compatibility with local LLMs via 🤗 transformers and compatibility with web-hosted LLMs via litellm. For example, you can use `--model='gpt-4o' --use-web-api-model` to use GPT-4o when calling the `run_acs_benchmark` script. Here’s a complete list of compatible OpenAI models. Note that some models are not compatible as they don’t enable access to log-probabilities. Using models through a web API requires installing extra optional dependencies with `pip install 'folktexts[apis]'`.

**Q:** Can I use `folktexts` to fine-tune LLMs on survey prediction tasks?

**A:** The package does not feature specific fine-tuning functionality, but you can use the data and Q&A prompts generated by `folktexts` to fine-tune an LLM for a specific prediction task.
## Citation
```
@inproceedings{cruz2024evaluating,
  title={Evaluating language models as risk scores},
  author={Andr\'{e} F. Cruz and Moritz Hardt and Celestine Mendler-D\"{u}nner},
  booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
  year={2024},
  url={https://openreview.net/forum?id=qrZxL3Bto9}
}
```
## License and terms of use
Code licensed under the MIT license.
The American Community Survey (ACS) Public Use Microdata Sample (PUMS) is governed by the U.S. Census Bureau terms of service.