Contributing to the Leaderboard¶
We welcome community contributions to the Leaderboard. To add your method:
1. Run your extraction method on the LitXAlloy benchmark dataset by calling compare_experiments and compute_multi_level_metrics. An example is in the usage script.
2. Open a pull request that adds your results as a new row to the leaderboard table in docs/index.rst. See this example PR for reference.
When updating docs/index.rst, please include:
- A link to the code that generated the results
- The file containing the output experiment objects from your run
- Any publication you’d like linked
- A link to the PR that submitted your result
- The version of LitXAlloy it was evaluated on (this version is bumped when the dataset or evaluation methods change, so scores across different versions may not be directly comparable). You can get it with:

  from litxbench.litxalloy import __version__
  print(__version__)  # e.g. "0.1.0"
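Putting the checklist together, a new leaderboard row might look roughly like the sketch below. The column layout, method name, score, and all URLs here are hypothetical placeholders; match the columns of the existing table in docs/index.rst rather than this example.

```rst
   * - MyExtractor
     - 0.87
     - `code <https://github.com/your-org/your-repo>`_
     - `results <https://github.com/your-org/your-repo/blob/main/results/experiments.json>`_
     - `paper <https://example.org/your-publication>`_
     - `PR <https://github.com/Radical-AI/litxbench/pulls>`_
     - 0.1.0
```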
Uncertainties are not required – if your method was only run once, simply report the score without a confidence interval.
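If you did run your method multiple times and want to report an uncertainty, one simple convention (a generic sketch using only the standard library, not a LitXBench helper) is the mean score with a normal-approximation 95% confidence interval:

```python
import statistics

def mean_and_ci(scores, z=1.96):
    """Return (mean, half-width of an approximate 95% confidence interval)."""
    mean = statistics.mean(scores)
    # standard error of the mean from the sample standard deviation
    sem = statistics.stdev(scores) / len(scores) ** 0.5
    return mean, z * sem

# hypothetical per-run scores from five repeated evaluations
scores = [0.81, 0.84, 0.79, 0.83, 0.82]
m, hw = mean_and_ci(scores)
print(f"{m:.3f} +/- {hw:.3f}")  # prints "0.818 +/- 0.017"
```

The normal approximation is rough for a handful of runs; a t-based interval or simply reporting min/max across seeds are equally acceptable, as long as the table states what the interval means.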
Contributing to LitXBench¶
Contributions to LitXBench are welcome! Please open an issue or pull request on the GitHub repository.
Development Setup¶
git clone https://github.com/Radical-AI/litxbench.git
cd litxbench
uv sync --extra dev
If you want to replicate results from the paper, add --group paper (i.e. run uv sync --extra dev --group paper) to install the required dependencies.