pypi/lm-eval

Risk Profile
Maturity: Mature
Health: Healthy
Legal: Healthy
Security: Healthy
Package
Registry: pypi
Name: lm-eval
Version: 0.4.9
Last Release Date: 28 days ago
License: MIT
Dependencies
accelerate >=0.26.0
evaluate >=0.4.0
datasets >=2.16.0
jsonlines
numexpr
peft >=0.2.0
pybind11 >=2.6.2
pytablewriter
rouge-score >=0.0.4
sacrebleu >=1.5.0
scikit-learn >=0.24.1
sqlitedict
torch >=1.8
tqdm-multiprocess
transformers >=4.1
zstandard
dill
word2number
more_itertools
Optional extras:
  acpbench: lark >=1.1.9, tarski[clingo] ==0.8.2, pddl ==0.4.2, kstar-planner ==1.4.2
  api: requests, aiohttp, tenacity, tqdm, tiktoken
  audiolm-qwen: librosa, soundfile
  deepsparse: deepsparse-nightly[llm] >=1.8.0.20240404
  dev: pytest, pytest-cov, pytest-xdist, pre-commit, mypy, unitxt ==1.22.0, requests, aiohttp, tenacity, tqdm, tiktoken, sentencepiece
  gptq: auto-gptq[triton] >=0.6.0
  gptqmodel: gptqmodel >=1.0.9
  hf-transfer: hf_transfer
  ibm-watsonx-ai: ibm_watsonx_ai >=1.1.22, python-dotenv
  ifeval: langdetect, immutabledict, nltk >=3.9.1
  ipex: optimum
  japanese-leaderboard: emoji ==2.14.0, neologdn ==0.5.3, fugashi[unidic-lite], rouge_score >=0.1.2
  longbench: jieba, fuzzywuzzy, rouge
  mamba: mamba_ssm, causal-conv1d ==1.0.2, torch
  math: sympy >=1.12, antlr4-python3-runtime ==4.11, math_verify[antlr4_11_0]
  multilingual: nagisa >=0.2.7, jieba >=0.42.1, pycountry
  neuronx: optimum[neuronx]
  optimum: optimum[openvino]
  promptsource: promptsource >=0.2.3
  ruler: nltk, wonderwords, scipy
  sae-lens: sae_lens
  sentencepiece: sentencepiece >=0.1.98
  sparseml: sparseml-nightly[llm] >=1.8.0.20240404
  sparsify: sparsify
  testing: pytest, pytest-cov, pytest-xdist
  vllm: vllm >=0.4.2
  wandb: wandb >=0.16.3, pandas, numpy
  zeno: pandas, zeno-client
  all: lm_eval[acpbench], lm_eval[api], lm_eval[audiolm_qwen], lm_eval[deepsparse], lm_eval[dev], lm_eval[gptq], lm_eval[gptqmodel], lm_eval[hf_transfer], lm_eval[ibm_watsonx_ai], lm_eval[ifeval], lm_eval[ipex], lm_eval[japanese_leaderboard], lm_eval[longbench], lm_eval[mamba], lm_eval[math], lm_eval[multilingual], lm_eval[neuronx], lm_eval[optimum], lm_eval[promptsource], lm_eval[ruler], lm_eval[sae_lens], lm_eval[sentencepiece], lm_eval[sparseml], lm_eval[sparsify], lm_eval[testing], lm_eval[vllm], lm_eval[wandb], lm_eval[zeno]
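Each extra above maps to pip's bracket syntax, e.g. pip install "lm-eval[math]==0.4.9". A minimal sketch of running the installed package programmatically follows, using the harness's documented simple_evaluate entry point; the model and task named below are illustrative choices, not values taken from this profile.

  # Assumes: pip install "lm-eval==0.4.9" (add extras as needed, e.g. [vllm]).
  import lm_eval

  # "hf" selects the built-in Hugging Face backend; the pretrained model
  # and task are hypothetical examples.
  results = lm_eval.simple_evaluate(
      model="hf",
      model_args="pretrained=EleutherAI/pythia-160m",
      tasks=["hellaswag"],
  )
  print(results["results"])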
Source
Location: EleutherAI/lm-evaluation-harness
Last Source Update: 3 days ago
Licenses: MIT License
Distribution Destinations: pypi/lm-eval