
LLM Models

Open-Source / Open-Weight Models

| model   | date       | ctx            | notes                                |
|---------|------------|----------------|--------------------------------------|
| Qwen2   | 2024-06-07 | 32K, 64K, 128K | 0.5B, 1.5B, 7B, 57B, 72B; by Alibaba |
| LLAMA3  | 2024-04-18 | 8K             | by Meta                              |
| phi3    | 2024       |                | by Microsoft                         |
| gemma   | 2024       |                | by Google DeepMind                   |
| mistral | 2024       |                | by Mistral AI                        |
| LLAMA2  | 2023       | 4K             | by Meta                              |
| GPT-3   | 2020       | 2K             | 175B                                 |
| GPT-2   | 2019       |                | 1.5B                                 |
| GPT-1   | 2018       |                | 0.12B                                |
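
The open-weight checkpoints above can be fetched locally before running them. A minimal sketch, assuming the `huggingface_hub` CLI is installed and using `Qwen/Qwen2-7B-Instruct` as an example repo ID (any model from the table can be substituted):

```sh
# Install the Hugging Face Hub CLI (assumption: pip is available).
pip install -U "huggingface_hub[cli]"

# Download an open-weight checkpoint to a local directory.
# The repo ID is an example; replace it with the model you want from the table.
huggingface-cli download Qwen/Qwen2-7B-Instruct --local-dir ./qwen2-7b-instruct
```

Gated repos (for example the Meta LLAMA3 checkpoints) may additionally require `huggingface-cli login` and accepting the license on the model page first.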

Proprietary Models

| model         | date | notes                |
|---------------|------|----------------------|
| GPT-3.5-turbo | 2022 | 4K context           |
| GPT-3.5-16k   | 2022 | 16K context          |
| GPT-3.5       | 2022 | ChatGPT, 570 GB text |
| GPT-4         | 2023 |                      |
| GPT-4-32k     | 2023 |                      |
| GPT-4V        | 2023 |                      |
| GPT-4o        | 2024 |                      |
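
The proprietary models in this table are reached over an API rather than downloaded. A minimal sketch of a chat request against the OpenAI chat completions endpoint, assuming a valid key in the `OPENAI_API_KEY` environment variable and using `gpt-4o` as the example model name:

```sh
# Send a single chat message to a proprietary model (assumption: OPENAI_API_KEY is set).
curl https://api.openai.com/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -d '{
        "model": "gpt-4o",
        "messages": [{"role": "user", "content": "Say hello in one sentence."}]
      }'
```

The same request shape works for the other chat models listed above by swapping the `model` field.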

```sh
# Check which SIMD extensions the CPU supports (x86_64, Linux):
grep avx /proc/cpuinfo --color
# Feature flags reported at startup by a local inference runtime (example output):
# AVX = 1 | AVX2 = 0 | AVX512 = 0 | FMA = 0 | NEON = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 0 | SSE3 = 1 | VSX = 0 |
```
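
A small follow-up sketch: turn the flag check into a yes/no test, useful before enabling instruction-set-specific build options. `avx2` is used as the example flag; any flag from the report above can be tested the same way:

```sh
# grep -q sets its exit status by whether the flag appears in /proc/cpuinfo.
if grep -q avx2 /proc/cpuinfo; then
  echo "AVX2 available"
else
  echo "AVX2 not available"
fi
```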
