Portdex

Models

bart-large-cnn

Beta
Summarization • facebook

BART is a transformer encoder-decoder (seq2seq) model with a bidirectional (BERT-like) encoder and an autoregressive (GPT-like) decoder.

bge-base-en-v1.5

Text Embeddings • baai

BAAI general embedding (Base) model that transforms any given text into a 768-dimensional vector

bge-large-en-v1.5

Text Embeddings • baai

BAAI general embedding (Large) model that transforms any given text into a 1024-dimensional vector

bge-m3

Text Embeddings • baai

Multi-Functionality, Multi-Linguality, and Multi-Granularity embeddings model.

bge-small-en-v1.5

Text Embeddings • baai

BAAI general embedding (Small) model that transforms any given text into a 384-dimensional vector
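
The BGE embedding models above all map text to a fixed-dimension vector (384, 768, or 1024 dimensions depending on the variant), which is then compared by cosine similarity for retrieval. A minimal sketch of that comparison step, using short stub vectors in place of real model output (the vectors and names here are illustrative, not produced by any of these models):

```python
from math import sqrt

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sqrt(sum(x * x for x in a))
    norm_b = sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Stub 4-dimensional vectors stand in for the model's 384/768/1024-dim output.
query_vec = [0.1, 0.3, 0.5, 0.1]
doc_vecs = {
    "doc_a": [0.1, 0.3, 0.5, 0.1],  # same direction as the query
    "doc_b": [0.5, 0.1, 0.1, 0.3],
}

# Rank documents by similarity to the query embedding.
ranked = sorted(doc_vecs, key=lambda d: cosine_similarity(query_vec, doc_vecs[d]),
                reverse=True)
print(ranked)  # doc_a ranks first
```

In a real pipeline, the document vectors would be precomputed once and stored in a vector index; only the query is embedded at request time.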

bge-reranker-base

Text Classification • baai

Unlike an embedding model, a reranker takes a question and a document together as input and directly outputs a similarity score instead of an embedding. You can use it to re-rank retrieved documents.
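
The retrieve-then-rerank flow that description implies can be sketched as below. Here `rerank_score` is a hypothetical placeholder (a simple word-overlap heuristic) standing in for the model call; the point is only the shape of the pipeline, where the (query, document) pair is scored jointly rather than compared via separate embeddings:

```python
def rerank_score(query, document):
    """Placeholder relevance score (word overlap), NOT the actual reranker model."""
    q_words = set(query.lower().split())
    d_words = set(document.lower().split())
    return len(q_words & d_words) / max(len(q_words), 1)

def rerank(query, candidates):
    """Re-order retrieved candidates by the direct (query, document) score."""
    return sorted(candidates, key=lambda doc: rerank_score(query, doc),
                  reverse=True)

query = "how do rerankers work"
retrieved = [  # candidates from a first-stage embedding retriever
    "BART is a summarization model",
    "rerankers score how well a document answers a query",
]
print(rerank(query, retrieved)[0])
```

A typical setup retrieves a broad candidate set cheaply with an embedding model, then applies the (more expensive) reranker only to that short list.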

deepseek-coder-6.7b-base-awq

Beta
Text Generation • thebloke

Deepseek Coder is composed of a series of code language models, each trained from scratch on 2T tokens, with a composition of 87% code and 13% natural language.

deepseek-coder-6.7b-instruct-awq

Beta
Text Generation • thebloke

Deepseek Coder is composed of a series of code language models, each trained from scratch on 2T tokens, with a composition of 87% code and 13% natural language.

deepseek-math-7b-instruct

Beta
Text Generation • deepseek-ai

DeepSeekMath-Instruct 7B is a mathematically instructed tuning model derived from DeepSeekMath-Base 7B. DeepSeekMath is initialized with DeepSeek-Coder weights.

deepseek-r1-distill-qwen-32b

Text Generation • deepseek-ai

DeepSeek-R1-Distill-Qwen-32B is a model distilled from DeepSeek-R1 based on Qwen2.5. It outperforms OpenAI-o1-mini across various benchmarks.

detr-resnet-50

Beta
Object Detection • facebook

Detection Transformer (DETR) model trained end-to-end on COCO 2017 object detection (118k annotated images).

distilbert-sst-2-int8

Text Classification • HuggingFace

Distilled BERT model that was fine-tuned on SST-2 for sentiment classification.

discolm-german-7b-v1-awq

Beta
Text Generation • thebloke

DiscoLM German 7b is a Mistral-based large language model with a focus on German-language applications. AWQ is an efficient, accurate quantization method.

dreamshaper-8-lcm

Beta
Text-to-Image • lykon

Stable Diffusion model that has been fine-tuned to be better at photorealism without sacrificing range.