Dataset schema:

| Column | Type | Range |
| --- | --- | --- |
| modelId | string | length 5 to 139 |
| author | string | length 2 to 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 to 2025-08-05 06:28:10 |
| downloads | int64 | 0 to 223M |
| likes | int64 | 0 to 11.7k |
| library_name | string | 552 classes |
| tags | list | length 1 to 4.05k |
| pipeline_tag | string | 55 classes |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 to 2025-08-05 06:26:41 |
| card | string | length 11 to 1.01M |
MaziyarPanahi/uncertain_llama3_8b-GGUF | MaziyarPanahi | 2024-11-01T22:45:12Z | 28 | 1 | null |
[
"gguf",
"mistral",
"quantized",
"2-bit",
"3-bit",
"4-bit",
"5-bit",
"6-bit",
"8-bit",
"GGUF",
"text-generation",
"base_model:rhyang2021/uncertain_llama3_8b",
"base_model:quantized:rhyang2021/uncertain_llama3_8b",
"region:us",
"conversational"
] | text-generation | 2024-11-01T22:22:03Z |
---
tags:
- quantized
- 2-bit
- 3-bit
- 4-bit
- 5-bit
- 6-bit
- 8-bit
- GGUF
- text-generation
model_name: uncertain_llama3_8b-GGUF
base_model: rhyang2021/uncertain_llama3_8b
inference: false
model_creator: rhyang2021
pipeline_tag: text-generation
quantized_by: MaziyarPanahi
---
# [MaziyarPanahi/uncertain_llama3_8b-GGUF](https://huggingface.co/MaziyarPanahi/uncertain_llama3_8b-GGUF)
- Model creator: [rhyang2021](https://huggingface.co/rhyang2021)
- Original model: [rhyang2021/uncertain_llama3_8b](https://huggingface.co/rhyang2021/uncertain_llama3_8b)
## Description
[MaziyarPanahi/uncertain_llama3_8b-GGUF](https://huggingface.co/MaziyarPanahi/uncertain_llama3_8b-GGUF) contains GGUF format model files for [rhyang2021/uncertain_llama3_8b](https://huggingface.co/rhyang2021/uncertain_llama3_8b).
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for storytelling.
* [GPT4All](https://gpt4all.io/index.html), a free and open-source locally running GUI, supporting Windows, Linux, and macOS with full GPU accel.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy-to-use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.
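As a quick sanity check that a downloaded file really is GGUF, you can inspect its header. The sketch below relies only on the documented magic bytes and version field of the GGUF format, not on any particular library:

```python
import struct

def read_gguf_version(path):
    # Every GGUF file starts with the 4-byte magic "GGUF",
    # followed by a little-endian uint32 format version.
    with open(path, "rb") as f:
        magic = f.read(4)
        if magic != b"GGUF":
            raise ValueError(f"not a GGUF file (magic was {magic!r})")
        (version,) = struct.unpack("<I", f.read(4))
    return version
```

This is useful for catching truncated or mislabeled downloads before handing the file to a loader.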
## Special thanks
🙏 Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the whole team working on [llama.cpp](https://github.com/ggerganov/llama.cpp/) for making all of this possible.
|
mrcuddle/Lumimaid-v0.2-12B-Pixtral | mrcuddle | 2024-11-01T22:44:32Z | 16 | 0 | transformers |
[
"transformers",
"safetensors",
"llava",
"image-text-to-text",
"conversational",
"en",
"arxiv:1910.09700",
"base_model:NeverSleep/Lumimaid-v0.2-12B",
"base_model:finetune:NeverSleep/Lumimaid-v0.2-12B",
"endpoints_compatible",
"region:us"
] | image-text-to-text | 2024-10-27T10:29:07Z |
---
language:
- en
base_model:
- mistral-community/pixtral-12b
- NeverSleep/Lumimaid-v0.2-12B
pipeline_tag: image-text-to-text
library_name: transformers
---
# Model Card for Model ID
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Xu-Ouyang/pythia-12b-deduped-int3-step256-GPTQ-wikitext2 | Xu-Ouyang | 2024-11-01T22:32:19Z | 75 | 0 | transformers |
[
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"3-bit",
"gptq",
"region:us"
] | text-generation | 2024-11-01T22:22:08Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Paradigmo/aashwin-lora-2 | Paradigmo | 2024-11-01T22:27:27Z | 5 | 1 | diffusers |
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2024-11-01T21:51:06Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: Aashwin_Shrivastava
---
# Aashwin Lora 2
<Gallery />
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `Aashwin_Shrivastava` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('Paradigmo/aashwin-lora-2', weight_name='lora.safetensors')
image = pipeline('your prompt').images[0]
```
For more details, including weighting, merging, and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters).
|
Xu-Ouyang/pythia-1.4b-deduped-int3-step1000-GPTQ-wikitext2 | Xu-Ouyang | 2024-11-01T22:20:12Z | 75 | 0 | transformers |
[
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"3-bit",
"gptq",
"region:us"
] | text-generation | 2024-11-01T22:19:47Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
RichardErkhov/ssmits_-_Falcon2-5.5B-Dutch-gguf | RichardErkhov | 2024-11-01T22:09:42Z | 34 | 0 | null |
[
"gguf",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-11-01T20:34:31Z |
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Falcon2-5.5B-Dutch - GGUF
- Model creator: https://huggingface.co/ssmits/
- Original model: https://huggingface.co/ssmits/Falcon2-5.5B-Dutch/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [Falcon2-5.5B-Dutch.Q2_K.gguf](https://huggingface.co/RichardErkhov/ssmits_-_Falcon2-5.5B-Dutch-gguf/blob/main/Falcon2-5.5B-Dutch.Q2_K.gguf) | Q2_K | 2.03GB |
| [Falcon2-5.5B-Dutch.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/ssmits_-_Falcon2-5.5B-Dutch-gguf/blob/main/Falcon2-5.5B-Dutch.Q3_K_S.gguf) | Q3_K_S | 2.35GB |
| [Falcon2-5.5B-Dutch.Q3_K.gguf](https://huggingface.co/RichardErkhov/ssmits_-_Falcon2-5.5B-Dutch-gguf/blob/main/Falcon2-5.5B-Dutch.Q3_K.gguf) | Q3_K | 2.56GB |
| [Falcon2-5.5B-Dutch.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/ssmits_-_Falcon2-5.5B-Dutch-gguf/blob/main/Falcon2-5.5B-Dutch.Q3_K_M.gguf) | Q3_K_M | 2.56GB |
| [Falcon2-5.5B-Dutch.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/ssmits_-_Falcon2-5.5B-Dutch-gguf/blob/main/Falcon2-5.5B-Dutch.Q3_K_L.gguf) | Q3_K_L | 2.72GB |
| [Falcon2-5.5B-Dutch.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/ssmits_-_Falcon2-5.5B-Dutch-gguf/blob/main/Falcon2-5.5B-Dutch.IQ4_XS.gguf) | IQ4_XS | 2.87GB |
| [Falcon2-5.5B-Dutch.Q4_0.gguf](https://huggingface.co/RichardErkhov/ssmits_-_Falcon2-5.5B-Dutch-gguf/blob/main/Falcon2-5.5B-Dutch.Q4_0.gguf) | Q4_0 | 2.99GB |
| [Falcon2-5.5B-Dutch.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/ssmits_-_Falcon2-5.5B-Dutch-gguf/blob/main/Falcon2-5.5B-Dutch.IQ4_NL.gguf) | IQ4_NL | 3.01GB |
| [Falcon2-5.5B-Dutch.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/ssmits_-_Falcon2-5.5B-Dutch-gguf/blob/main/Falcon2-5.5B-Dutch.Q4_K_S.gguf) | Q4_K_S | 2.99GB |
| [Falcon2-5.5B-Dutch.Q4_K.gguf](https://huggingface.co/RichardErkhov/ssmits_-_Falcon2-5.5B-Dutch-gguf/blob/main/Falcon2-5.5B-Dutch.Q4_K.gguf) | Q4_K | 3.19GB |
| [Falcon2-5.5B-Dutch.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/ssmits_-_Falcon2-5.5B-Dutch-gguf/blob/main/Falcon2-5.5B-Dutch.Q4_K_M.gguf) | Q4_K_M | 3.19GB |
| [Falcon2-5.5B-Dutch.Q4_1.gguf](https://huggingface.co/RichardErkhov/ssmits_-_Falcon2-5.5B-Dutch-gguf/blob/main/Falcon2-5.5B-Dutch.Q4_1.gguf) | Q4_1 | 3.29GB |
| [Falcon2-5.5B-Dutch.Q5_0.gguf](https://huggingface.co/RichardErkhov/ssmits_-_Falcon2-5.5B-Dutch-gguf/blob/main/Falcon2-5.5B-Dutch.Q5_0.gguf) | Q5_0 | 3.6GB |
| [Falcon2-5.5B-Dutch.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/ssmits_-_Falcon2-5.5B-Dutch-gguf/blob/main/Falcon2-5.5B-Dutch.Q5_K_S.gguf) | Q5_K_S | 3.6GB |
| [Falcon2-5.5B-Dutch.Q5_K.gguf](https://huggingface.co/RichardErkhov/ssmits_-_Falcon2-5.5B-Dutch-gguf/blob/main/Falcon2-5.5B-Dutch.Q5_K.gguf) | Q5_K | 3.8GB |
| [Falcon2-5.5B-Dutch.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/ssmits_-_Falcon2-5.5B-Dutch-gguf/blob/main/Falcon2-5.5B-Dutch.Q5_K_M.gguf) | Q5_K_M | 3.8GB |
| [Falcon2-5.5B-Dutch.Q5_1.gguf](https://huggingface.co/RichardErkhov/ssmits_-_Falcon2-5.5B-Dutch-gguf/blob/main/Falcon2-5.5B-Dutch.Q5_1.gguf) | Q5_1 | 3.9GB |
| [Falcon2-5.5B-Dutch.Q6_K.gguf](https://huggingface.co/RichardErkhov/ssmits_-_Falcon2-5.5B-Dutch-gguf/blob/main/Falcon2-5.5B-Dutch.Q6_K.gguf) | Q6_K | 4.24GB |
| [Falcon2-5.5B-Dutch.Q8_0.gguf](https://huggingface.co/RichardErkhov/ssmits_-_Falcon2-5.5B-Dutch-gguf/blob/main/Falcon2-5.5B-Dutch.Q8_0.gguf) | Q8_0 | 5.41GB |
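Quality generally tracks file size across these quants, so a simple rule of thumb is to pick the largest file that fits your memory budget. The helper below is an illustrative sketch (not part of any library), with a subset of the sizes copied from the table above:

```python
# Sizes in GB, copied from the quant table above (a subset, for illustration).
QUANT_SIZES_GB = {
    "Q2_K": 2.03,
    "Q3_K_M": 2.56,
    "Q4_K_M": 3.19,
    "Q5_K_M": 3.80,
    "Q6_K": 4.24,
    "Q8_0": 5.41,
}

def best_quant(budget_gb):
    """Return the largest (typically highest-quality) quant that fits the budget."""
    fitting = {name: size for name, size in QUANT_SIZES_GB.items() if size <= budget_gb}
    if not fitting:
        return None
    return max(fitting, key=fitting.get)

print(best_quant(4.0))  # prints Q5_K_M
```

Remember to leave headroom beyond the file size itself for the KV cache and runtime overhead.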
Original model description:
---
base_model:
- tiiuae/falcon-11B
library_name: transformers
tags:
- mergekit
- merge
- lazymergekit
license: apache-2.0
language:
- nl
---
## Why prune?
Even though [Falcon-11B](https://huggingface.co/tiiuae/falcon-11B) is trained on 5T tokens, it is still undertrained, as can be seen in this graph:
![](https://cdn-uploads.huggingface.co/production/uploads/660c0a02cf274b3ab77dd6b7/QeaL9bOrPskustzFpjMUP.png)
This is why the choice was made to prune 50% of the layers.
Note that \~1B tokens of continued pre-training (\~1M rows of 1k tokens) is still required to restore the perplexity of this model in the desired language.
I'm planning to do that for certain languages, depending on how much compute is available.
# sliced
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the passthrough merge method.
### Models Merged
The following models were included in the merge:
* [tiiuae/falcon-11B](https://huggingface.co/tiiuae/falcon-11B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
- sources:
- model: tiiuae/falcon-11B
layer_range: [0, 25]
- sources:
- model: tiiuae/falcon-11B
layer_range: [56, 59]
merge_method: passthrough
dtype: bfloat16
```
[PruneMe](https://github.com/arcee-ai/PruneMe) was applied to the wikimedia/wikipedia Dutch (nl) subset, investigating layer similarity across 2000 samples. The layer ranges for pruning were chosen from this analysis to maintain performance while reducing model size.

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import transformers
import torch

model = "ssmits/Falcon2-5.5B-Dutch"

tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    torch_dtype=torch.bfloat16,
)
sequences = pipeline(
    "Can you explain the concepts of Quantum Computing?",
    max_length=200,
    do_sample=True,
    top_k=10,
    num_return_sequences=1,
    eos_token_id=tokenizer.eos_token_id,
)
for seq in sequences:
    print(f"Result: {seq['generated_text']}")
```
💥 **Falcon LLMs require PyTorch 2.0 for use with `transformers`!**
For fast inference with Falcon, check out [Text Generation Inference](https://github.com/huggingface/text-generation-inference)! Read more in this [blog post](https://huggingface.co/blog/falcon).
## Direct Use
Research on large language models; as a foundation for further specialization and finetuning for specific usecases (e.g., summarization, text generation, chatbot, etc.)
## Out-of-Scope Use
Production use without adequate assessment of risks and mitigation; any use cases which may be considered irresponsible or harmful.
## Bias, Risks, and Limitations
Falcon2-5.5B is trained mostly on English data, but also on German, Spanish, French, Italian, Portuguese, Polish, Dutch, Romanian, Czech, and Swedish. It will not generalize appropriately to other languages. Furthermore, as it is trained on a large-scale corpus representative of the web, it will carry the stereotypes and biases commonly encountered online.
## Recommendations
We recommend that users of Falcon2-5.5B consider finetuning it for their specific set of tasks, and that guardrails and appropriate precautions be taken for any production use.
|
RichardErkhov/ssmits_-_Falcon2-5.5B-German-gguf | RichardErkhov | 2024-11-01T22:08:41Z | 32 | 0 | null |
[
"gguf",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-11-01T20:32:52Z |
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Falcon2-5.5B-German - GGUF
- Model creator: https://huggingface.co/ssmits/
- Original model: https://huggingface.co/ssmits/Falcon2-5.5B-German/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [Falcon2-5.5B-German.Q2_K.gguf](https://huggingface.co/RichardErkhov/ssmits_-_Falcon2-5.5B-German-gguf/blob/main/Falcon2-5.5B-German.Q2_K.gguf) | Q2_K | 2.03GB |
| [Falcon2-5.5B-German.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/ssmits_-_Falcon2-5.5B-German-gguf/blob/main/Falcon2-5.5B-German.Q3_K_S.gguf) | Q3_K_S | 2.35GB |
| [Falcon2-5.5B-German.Q3_K.gguf](https://huggingface.co/RichardErkhov/ssmits_-_Falcon2-5.5B-German-gguf/blob/main/Falcon2-5.5B-German.Q3_K.gguf) | Q3_K | 2.56GB |
| [Falcon2-5.5B-German.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/ssmits_-_Falcon2-5.5B-German-gguf/blob/main/Falcon2-5.5B-German.Q3_K_M.gguf) | Q3_K_M | 2.56GB |
| [Falcon2-5.5B-German.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/ssmits_-_Falcon2-5.5B-German-gguf/blob/main/Falcon2-5.5B-German.Q3_K_L.gguf) | Q3_K_L | 2.72GB |
| [Falcon2-5.5B-German.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/ssmits_-_Falcon2-5.5B-German-gguf/blob/main/Falcon2-5.5B-German.IQ4_XS.gguf) | IQ4_XS | 2.87GB |
| [Falcon2-5.5B-German.Q4_0.gguf](https://huggingface.co/RichardErkhov/ssmits_-_Falcon2-5.5B-German-gguf/blob/main/Falcon2-5.5B-German.Q4_0.gguf) | Q4_0 | 2.99GB |
| [Falcon2-5.5B-German.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/ssmits_-_Falcon2-5.5B-German-gguf/blob/main/Falcon2-5.5B-German.IQ4_NL.gguf) | IQ4_NL | 3.01GB |
| [Falcon2-5.5B-German.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/ssmits_-_Falcon2-5.5B-German-gguf/blob/main/Falcon2-5.5B-German.Q4_K_S.gguf) | Q4_K_S | 2.99GB |
| [Falcon2-5.5B-German.Q4_K.gguf](https://huggingface.co/RichardErkhov/ssmits_-_Falcon2-5.5B-German-gguf/blob/main/Falcon2-5.5B-German.Q4_K.gguf) | Q4_K | 3.19GB |
| [Falcon2-5.5B-German.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/ssmits_-_Falcon2-5.5B-German-gguf/blob/main/Falcon2-5.5B-German.Q4_K_M.gguf) | Q4_K_M | 3.19GB |
| [Falcon2-5.5B-German.Q4_1.gguf](https://huggingface.co/RichardErkhov/ssmits_-_Falcon2-5.5B-German-gguf/blob/main/Falcon2-5.5B-German.Q4_1.gguf) | Q4_1 | 3.29GB |
| [Falcon2-5.5B-German.Q5_0.gguf](https://huggingface.co/RichardErkhov/ssmits_-_Falcon2-5.5B-German-gguf/blob/main/Falcon2-5.5B-German.Q5_0.gguf) | Q5_0 | 3.6GB |
| [Falcon2-5.5B-German.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/ssmits_-_Falcon2-5.5B-German-gguf/blob/main/Falcon2-5.5B-German.Q5_K_S.gguf) | Q5_K_S | 3.6GB |
| [Falcon2-5.5B-German.Q5_K.gguf](https://huggingface.co/RichardErkhov/ssmits_-_Falcon2-5.5B-German-gguf/blob/main/Falcon2-5.5B-German.Q5_K.gguf) | Q5_K | 3.8GB |
| [Falcon2-5.5B-German.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/ssmits_-_Falcon2-5.5B-German-gguf/blob/main/Falcon2-5.5B-German.Q5_K_M.gguf) | Q5_K_M | 3.8GB |
| [Falcon2-5.5B-German.Q5_1.gguf](https://huggingface.co/RichardErkhov/ssmits_-_Falcon2-5.5B-German-gguf/blob/main/Falcon2-5.5B-German.Q5_1.gguf) | Q5_1 | 3.9GB |
| [Falcon2-5.5B-German.Q6_K.gguf](https://huggingface.co/RichardErkhov/ssmits_-_Falcon2-5.5B-German-gguf/blob/main/Falcon2-5.5B-German.Q6_K.gguf) | Q6_K | 4.24GB |
| [Falcon2-5.5B-German.Q8_0.gguf](https://huggingface.co/RichardErkhov/ssmits_-_Falcon2-5.5B-German-gguf/blob/main/Falcon2-5.5B-German.Q8_0.gguf) | Q8_0 | 5.41GB |
Original model description:
---
base_model:
- tiiuae/falcon-11B
library_name: transformers
tags:
- mergekit
- merge
- lazymergekit
license: apache-2.0
language:
- de
---
## Why prune?
Even though [Falcon-11B](https://huggingface.co/tiiuae/falcon-11B) is trained on 5T tokens, it is still undertrained, as can be seen by this graph:

This is why the choice was made to prune roughly 50% of the layers.
Note that \~1B tokens of continued pre-training (\~1M rows of 1k tokens each) is still required to restore the perplexity of this model in the desired language.
I'm planning on doing that for certain languages, depending on how much compute will be available.
# sliced
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the passthrough merge method.
### Models Merged
The following models were included in the merge:
* [tiiuae/falcon-11B](https://huggingface.co/tiiuae/falcon-11B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
- sources:
- model: tiiuae/falcon-11B
layer_range: [0, 24]
- sources:
- model: tiiuae/falcon-11B
layer_range: [55, 59]
merge_method: passthrough
dtype: bfloat16
```
[PruneMe](https://github.com/arcee-ai/PruneMe) has been utilized using the wikimedia/wikipedia German (de) subset by investigating layer similarity with 2000 samples. The layer ranges for pruning were determined based on this analysis to maintain performance while reducing model size.

```python
from transformers import AutoTokenizer, pipeline
import torch

model = "ssmits/Falcon2-5.5B-German"
tokenizer = AutoTokenizer.from_pretrained(model)

# bfloat16 halves memory use compared to fp32
pipe = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    torch_dtype=torch.bfloat16,
)

sequences = pipe(
    "Can you explain the concepts of Quantum Computing?",
    max_length=200,
    do_sample=True,
    top_k=10,
    num_return_sequences=1,
    eos_token_id=tokenizer.eos_token_id,
)
for seq in sequences:
    print(f"Result: {seq['generated_text']}")
```
💥 **Falcon LLMs require PyTorch 2.0 for use with `transformers`!**
For fast inference with Falcon, check out [Text Generation Inference](https://github.com/huggingface/text-generation-inference)! Read more in this [blog post](https://huggingface.co/blog/falcon).
## Direct Use
Research on large language models; as a foundation for further specialization and fine-tuning for specific use cases (e.g., summarization, text generation, chatbots).
## Out-of-Scope Use
Production use without adequate assessment of risks and mitigation; any use cases which may be considered irresponsible or harmful.
## Bias, Risks, and Limitations
Falcon2-5.5B is trained mostly on English, but also German, Spanish, French, Italian, Portuguese, Polish, Dutch, Romanian, Czech, and Swedish. It will not generalize appropriately to other languages. Furthermore, as it is trained on large-scale corpora representative of the web, it will carry the stereotypes and biases commonly encountered online.
## Recommendations
We recommend that users of Falcon2-5.5B consider fine-tuning it for their specific tasks of interest, and that guardrails and appropriate precautions be taken for any production use.
|
Xu-Ouyang/pythia-1.4b-deduped-int4-step512-GPTQ-wikitext2
|
Xu-Ouyang
| 2024-11-01T22:07:01Z | 77 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"gptq",
"region:us"
] |
text-generation
| 2024-11-01T22:06:36Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
MaziyarPanahi/Heart_Stolen-8B-task-GGUF
|
MaziyarPanahi
| 2024-11-01T22:03:55Z | 56 | 0 | null |
[
"gguf",
"mistral",
"quantized",
"2-bit",
"3-bit",
"4-bit",
"5-bit",
"6-bit",
"8-bit",
"GGUF",
"text-generation",
"base_model:allknowingroger/Heart_Stolen-8B-task",
"base_model:quantized:allknowingroger/Heart_Stolen-8B-task",
"region:us",
"conversational"
] |
text-generation
| 2024-11-01T21:41:28Z |
---
tags:
- quantized
- 2-bit
- 3-bit
- 4-bit
- 5-bit
- 6-bit
- 8-bit
- GGUF
- text-generation
- text-generation
model_name: Heart_Stolen-8B-task-GGUF
base_model: allknowingroger/Heart_Stolen-8B-task
inference: false
model_creator: allknowingroger
pipeline_tag: text-generation
quantized_by: MaziyarPanahi
---
# [MaziyarPanahi/Heart_Stolen-8B-task-GGUF](https://huggingface.co/MaziyarPanahi/Heart_Stolen-8B-task-GGUF)
- Model creator: [allknowingroger](https://huggingface.co/allknowingroger)
- Original model: [allknowingroger/Heart_Stolen-8B-task](https://huggingface.co/allknowingroger/Heart_Stolen-8B-task)
## Description
[MaziyarPanahi/Heart_Stolen-8B-task-GGUF](https://huggingface.co/MaziyarPanahi/Heart_Stolen-8B-task-GGUF) contains GGUF format model files for [allknowingroger/Heart_Stolen-8B-task](https://huggingface.co/allknowingroger/Heart_Stolen-8B-task).
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for storytelling.
* [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.
## Special thanks
🙏 Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the whole team working on [llama.cpp](https://github.com/ggerganov/llama.cpp/) for making all of this possible.
|
mradermacher/Puffin-Qwen2.5-TIES-GGUF
|
mradermacher
| 2024-11-01T22:02:06Z | 66 | 0 |
transformers
|
[
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:tuanpasg/Puffin-Qwen2.5-TIES",
"base_model:quantized:tuanpasg/Puffin-Qwen2.5-TIES",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-11-01T17:27:00Z |
---
base_model: tuanpasg/Puffin-Qwen2.5-TIES
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/tuanpasg/Puffin-Qwen2.5-TIES
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Puffin-Qwen2.5-TIES-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
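For older-style split files (e.g. `model.gguf.part1of2`), joining the parts byte-for-byte in order is sufficient; newer shards produced by llama.cpp's `gguf-split` tool should instead be merged with its `--merge` option. A minimal sketch of the byte-level join (file names here are hypothetical):

```python
from pathlib import Path

def join_parts(part_paths, out_path):
    """Concatenate split model files byte-for-byte, in the order given."""
    with open(out_path, "wb") as out:
        for part in part_paths:
            out.write(Path(part).read_bytes())
    return Path(out_path)

# join_parts(["model.gguf.part1of2", "model.gguf.part2of2"], "model.gguf")
```

This is equivalent to `cat model.gguf.part1of2 model.gguf.part2of2 > model.gguf` on the shell.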
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Puffin-Qwen2.5-TIES-GGUF/resolve/main/Puffin-Qwen2.5-TIES.Q2_K.gguf) | Q2_K | 0.9 | |
| [GGUF](https://huggingface.co/mradermacher/Puffin-Qwen2.5-TIES-GGUF/resolve/main/Puffin-Qwen2.5-TIES.Q3_K_S.gguf) | Q3_K_S | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/Puffin-Qwen2.5-TIES-GGUF/resolve/main/Puffin-Qwen2.5-TIES.Q3_K_M.gguf) | Q3_K_M | 1.0 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Puffin-Qwen2.5-TIES-GGUF/resolve/main/Puffin-Qwen2.5-TIES.Q3_K_L.gguf) | Q3_K_L | 1.1 | |
| [GGUF](https://huggingface.co/mradermacher/Puffin-Qwen2.5-TIES-GGUF/resolve/main/Puffin-Qwen2.5-TIES.IQ4_XS.gguf) | IQ4_XS | 1.1 | |
| [GGUF](https://huggingface.co/mradermacher/Puffin-Qwen2.5-TIES-GGUF/resolve/main/Puffin-Qwen2.5-TIES.Q4_K_S.gguf) | Q4_K_S | 1.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Puffin-Qwen2.5-TIES-GGUF/resolve/main/Puffin-Qwen2.5-TIES.Q4_K_M.gguf) | Q4_K_M | 1.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Puffin-Qwen2.5-TIES-GGUF/resolve/main/Puffin-Qwen2.5-TIES.Q5_K_S.gguf) | Q5_K_S | 1.4 | |
| [GGUF](https://huggingface.co/mradermacher/Puffin-Qwen2.5-TIES-GGUF/resolve/main/Puffin-Qwen2.5-TIES.Q5_K_M.gguf) | Q5_K_M | 1.4 | |
| [GGUF](https://huggingface.co/mradermacher/Puffin-Qwen2.5-TIES-GGUF/resolve/main/Puffin-Qwen2.5-TIES.Q6_K.gguf) | Q6_K | 1.6 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Puffin-Qwen2.5-TIES-GGUF/resolve/main/Puffin-Qwen2.5-TIES.Q8_0.gguf) | Q8_0 | 2.0 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Puffin-Qwen2.5-TIES-GGUF/resolve/main/Puffin-Qwen2.5-TIES.f16.gguf) | f16 | 3.7 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
sallyakti/learn_hf_food_or_not_food_text_classifier-distilbert-base-uncased
|
sallyakti
| 2024-11-01T21:55:23Z | 106 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-10-31T19:12:14Z |
---
library_name: transformers
license: apache-2.0
base_model: distilbert/distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: learn_hf_food_or_not_food_text_classifier-distilbert-base-uncased
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# learn_hf_food_or_not_food_text_classifier-distilbert-base-uncased
This model is a fine-tuned version of [distilbert/distilbert-base-uncased](https://huggingface.co/distilbert/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0005
- Accuracy: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
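With a linear scheduler, no warmup, and 70 total optimizer steps (7 steps/epoch over 10 epochs, per the results table), the learning rate decays from 1e-4 to 0. A sketch of that schedule (an illustration, not the Trainer's exact implementation):

```python
def linear_lr(step, total_steps=70, base_lr=1e-4):
    """Linearly decay the learning rate to zero over training (no warmup)."""
    return base_lr * max(0.0, 1.0 - step / total_steps)

print(linear_lr(0))   # base learning rate at the start
print(linear_lr(70))  # 0.0 at the end of training
```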
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.4263 | 1.0 | 7 | 0.1215 | 1.0 |
| 0.0757 | 2.0 | 14 | 0.0092 | 1.0 |
| 0.0074 | 3.0 | 21 | 0.0025 | 1.0 |
| 0.0023 | 4.0 | 28 | 0.0013 | 1.0 |
| 0.0013 | 5.0 | 35 | 0.0008 | 1.0 |
| 0.001 | 6.0 | 42 | 0.0007 | 1.0 |
| 0.0008 | 7.0 | 49 | 0.0006 | 1.0 |
| 0.0007 | 8.0 | 56 | 0.0005 | 1.0 |
| 0.0006 | 9.0 | 63 | 0.0005 | 1.0 |
| 0.0006 | 10.0 | 70 | 0.0005 | 1.0 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.5.0+cu121
- Datasets 3.1.0
- Tokenizers 0.19.1
|
Axion004/dummy-model
|
Axion004
| 2024-11-01T21:51:49Z | 106 | 0 |
transformers
|
[
"transformers",
"safetensors",
"camembert",
"fill-mask",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2024-06-28T01:43:25Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
DeZoomer/AnyaTaylorJoy-FluxLora
|
DeZoomer
| 2024-11-01T21:51:33Z | 1,889 | 4 |
diffusers
|
[
"diffusers",
"text-to-image",
"flux",
"lora",
"stable-diffusion",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2024-11-01T21:49:47Z |
---
tags:
- text-to-image
- flux
- lora
- diffusers
- stable-diffusion
widget:
- text: '-'
output:
url: images/233152_-1_0_image_4_share_00001.webp
- text: '-'
output:
url: images/232002_-1_0_image_4_share_00001.webp
- text: '-'
output:
url: images/232002_-1_0_image_4_share_00003.webp
- text: '-'
output:
url: images/232003_-1_0_image_4_share_00001.webp
- text: '-'
output:
url: images/232003_-1_0_image_4_share_00002.webp
- text: '-'
output:
url: images/232003_-1_0_image_4_share_00003.webp
- text: '-'
output:
url: images/232004_-1_0_image_4_share_00001.webp
- text: '-'
output:
url: images/233151_-1_0_image_4_share_00002.webp
- text: '-'
output:
url: images/233151_-1_0_image_4_share_00003.webp
- text: '-'
output:
url: images/232002_-1_0_image_4_share_00002.webp
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: null
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
inference:
parameters:
width: 768
height: 1024
---
# Anya Taylor-Joy | Flux
<Gallery />
## Model description
Trained locally with 20 publicly accessible images using AI-Toolkit (Flux.1 Dev).
Use with LoRA strength between **0.8-1.2** and FluxGuidance between **3-4**. No keywords needed.
Example prompt (ComfyUI): *Portrait photo of a woman in a garden.*
**Want a custom/private LoRA?** Good news: commissions are open! Request yours here: [https://ko-fi.com/de_zoomer/commissions](https://ko-fi.com/de_zoomer/commissions).
## Background
I've been exploring in depth how to create LoRAs that stay 100% faithful to the original character. My focus is on quality, which is why my files tend to be heavier than others.
After creating more than 100 LoRAs for testing, using both Kohya and AI-Toolkit since day one, I've consistently stayed up to date with the latest releases, exchanging knowledge in their communities.
My expertise is mainly with characters, so I'm not as familiar with LoRAs for style or anime, although the process might not differ too much.
If you want your own custom LoRA, feel free to message me! Commissions are open: check out my Ko-fi link above.
Enjoy using my LoRAs and have fun!
## Download model
Weights for this model are available in Safetensors format.
[Download](/DeZoomer/AnyaTaylorJoy-FluxLora/tree/main) them in the Files & versions tab.
|
DeZoomer/AlexandraDaddario-FluxLora
|
DeZoomer
| 2024-11-01T21:48:14Z | 195 | 1 |
diffusers
|
[
"diffusers",
"text-to-image",
"flux",
"lora",
"stable-diffusion",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2024-11-01T21:44:23Z |
---
tags:
- text-to-image
- flux
- lora
- diffusers
- stable-diffusion
widget:
- text: '-'
output:
url: images/193143_-1_0_image_4_share_00002.webp
- text: '-'
output:
url: images/193143_-1_0_image_4_share_00003.webp
- text: '-'
output:
url: images/194215_-1_0_image_4_share_00002.webp
- text: '-'
output:
url: images/193117_-1_0_image_4_share_00001.webp
- text: '-'
output:
url: images/193143_-1_0_image_4_share_00001.webp
- text: '-'
output:
url: images/193144_-1_0_image_4_share_00001.webp
- text: '-'
output:
url: images/193144_-1_0_image_4_share_00002.webp
- text: '-'
output:
url: images/194214_-1_0_image_4_share_00001.webp
- text: '-'
output:
url: images/194214_-1_0_image_4_share_00003.webp
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: null
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
inference:
parameters:
width: 768
height: 1024
---
# Alexandra Daddario | Flux
<Gallery />
## Model description
Trained locally with 20 publicly accessible images using AI-Toolkit (Flux.1 Dev).
Use with LoRA strength between **0.8-1.2** and FluxGuidance between **3-4**. No keywords needed.
Example prompt (ComfyUI): *Portrait photo of a woman in a garden.*
**Want a custom/private LoRA?** Good news: commissions are open! Request yours here: [https://ko-fi.com/de_zoomer/commissions](https://ko-fi.com/de_zoomer/commissions).
## Background
I've been exploring in depth how to create LoRAs that stay 100% faithful to the original character. My focus is on quality, which is why my files tend to be heavier than others.
After creating more than 100 LoRAs for testing, using both Kohya and AI-Toolkit since day one, I've consistently stayed up to date with the latest releases, exchanging knowledge in their communities.
My expertise is mainly with characters, so I'm not as familiar with LoRAs for style or anime, although the process might not differ too much.
If you want your own custom LoRA, feel free to message me! Commissions are open: check out my Ko-fi link above.
Enjoy using my LoRAs and have fun!
## Download model
Weights for this model are available in Safetensors format.
[Download](/DeZoomer/AlexandraDaddario-FluxLora/tree/main) them in the Files & versions tab.
|
mradermacher/Puffin-Qwen2.5-CodeMath-1-GGUF
|
mradermacher
| 2024-11-01T21:47:08Z | 7 | 0 |
transformers
|
[
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:tuanpasg/Puffin-Qwen2.5-CodeMath-1",
"base_model:quantized:tuanpasg/Puffin-Qwen2.5-CodeMath-1",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-11-01T17:23:40Z |
---
base_model: tuanpasg/Puffin-Qwen2.5-CodeMath-1
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/tuanpasg/Puffin-Qwen2.5-CodeMath-1
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Puffin-Qwen2.5-CodeMath-1-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Puffin-Qwen2.5-CodeMath-1-GGUF/resolve/main/Puffin-Qwen2.5-CodeMath-1.Q2_K.gguf) | Q2_K | 0.9 | |
| [GGUF](https://huggingface.co/mradermacher/Puffin-Qwen2.5-CodeMath-1-GGUF/resolve/main/Puffin-Qwen2.5-CodeMath-1.Q3_K_S.gguf) | Q3_K_S | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/Puffin-Qwen2.5-CodeMath-1-GGUF/resolve/main/Puffin-Qwen2.5-CodeMath-1.Q3_K_M.gguf) | Q3_K_M | 1.0 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Puffin-Qwen2.5-CodeMath-1-GGUF/resolve/main/Puffin-Qwen2.5-CodeMath-1.Q3_K_L.gguf) | Q3_K_L | 1.1 | |
| [GGUF](https://huggingface.co/mradermacher/Puffin-Qwen2.5-CodeMath-1-GGUF/resolve/main/Puffin-Qwen2.5-CodeMath-1.IQ4_XS.gguf) | IQ4_XS | 1.1 | |
| [GGUF](https://huggingface.co/mradermacher/Puffin-Qwen2.5-CodeMath-1-GGUF/resolve/main/Puffin-Qwen2.5-CodeMath-1.Q4_K_S.gguf) | Q4_K_S | 1.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Puffin-Qwen2.5-CodeMath-1-GGUF/resolve/main/Puffin-Qwen2.5-CodeMath-1.Q4_K_M.gguf) | Q4_K_M | 1.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Puffin-Qwen2.5-CodeMath-1-GGUF/resolve/main/Puffin-Qwen2.5-CodeMath-1.Q5_K_S.gguf) | Q5_K_S | 1.4 | |
| [GGUF](https://huggingface.co/mradermacher/Puffin-Qwen2.5-CodeMath-1-GGUF/resolve/main/Puffin-Qwen2.5-CodeMath-1.Q5_K_M.gguf) | Q5_K_M | 1.4 | |
| [GGUF](https://huggingface.co/mradermacher/Puffin-Qwen2.5-CodeMath-1-GGUF/resolve/main/Puffin-Qwen2.5-CodeMath-1.Q6_K.gguf) | Q6_K | 1.6 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Puffin-Qwen2.5-CodeMath-1-GGUF/resolve/main/Puffin-Qwen2.5-CodeMath-1.Q8_0.gguf) | Q8_0 | 2.0 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Puffin-Qwen2.5-CodeMath-1-GGUF/resolve/main/Puffin-Qwen2.5-CodeMath-1.f16.gguf) | f16 | 3.7 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/Puffin-Qwen2.5-CodeMath-1-i1-GGUF
|
mradermacher
| 2024-11-01T21:47:07Z | 439 | 0 |
transformers
|
[
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:tuanpasg/Puffin-Qwen2.5-CodeMath-1",
"base_model:quantized:tuanpasg/Puffin-Qwen2.5-CodeMath-1",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2024-11-01T21:30:34Z |
---
base_model: tuanpasg/Puffin-Qwen2.5-CodeMath-1
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/tuanpasg/Puffin-Qwen2.5-CodeMath-1
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Puffin-Qwen2.5-CodeMath-1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
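For the older split style used in those READMEs, the parts are plain byte-level splits, so they can be joined with `cat` in order before loading (filenames below are placeholders for illustration; shards produced by llama.cpp's newer `gguf-split` tool should instead be merged with that tool):

```shell
# Demo with placeholder part files; real parts would be named e.g.
# model.gguf.split-aa, model.gguf.split-ab
printf 'first-half-' > model.gguf.split-aa
printf 'second-half' > model.gguf.split-ab
cat model.gguf.split-aa model.gguf.split-ab > model.gguf  # join in order
cat model.gguf
```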
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Puffin-Qwen2.5-CodeMath-1-i1-GGUF/resolve/main/Puffin-Qwen2.5-CodeMath-1.i1-IQ1_S.gguf) | i1-IQ1_S | 0.6 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Puffin-Qwen2.5-CodeMath-1-i1-GGUF/resolve/main/Puffin-Qwen2.5-CodeMath-1.i1-IQ1_M.gguf) | i1-IQ1_M | 0.6 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Puffin-Qwen2.5-CodeMath-1-i1-GGUF/resolve/main/Puffin-Qwen2.5-CodeMath-1.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 0.7 | |
| [GGUF](https://huggingface.co/mradermacher/Puffin-Qwen2.5-CodeMath-1-i1-GGUF/resolve/main/Puffin-Qwen2.5-CodeMath-1.i1-IQ2_XS.gguf) | i1-IQ2_XS | 0.7 | |
| [GGUF](https://huggingface.co/mradermacher/Puffin-Qwen2.5-CodeMath-1-i1-GGUF/resolve/main/Puffin-Qwen2.5-CodeMath-1.i1-IQ2_S.gguf) | i1-IQ2_S | 0.8 | |
| [GGUF](https://huggingface.co/mradermacher/Puffin-Qwen2.5-CodeMath-1-i1-GGUF/resolve/main/Puffin-Qwen2.5-CodeMath-1.i1-IQ2_M.gguf) | i1-IQ2_M | 0.8 | |
| [GGUF](https://huggingface.co/mradermacher/Puffin-Qwen2.5-CodeMath-1-i1-GGUF/resolve/main/Puffin-Qwen2.5-CodeMath-1.i1-Q2_K.gguf) | i1-Q2_K | 0.9 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Puffin-Qwen2.5-CodeMath-1-i1-GGUF/resolve/main/Puffin-Qwen2.5-CodeMath-1.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 0.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Puffin-Qwen2.5-CodeMath-1-i1-GGUF/resolve/main/Puffin-Qwen2.5-CodeMath-1.i1-IQ3_XS.gguf) | i1-IQ3_XS | 0.9 | |
| [GGUF](https://huggingface.co/mradermacher/Puffin-Qwen2.5-CodeMath-1-i1-GGUF/resolve/main/Puffin-Qwen2.5-CodeMath-1.i1-Q3_K_S.gguf) | i1-Q3_K_S | 1.0 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Puffin-Qwen2.5-CodeMath-1-i1-GGUF/resolve/main/Puffin-Qwen2.5-CodeMath-1.i1-IQ3_S.gguf) | i1-IQ3_S | 1.0 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Puffin-Qwen2.5-CodeMath-1-i1-GGUF/resolve/main/Puffin-Qwen2.5-CodeMath-1.i1-IQ3_M.gguf) | i1-IQ3_M | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/Puffin-Qwen2.5-CodeMath-1-i1-GGUF/resolve/main/Puffin-Qwen2.5-CodeMath-1.i1-Q3_K_M.gguf) | i1-Q3_K_M | 1.0 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Puffin-Qwen2.5-CodeMath-1-i1-GGUF/resolve/main/Puffin-Qwen2.5-CodeMath-1.i1-Q3_K_L.gguf) | i1-Q3_K_L | 1.1 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Puffin-Qwen2.5-CodeMath-1-i1-GGUF/resolve/main/Puffin-Qwen2.5-CodeMath-1.i1-IQ4_XS.gguf) | i1-IQ4_XS | 1.1 | |
| [GGUF](https://huggingface.co/mradermacher/Puffin-Qwen2.5-CodeMath-1-i1-GGUF/resolve/main/Puffin-Qwen2.5-CodeMath-1.i1-Q4_0_4_4.gguf) | i1-Q4_0_4_4 | 1.2 | fast on arm, low quality |
| [GGUF](https://huggingface.co/mradermacher/Puffin-Qwen2.5-CodeMath-1-i1-GGUF/resolve/main/Puffin-Qwen2.5-CodeMath-1.i1-Q4_0_4_8.gguf) | i1-Q4_0_4_8 | 1.2 | fast on arm+i8mm, low quality |
| [GGUF](https://huggingface.co/mradermacher/Puffin-Qwen2.5-CodeMath-1-i1-GGUF/resolve/main/Puffin-Qwen2.5-CodeMath-1.i1-Q4_0_8_8.gguf) | i1-Q4_0_8_8 | 1.2 | fast on arm+sve, low quality |
| [GGUF](https://huggingface.co/mradermacher/Puffin-Qwen2.5-CodeMath-1-i1-GGUF/resolve/main/Puffin-Qwen2.5-CodeMath-1.i1-Q4_0.gguf) | i1-Q4_0 | 1.2 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Puffin-Qwen2.5-CodeMath-1-i1-GGUF/resolve/main/Puffin-Qwen2.5-CodeMath-1.i1-Q4_K_S.gguf) | i1-Q4_K_S | 1.2 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Puffin-Qwen2.5-CodeMath-1-i1-GGUF/resolve/main/Puffin-Qwen2.5-CodeMath-1.i1-Q4_K_M.gguf) | i1-Q4_K_M | 1.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Puffin-Qwen2.5-CodeMath-1-i1-GGUF/resolve/main/Puffin-Qwen2.5-CodeMath-1.i1-Q5_K_S.gguf) | i1-Q5_K_S | 1.4 | |
| [GGUF](https://huggingface.co/mradermacher/Puffin-Qwen2.5-CodeMath-1-i1-GGUF/resolve/main/Puffin-Qwen2.5-CodeMath-1.i1-Q5_K_M.gguf) | i1-Q5_K_M | 1.4 | |
| [GGUF](https://huggingface.co/mradermacher/Puffin-Qwen2.5-CodeMath-1-i1-GGUF/resolve/main/Puffin-Qwen2.5-CodeMath-1.i1-Q6_K.gguf) | i1-Q6_K | 1.6 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
Xu-Ouyang/pythia-12b-deduped-int4-step20000-AWQ
|
Xu-Ouyang
| 2024-11-01T21:44:01Z | 75 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-11-01T21:42:10Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
DeZoomer/AnaDeArmas-FluxLora
|
DeZoomer
| 2024-11-01T21:42:59Z | 60 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"flux",
"lora",
"stable-diffusion",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2024-11-01T21:40:05Z |
---
tags:
- text-to-image
- flux
- lora
- diffusers
- stable-diffusion
widget:
- text: '-'
output:
url: images/210609_-1_0_image_4_share_00001.webp
- text: '-'
output:
url: images/210609_-1_0_image_4_share_00002.webp
- text: '-'
output:
url: images/210610_-1_0_image_4_share_00002.webp
- text: '-'
output:
url: images/210609_-1_0_image_4_share_00004.webp
- text: '-'
output:
url: images/210611_-1_0_image_4_share_00002.webp
- text: '-'
output:
url: images/210612_-1_0_image_4_share_00002.webp
- text: '-'
output:
url: images/210612_-1_0_image_4_share_00004.webp
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: null
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
inference:
parameters:
width: 768
height: 1024
---
# Ana de Armas | Flux
<Gallery />
## Model description
Trained locally with 20 publicly accessible images using AI-Toolkit (Flux.1 Dev).
Use with LoRA strength between **0.8-1.2** and FluxGuidance between **3-4**. No keywords needed.
Example prompt (ComfyUI): *Portrait photo of a woman in a garden.*
**Want a custom/private LoRA?** Good news: commissions are open! Request yours here: [https://ko-fi.com/de_zoomer/commissions](https://ko-fi.com/de_zoomer/commissions).
## Background
I've been deeply exploring how to create LoRAs that are 100% faithful to the original character. My focus is on quality, which is why my files tend to be heavier than others.
After creating more than 100 LoRAs for testing, using both Kohya and AI-Toolkit since day one, I've consistently stayed up to date with the latest releases, exchanging knowledge in their communities.
My expertise is mainly with characters, so I'm not as familiar with LoRAs for style or anime, although the process might not differ too much.
If you want your own custom LoRA, feel free to message me! Commissions are open: check out my Ko-fi link above.
Enjoy using my LoRAs and have fun!
## Download model
Weights for this model are available in Safetensors format.
[Download](/DeZoomer/AnaDeArmas-FluxLora/tree/main) them in the Files & versions tab.
|
DeZoomer/EmiliaClarke-FluxLora
|
DeZoomer
| 2024-11-01T21:31:29Z | 263 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"flux",
"lora",
"stable-diffusion",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2024-11-01T21:30:01Z |
---
tags:
- text-to-image
- flux
- lora
- diffusers
- stable-diffusion
widget:
- text: '-'
output:
url: images/204502_-1_0_image_4_share_00001.webp
- text: '-'
output:
url: images/204505_-1_0_image_4_share_00001.webp
- text: '-'
output:
url: images/204505_-1_0_image_4_share_00002.webp
- text: '-'
output:
url: images/204506_-1_0_image_4_share_00001.webp
- text: '-'
output:
url: images/204506_-1_0_image_4_share_00002.webp
- text: '-'
output:
url: images/204506_-1_0_image_4_share_00004.webp
- text: '-'
output:
url: images/204507_-1_0_image_4_share_00001.webp
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: null
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
inference:
parameters:
width: 768
height: 1024
---
# Emilia Clarke | Flux
<Gallery />
## Model description
Trained locally with 20 publicly accessible images using AI-Toolkit (Flux.1 Dev).
Use with LoRA strength between **0.8-1.2** and FluxGuidance between **3-4**. No keywords needed.
Example prompt (ComfyUI): *Portrait photo of a woman in a garden.*
**Want a custom/private LoRA?** Good news: commissions are open! Request yours here: [https://ko-fi.com/de_zoomer/commissions](https://ko-fi.com/de_zoomer/commissions).
## Background
I've been deeply exploring how to create LoRAs that are 100% faithful to the original character. My focus is on quality, which is why my files tend to be heavier than others.
After creating more than 100 LoRAs for testing, using both Kohya and AI-Toolkit since day one, I've consistently stayed up to date with the latest releases, exchanging knowledge in their communities.
My expertise is mainly with characters, so I'm not as familiar with LoRAs for style or anime, although the process might not differ too much.
If you want your own custom LoRA, feel free to message me! Commissions are open: check out my Ko-fi link above.
Enjoy using my LoRAs and have fun!
## Download model
Weights for this model are available in Safetensors format.
[Download](/DeZoomer/EmiliaClarke-FluxLora/tree/main) them in the Files & versions tab.
|
DeZoomer/MeganFox-FluxLora
|
DeZoomer
| 2024-11-01T21:28:29Z | 27 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"flux",
"lora",
"stable-diffusion",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2024-11-01T21:26:39Z |
---
tags:
- text-to-image
- flux
- lora
- diffusers
- stable-diffusion
widget:
- text: '-'
output:
url: images/223329_-1_0_image_4_share_00004.webp
- text: '-'
output:
url: images/223328_-1_0_image_4_share_00006.webp
- text: '-'
output:
url: images/223329_-1_0_image_4_share_00003.webp
- text: '-'
output:
url: images/223328_-1_0_image_4_share_00002.webp
- text: '-'
output:
url: images/223330_-1_0_image_4_share_00001.webp
- text: '-'
output:
url: images/223331_-1_0_image_4_share_00001.webp
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: null
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
inference:
parameters:
width: 768
height: 1024
---
# Megan Fox | Flux
<Gallery />
## Model description
Trained locally with 20 publicly accessible images using AI-Toolkit (Flux.1 Dev).
Use with LoRA strength between **0.8-1.2** and FluxGuidance between **3-4**. No keywords needed.
Example prompt (ComfyUI): *Portrait photo of a woman in a garden.*
**Want a custom/private LoRA?** Good news: commissions are open! Request yours here: [https://ko-fi.com/de_zoomer/commissions](https://ko-fi.com/de_zoomer/commissions).
## Background
I've been deeply exploring how to create LoRAs that are 100% faithful to the original character. My focus is on quality, which is why my files tend to be heavier than others.
After creating more than 100 LoRAs for testing, using both Kohya and AI-Toolkit since day one, I've consistently stayed up to date with the latest releases, exchanging knowledge in their communities.
My expertise is mainly with characters, so I'm not as familiar with LoRAs for style or anime, although the process might not differ too much.
If you want your own custom LoRA, feel free to message me! Commissions are open: check out my Ko-fi link above.
Enjoy using my LoRAs and have fun!
## Download model
Weights for this model are available in Safetensors format.
[Download](/DeZoomer/MeganFox-FluxLora/tree/main) them in the Files & versions tab.
|
DeZoomer/DuaLipa-FluxLora
|
DeZoomer
| 2024-11-01T21:24:31Z | 239 | 1 |
diffusers
|
[
"diffusers",
"text-to-image",
"flux",
"lora",
"stable-diffusion",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2024-11-01T21:22:58Z |
---
tags:
- text-to-image
- flux
- lora
- diffusers
- stable-diffusion
widget:
- text: '-'
output:
url: images/024255_-1_0_image_4_share_00004.webp
- text: '-'
output:
url: images/222141_-1_0_image_4_share_00001.webp
- text: '-'
output:
url: images/222142_-1_0_image_4_share_00001.webp
- text: '-'
output:
url: images/223017_-1_0_image_4_share_00001.webp
- text: '-'
output:
url: images/223253_-1_0_image_4_share_00001.webp
- text: '-'
output:
url: images/223459_-1_0_image_4_share_00001.webp
- text: '-'
output:
url: images/223903_-1_0_image_4_share_00001.webp
- text: '-'
output:
url: images/225144_-1_0_image_4_share_00001.webp
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: null
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
inference:
parameters:
width: 768
height: 1024
---
# Dua Lipa | Flux
<Gallery />
## Model description
Trained locally with 20 publicly accessible images using AI-Toolkit (Flux.1 Dev).
Use with LoRA strength between **0.8-1.2** and FluxGuidance between **3-4**. No keywords needed.
Example prompt (ComfyUI): *Portrait photo of a woman in a garden.*
**Want a custom/private LoRA?** Good news: commissions are open! Request yours here: [https://ko-fi.com/de_zoomer/commissions](https://ko-fi.com/de_zoomer/commissions).
## Background
I've been deeply exploring how to create LoRAs that are 100% faithful to the original character. My focus is on quality, which is why my files tend to be heavier than others.
After creating more than 100 LoRAs for testing, using both Kohya and AI-Toolkit since day one, I've consistently stayed up to date with the latest releases, exchanging knowledge in their communities.
My expertise is mainly with characters, so I'm not as familiar with LoRAs for style or anime, although the process might not differ too much.
If you want your own custom LoRA, feel free to message me! Commissions are open: check out my Ko-fi link above.
Enjoy using my LoRAs and have fun!
## Download model
Weights for this model are available in Safetensors format.
[Download](/DeZoomer/DuaLipa-FluxLora/tree/main) them in the Files & versions tab.
|
MaziyarPanahi/Llama-3.2-3B-Booval-GGUF
|
MaziyarPanahi
| 2024-11-01T21:23:51Z | 56 | 0 | null |
[
"gguf",
"mistral",
"quantized",
"2-bit",
"3-bit",
"4-bit",
"5-bit",
"6-bit",
"8-bit",
"GGUF",
"text-generation",
"base_model:bunnycore/Llama-3.2-3B-Booval",
"base_model:quantized:bunnycore/Llama-3.2-3B-Booval",
"region:us",
"conversational"
] |
text-generation
| 2024-11-01T21:14:30Z |
---
tags:
- quantized
- 2-bit
- 3-bit
- 4-bit
- 5-bit
- 6-bit
- 8-bit
- GGUF
- text-generation
- text-generation
model_name: Llama-3.2-3B-Booval-GGUF
base_model: bunnycore/Llama-3.2-3B-Booval
inference: false
model_creator: bunnycore
pipeline_tag: text-generation
quantized_by: MaziyarPanahi
---
# [MaziyarPanahi/Llama-3.2-3B-Booval-GGUF](https://huggingface.co/MaziyarPanahi/Llama-3.2-3B-Booval-GGUF)
- Model creator: [bunnycore](https://huggingface.co/bunnycore)
- Original model: [bunnycore/Llama-3.2-3B-Booval](https://huggingface.co/bunnycore/Llama-3.2-3B-Booval)
## Description
[MaziyarPanahi/Llama-3.2-3B-Booval-GGUF](https://huggingface.co/MaziyarPanahi/Llama-3.2-3B-Booval-GGUF) contains GGUF format model files for [bunnycore/Llama-3.2-3B-Booval](https://huggingface.co/bunnycore/Llama-3.2-3B-Booval).
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for storytelling.
* [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. Note: as of the time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.
## Special thanks
π Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the whole team working on [llama.cpp](https://github.com/ggerganov/llama.cpp/) for making all of this possible.
|
DeZoomer/CamilaQueiroz-FluxLora
|
DeZoomer
| 2024-11-01T21:21:14Z | 6 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"flux",
"lora",
"stable-diffusion",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2024-11-01T21:19:33Z |
---
tags:
- text-to-image
- flux
- lora
- diffusers
- stable-diffusion
widget:
- text: '-'
output:
url: images/214354_-1_0_image_4_share_00002.webp
- text: '-'
output:
url: images/214354_-1_0_image_4_share_00001.webp
- text: '-'
output:
url: images/214353_-1_0_image_4_share_00001.webp
- text: '-'
output:
url: images/214357_-1_0_image_4_share_00001.webp
- text: '-'
output:
url: images/214357_-1_0_image_4_share_00002.webp
- text: '-'
output:
url: images/214358_-1_0_image_4_share_00001.webp
- text: '-'
output:
url: images/214358_-1_0_image_4_share_00003.webp
- text: '-'
output:
url: images/214358_-1_0_image_4_share_00004.webp
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: null
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
inference:
parameters:
width: 768
height: 1024
---
# Camila Queiroz | Flux
<Gallery />
## Model description
Trained locally with 20 publicly accessible images using AI-Toolkit (Flux.1 Dev).
Use with LoRA strength between **0.8-1.2** and FluxGuidance between **3-4**. No keywords needed.
Example prompt (ComfyUI): *Portrait photo of a woman in a garden.*
**Want a custom/private LoRA?** Good news: commissions are open! Request yours here: [https://ko-fi.com/de_zoomer/commissions](https://ko-fi.com/de_zoomer/commissions).
## Background
I've been deeply exploring how to create LoRAs that are 100% faithful to the original character. My focus is on quality, which is why my files tend to be heavier than others.
After creating more than 100 LoRAs for testing, using both Kohya and AI-Toolkit since day one, I've consistently stayed up to date with the latest releases, exchanging knowledge in their communities.
My expertise is mainly with characters, so I'm not as familiar with LoRAs for style or anime, although the process might not differ too much.
If you want your own custom LoRA, feel free to message me! Commissions are open: check out my Ko-fi link above.
Enjoy using my LoRAs and have fun!
## Download model
Weights for this model are available in Safetensors format.
[Download](/DeZoomer/CamilaQueiroz-FluxLora/tree/main) them in the Files & versions tab.
|
DouglasBraga/swin-tiny-patch4-window7-224-swin-tiny-patch4-window7-224-finetuned-leukemia.v2.1
|
DouglasBraga
| 2024-11-01T21:19:36Z | 216 | 0 |
transformers
|
[
"transformers",
"safetensors",
"swin",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:microsoft/swin-tiny-patch4-window7-224",
"base_model:finetune:microsoft/swin-tiny-patch4-window7-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2024-10-30T23:58:39Z |
---
library_name: transformers
license: apache-2.0
base_model: microsoft/swin-tiny-patch4-window7-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: swin-tiny-patch4-window7-224-swin-tiny-patch4-window7-224-finetuned-leukemia.v2.1
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.954
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swin-tiny-patch4-window7-224-swin-tiny-patch4-window7-224-finetuned-leukemia.v2.1
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1379
- Accuracy: 0.954
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
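The listed hyperparameters are internally consistent; a quick sketch of how the derived values follow (the total step count is taken from the results table):

```python
# How the derived training quantities follow from the listed hyperparameters
train_batch_size = 32
gradient_accumulation_steps = 4
total_train_batch_size = train_batch_size * gradient_accumulation_steps
print(total_train_batch_size)  # matches the listed total of 128

total_steps = 2810   # final step reached after 10 epochs
warmup_ratio = 0.1
warmup_steps = int(total_steps * warmup_ratio)
print(warmup_steps)  # 281 steps of linear warmup before the decay phase
```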
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| 0.4215 | 0.9991 | 281 | 0.3880 | 0.8293 |
| 0.3137 | 1.9982 | 562 | 0.2898 | 0.8788 |
| 0.2631 | 2.9973 | 843 | 0.2382 | 0.907 |
| 0.2338 | 4.0 | 1125 | 0.4090 | 0.8575 |
| 0.1834 | 4.9991 | 1406 | 0.2477 | 0.8985 |
| 0.2065 | 5.9982 | 1687 | 0.1331 | 0.9513 |
| 0.1555 | 6.9973 | 1968 | 0.1304 | 0.9473 |
| 0.1521 | 8.0 | 2250 | 0.1837 | 0.9293 |
| 0.1512 | 8.9991 | 2531 | 0.1708 | 0.9405 |
| 0.119 | 9.9911 | 2810 | 0.1379 | 0.954 |
### Framework versions
- Transformers 4.45.2
- Pytorch 2.4.1+cu121
- Datasets 3.0.1
- Tokenizers 0.20.1
|
Tasneem10/Llama3.2-1B-instruct-fc
|
Tasneem10
| 2024-11-01T21:15:35Z | 142 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"trl",
"sft",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-11-01T07:51:18Z |
---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
shellzero/gemma2-2b-ft-law-data-tag-generation
|
shellzero
| 2024-11-01T21:10:56Z | 6 | 0 |
mlx
|
[
"mlx",
"safetensors",
"gemma2",
"legal",
"en",
"dataset:ymoslem/Law-StackExchange",
"base_model:google/gemma-2-2b",
"base_model:finetune:google/gemma-2-2b",
"license:mit",
"region:us"
] | null | 2024-10-29T20:50:46Z |
---
license: mit
datasets:
- ymoslem/Law-StackExchange
language:
- en
metrics:
- f1
base_model:
- google/gemma-2-2b
library_name: mlx
tags:
- legal
widget:
- text: |
<start_of_turn>user
## Instructions
You are a helpful AI assistant.
## User
How to make scrambled eggs?<end_of_turn>
<start_of_turn>model
---
# shellzero/gemma2-2b-ft-law-data-tag-generation
This model was converted to MLX format from [`google/gemma-7b-it`](https://huggingface.co/google/gemma-7b-it).
Refer to the [original model card](https://huggingface.co/google/gemma-7b-it) for more details on the model.
```zsh
pip install mlx-lm
```
The model was LoRA fine-tuned for 1500 steps using `mlx` on the [ymoslem/Law-StackExchange](https://huggingface.co/datasets/ymoslem/Law-StackExchange) dataset together with synthetic data generated by GPT-4o and GPT-3.5-Turbo, using the format below.
This fine-tune was one of our best runs on this data, achieving a high F1 score on our eval dataset (part of the Nvidia hackathon).
```python
def format_prompt(system_prompt: str, title: str, question: str) -> str:
    "Format the question to match the format of the dataset we fine-tuned on."
return """<bos><start_of_turn>user
## Instructions
{}
## User
TITLE:
{}
QUESTION:
{}<end_of_turn>
<start_of_turn>model
""".format(
system_prompt, title, question
)
```
Here's an example of the system_prompt from the dataset:
```text
Read the following title and question about a legal issue and assign the most appropriate tag to it. All tags must be in lowercase, ordered lexicographically and separated by commas.
```
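Putting the two pieces together, here is an illustrative, self-contained sketch of the final prompt string (the title and question below are hypothetical examples, not from the dataset):

```python
def format_prompt(system_prompt: str, title: str, question: str) -> str:
    "Format the question to match the format of the dataset we fine-tuned on."
    return """<bos><start_of_turn>user
## Instructions
{}
## User
TITLE:
{}
QUESTION:
{}<end_of_turn>
<start_of_turn>model
""".format(system_prompt, title, question)

system_prompt = (
    "Read the following title and question about a legal issue and assign "
    "the most appropriate tag to it. All tags must be in lowercase, ordered "
    "lexicographically and separated by commas."
)

# Hypothetical user input for illustration only.
prompt = format_prompt(
    system_prompt,
    "Is a verbal agreement binding?",
    "My landlord agreed verbally to reduce the rent. Is that enforceable?",
)

# The model expects exactly one user turn followed by an open model turn.
assert prompt.startswith("<bos><start_of_turn>user")
assert "<end_of_turn>" in prompt
```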
## Loading the model using `mlx_lm`
```python
from mlx_lm import generate, load
model, tokenizer = load("shellzero/gemma2-2b-ft-law-data-tag-generation")
response = generate(
model,
tokenizer,
    prompt=format_prompt(system_prompt, title, question),
verbose=True, # Set to True to see the prompt and response
temp=0.0,
max_tokens=32,
)
```
|
paritoshksu2024/customMedicine-llm
|
paritoshksu2024
| 2024-11-01T21:08:48Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] |
text-generation
| 2024-11-01T20:57:33Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
MaziyarPanahi/MFANN-Llama3.1-Abliterated-Slerp-TIES-GGUF
|
MaziyarPanahi
| 2024-11-01T21:05:37Z | 37 | 0 | null |
[
"gguf",
"mistral",
"quantized",
"2-bit",
"3-bit",
"4-bit",
"5-bit",
"6-bit",
"8-bit",
"GGUF",
"text-generation",
"base_model:netcat420/MFANN-Llama3.1-Abliterated-Slerp-TIES",
"base_model:quantized:netcat420/MFANN-Llama3.1-Abliterated-Slerp-TIES",
"region:us",
"conversational"
] |
text-generation
| 2024-11-01T20:42:27Z |
---
tags:
- quantized
- 2-bit
- 3-bit
- 4-bit
- 5-bit
- 6-bit
- 8-bit
- GGUF
- text-generation
- text-generation
model_name: MFANN-Llama3.1-Abliterated-Slerp-TIES-GGUF
base_model: netcat420/MFANN-Llama3.1-Abliterated-Slerp-TIES
inference: false
model_creator: netcat420
pipeline_tag: text-generation
quantized_by: MaziyarPanahi
---
# [MaziyarPanahi/MFANN-Llama3.1-Abliterated-Slerp-TIES-GGUF](https://huggingface.co/MaziyarPanahi/MFANN-Llama3.1-Abliterated-Slerp-TIES-GGUF)
- Model creator: [netcat420](https://huggingface.co/netcat420)
- Original model: [netcat420/MFANN-Llama3.1-Abliterated-Slerp-TIES](https://huggingface.co/netcat420/MFANN-Llama3.1-Abliterated-Slerp-TIES)
## Description
[MaziyarPanahi/MFANN-Llama3.1-Abliterated-Slerp-TIES-GGUF](https://huggingface.co/MaziyarPanahi/MFANN-Llama3.1-Abliterated-Slerp-TIES-GGUF) contains GGUF format model files for [netcat420/MFANN-Llama3.1-Abliterated-Slerp-TIES](https://huggingface.co/netcat420/MFANN-Llama3.1-Abliterated-Slerp-TIES).
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.
## Special thanks
π Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the whole team working on [llama.cpp](https://github.com/ggerganov/llama.cpp/) for making all of this possible.
|
pszemraj/tFINE-850m-24x24-instruct-L2
|
pszemraj
| 2024-11-01T21:02:09Z | 16 | 0 |
transformers
|
[
"transformers",
"safetensors",
"t5",
"text2text-generation",
"instruct",
"en",
"dataset:pszemraj/infinity-instruct-7m-T2T_en",
"base_model:pszemraj/tFINE-850m-24x24-v0.5-instruct-L1",
"base_model:finetune:pszemraj/tFINE-850m-24x24-v0.5-instruct-L1",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-10-31T21:56:56Z |
---
library_name: transformers
language:
- en
license: apache-2.0
base_model: pszemraj/tFINE-850m-24x24-v0.5-instruct-L1
tags:
- instruct
datasets:
- pszemraj/infinity-instruct-7m-T2T_en
pipeline_tag: text2text-generation
---
# tFINE-850m-24x24-instruct-L2
This model is a fine-tuned version of [pszemraj/tFINE-850m-24x24-v0.5-instruct-L1](https://huggingface.co/pszemraj/tFINE-850m-24x24-v0.5-instruct-L1) on the pszemraj/infinity-instruct-7m-T2T_en dataset (config `deduped-L2`).
It achieves the following results on the evaluation set:
- Loss: 1.2542
- Num Input Tokens Seen: 750938410
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3.5e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 17868
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: PAGED_ADEMAMIX (no additional optimizer arguments)
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 1.0
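As a sanity check on the hyperparameters above: the effective batch size is the per-device batch size times the gradient accumulation steps (32 × 4 = 128), and the warmup + cosine schedule can be sketched in plain Python. The helper below is illustrative, not the Trainer's exact implementation:

```python
import math

def cosine_lr_with_warmup(step: int, total_steps: int,
                          base_lr: float = 3.5e-05,
                          warmup_ratio: float = 0.03) -> float:
    """Linear warmup for the first warmup_ratio of training, then cosine decay to 0."""
    warmup_steps = int(total_steps * warmup_ratio)
    if step < warmup_steps:
        return base_lr * step / max(1, warmup_steps)
    progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    return 0.5 * base_lr * (1.0 + math.cos(math.pi * progress))

# Effective batch size: train_batch_size * gradient_accumulation_steps
assert 32 * 4 == 128
```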
|
Jeasun/detr-resnet-50_finetuned_cppe5
|
Jeasun
| 2024-11-01T21:01:08Z | 219 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"detr",
"object-detection",
"generated_from_trainer",
"base_model:facebook/detr-resnet-50",
"base_model:finetune:facebook/detr-resnet-50",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
object-detection
| 2024-11-01T20:48:50Z |
---
library_name: transformers
license: apache-2.0
base_model: facebook/detr-resnet-50
tags:
- generated_from_trainer
model-index:
- name: detr-resnet-50_finetuned_cppe5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# detr-resnet-50_finetuned_cppe5
This model is a fine-tuned version of [facebook/detr-resnet-50](https://huggingface.co/facebook/detr-resnet-50) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.44.2
- Pytorch 2.5.0+cu121
- Datasets 3.1.0
- Tokenizers 0.19.1
|
Kraken/Koda-LoRuffington
|
Kraken
| 2024-11-01T20:49:29Z | 12 | 1 |
diffusers
|
[
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:unknown",
"region:us"
] |
text-to-image
| 2024-11-01T20:48:31Z |
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: '-'
output:
url: images/ffa67262-772e-4096-a483-718a6191c7fa.png
- text: '-'
output:
url: images/d533b6f7-f67b-4a36-89db-7797d8411680.png
- text: '-'
output:
url: images/895720ef-4861-45a0-a2a8-b38d0fdd6b8d.png
- text: '-'
output:
url: images/9a2e4723-7763-4a43-97ed-f912e1657abd.png
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: Koda
license: unknown
---
# Koda-LoRuffington
<Gallery />
## Model description
A meme generator for the Koda Fluffington Rune
## Trigger words
You should use `Koda` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/Kraken/Koda-LoRuffington/tree/main) them in the Files & versions tab.
|
harsha19/paola
|
harsha19
| 2024-11-01T20:49:02Z | 97 | 0 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2024-10-06T22:34:36Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: rups
---
# Rupss
<!-- <Gallery /> -->
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `rups` to trigger the image generation.
## Use it with the [𧨠diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('harshasai-dev/rupss', weight_name='lora.safetensors')
image = pipeline('your prompt').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
roottt/bert-finetuned-ner
|
roottt
| 2024-11-01T20:47:23Z | 115 | 1 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"bert",
"token-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-cased",
"base_model:finetune:google-bert/bert-base-cased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2024-10-31T21:23:37Z |
---
library_name: transformers
license: apache-2.0
base_model: bert-base-cased
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0607
- Precision: 0.9347
- Recall: 0.9514
- F1: 0.9430
- Accuracy: 0.9867
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0745 | 1.0 | 1756 | 0.0692 | 0.8991 | 0.9297 | 0.9141 | 0.9813 |
| 0.0339 | 2.0 | 3512 | 0.0674 | 0.9357 | 0.9472 | 0.9414 | 0.9857 |
| 0.0223 | 3.0 | 5268 | 0.0607 | 0.9347 | 0.9514 | 0.9430 | 0.9867 |
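As a quick sanity check, the reported F1 in the final row follows from its precision and recall via the harmonic mean, F1 = 2PR / (P + R):

```python
# Precision and recall from the epoch-3 row of the training-results table.
precision, recall = 0.9347, 0.9514

# F1 is the harmonic mean of precision and recall.
f1 = 2 * precision * recall / (precision + recall)

assert round(f1, 4) == 0.9430
```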
### Framework versions
- Transformers 4.44.2
- Pytorch 2.5.0+cu121
- Tokenizers 0.19.1
|
glif-loradex-trainer/maxxd4240_PleinAir
|
glif-loradex-trainer
| 2024-11-01T20:32:59Z | 46 | 2 |
diffusers
|
[
"diffusers",
"text-to-image",
"template:sd-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:finetune:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us",
"flux",
"lora",
"base_model:adapter:black-forest-labs/FLUX.1-dev"
] |
text-to-image
| 2024-11-01T20:32:17Z |
---
tags:
- diffusers
- text-to-image
- template:sd-lora
- base_model:black-forest-labs/FLUX.1-dev
- base_model:finetune:black-forest-labs/FLUX.1-dev
- license:other
- region:us
- flux
- lora
widget:
- output:
url: samples/1730493000723__000003000_0.jpg
text: italian cafe P1e!n
- output:
url: samples/1730493024360__000003000_1.jpg
text: london streets P1e!n
- output:
url: samples/1730493047956__000003000_2.jpg
text: statue of liberty P1e!n
- output:
url: samples/1730493071568__000003000_3.jpg
text: rice fields P1e!n
- output:
url: samples/1730493095161__000003000_4.jpg
text: hindu temple P1e!n
- output:
url: samples/1730493118971__000003000_5.jpg
    text: buddha statue in temple P1e!n
base_model: black-forest-labs/FLUX.1-dev
trigger: P1e!n
instance_prompt: P1e!n
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
---
# PleinAir
Model trained with [AI Toolkit by Ostris](https://github.com/ostris/ai-toolkit) under the [Glif Loradex program](https://huggingface.co/glif-loradex-trainer) by [Glif](https://glif.app) user `maxxd4240`.
<Gallery />
## Trigger words
You should use `P1e!n` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/glif-loradex-trainer/maxxd4240_PleinAir/tree/main) them in the Files & versions tab.
## License
This model is licensed under the [flux-1-dev-non-commercial-license](https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md).
|
MaziyarPanahi/L3.1-Promissum_Mane-8B-Della-1.5-calc-GGUF
|
MaziyarPanahi
| 2024-11-01T20:25:29Z | 108 | 1 | null |
[
"gguf",
"mistral",
"quantized",
"2-bit",
"3-bit",
"4-bit",
"5-bit",
"6-bit",
"8-bit",
"GGUF",
"text-generation",
"base_model:djuna/L3.1-Promissum_Mane-8B-Della-1.5-calc",
"base_model:quantized:djuna/L3.1-Promissum_Mane-8B-Della-1.5-calc",
"region:us",
"conversational"
] |
text-generation
| 2024-11-01T20:01:20Z |
---
tags:
- quantized
- 2-bit
- 3-bit
- 4-bit
- 5-bit
- 6-bit
- 8-bit
- GGUF
- text-generation
- text-generation
model_name: L3.1-Promissum_Mane-8B-Della-1.5-calc-GGUF
base_model: djuna/L3.1-Promissum_Mane-8B-Della-1.5-calc
inference: false
model_creator: djuna
pipeline_tag: text-generation
quantized_by: MaziyarPanahi
---
# [MaziyarPanahi/L3.1-Promissum_Mane-8B-Della-1.5-calc-GGUF](https://huggingface.co/MaziyarPanahi/L3.1-Promissum_Mane-8B-Della-1.5-calc-GGUF)
- Model creator: [djuna](https://huggingface.co/djuna)
- Original model: [djuna/L3.1-Promissum_Mane-8B-Della-1.5-calc](https://huggingface.co/djuna/L3.1-Promissum_Mane-8B-Della-1.5-calc)
## Description
[MaziyarPanahi/L3.1-Promissum_Mane-8B-Della-1.5-calc-GGUF](https://huggingface.co/MaziyarPanahi/L3.1-Promissum_Mane-8B-Della-1.5-calc-GGUF) contains GGUF format model files for [djuna/L3.1-Promissum_Mane-8B-Della-1.5-calc](https://huggingface.co/djuna/L3.1-Promissum_Mane-8B-Della-1.5-calc).
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.
## Special thanks
π Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the whole team working on [llama.cpp](https://github.com/ggerganov/llama.cpp/) for making all of this possible.
|
henilp105/InjecAgent-Llama-3.1-8B-Instruct-optim-10
|
henilp105
| 2024-11-01T20:22:55Z | 5 | 0 |
peft
|
[
"peft",
"safetensors",
"llama",
"arxiv:1910.09700",
"base_model:meta-llama/Llama-3.1-8B-Instruct",
"base_model:adapter:meta-llama/Llama-3.1-8B-Instruct",
"region:us"
] | null | 2024-11-01T13:09:54Z |
---
base_model: meta-llama/Llama-3.1-8B-Instruct
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.13.2
|
Xu-Ouyang/pythia-1.4b-deduped-int3-step32-GPTQ-wikitext2
|
Xu-Ouyang
| 2024-11-01T20:12:22Z | 75 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"3-bit",
"gptq",
"region:us"
] |
text-generation
| 2024-11-01T20:12:00Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
lucataco/SD3.5-Large-yarn-2
|
lucataco
| 2024-11-01T19:44:00Z | 6 | 2 |
diffusers
|
[
"diffusers",
"text-to-image",
"diffusers-training",
"lora",
"replicate",
"template:sd-lora",
"sd3.5-large",
"sd3.5",
"sd3.5-diffusers",
"base_model:stabilityai/stable-diffusion-3.5-large",
"base_model:adapter:stabilityai/stable-diffusion-3.5-large",
"license:other",
"region:us"
] |
text-to-image
| 2024-11-01T19:21:43Z |
---
license: other
library_name: diffusers
tags:
- text-to-image
- diffusers-training
- diffusers
- lora
- replicate
- template:sd-lora
- sd3.5-large
- sd3.5
- sd3.5-diffusers
base_model: stabilityai/stable-diffusion-3.5-large
instance_prompt: Frog, yarn art style
widget:
- text: >-
Frog, yarn art style
output:
url: https://replicate.delivery/yhqm/WKTZ1ZnQRYZ4H9nYCfNwL34ZfGeTLkg7iBemxmIAeoXDhwldC/out-0.webp
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# SD3.5-Large DreamBooth LoRA - lucataco/SD3.5-Large-yarn-2
<Gallery />
## Model description
These are lucataco/SD3.5-Large-yarn-2 DreamBooth LoRA weights for stable-diffusion-3.5-large.
The weights were trained using [DreamBooth](https://dreambooth.github.io/) with the [SD3 diffusers trainer](https://github.com/huggingface/diffusers/blob/main/examples/dreambooth/README_sd3.md).
LoRA for the text encoder was enabled.
## Trigger words
You should use `Frog, yarn art style` to trigger the image generation.
## Download model
[Download the *.safetensors LoRA](lucataco/SD3.5-Large-yarn-2/tree/main) in the Files & versions tab.
## Use it with the [𧨠diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('stabilityai/stable-diffusion-3.5-large', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('lucataco/SD3.5-Large-yarn-2', weight_name='pytorch_lora_weights.safetensors')
image = pipeline('Frog, yarn art style').images[0]
```
### Use it with UIs such as AUTOMATIC1111, Comfy UI, SD.Next, Invoke
- **LoRA**: download **[`diffusers_lora_weights.safetensors` here πΎ](/lucataco/SD3.5-Large-yarn-2/blob/main/diffusers_lora_weights.safetensors)**.
- Rename it and place it on your `models/Lora` folder.
- On AUTOMATIC1111, load the LoRA by adding `<lora:your_new_name:1>` to your prompt. On ComfyUI just [load it as a regular LoRA](https://comfyanonymous.github.io/ComfyUI_examples/lora/).
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters).
## License
Please adhere to the licensing terms as described [here](https://huggingface.co/stabilityai/stable-diffusion-3.5-large/blob/main/LICENSE.md).
## Training details
Trained on Replicate using: [lucataco/stable-diffusion-3.5-large-lora-trainer](https://replicate.com/lucataco/stable-diffusion-3.5-large-lora-trainer)
## Notes
This is an attempt at the diffusers example of training the [text_encoder](https://github.com/huggingface/diffusers/blob/main/examples/dreambooth/README_sd3.md#text-encoder-training).
|
JasonBounre/with_cnn_summary_train_1
|
JasonBounre
| 2024-11-01T19:26:05Z | 11 | 0 |
peft
|
[
"peft",
"safetensors",
"generated_from_trainer",
"base_model:meta-llama/Meta-Llama-3-8B",
"base_model:adapter:meta-llama/Meta-Llama-3-8B",
"license:llama3",
"region:us"
] | null | 2024-10-30T11:59:36Z |
---
base_model: meta-llama/Meta-Llama-3-8B
library_name: peft
license: llama3
tags:
- generated_from_trainer
model-index:
- name: with_cnn_summary_train_1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# with_cnn_summary_train_1
This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4987
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 20
- mixed_precision_training: Native AMP
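
The total train batch size above follows from the per-device batch size and gradient accumulation. A quick sanity check (values taken from the hyperparameters listed above):

```python
# Effective batch size = per-device batch size * gradient accumulation steps
train_batch_size = 4
gradient_accumulation_steps = 4
total_train_batch_size = train_batch_size * gradient_accumulation_steps
print(total_train_batch_size)  # 16, matching total_train_batch_size above
```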
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-------:|:-----:|:---------------:|
| 1.3255 | 0.9991 | 579 | 1.2416 |
| 1.2371 | 2.0 | 1159 | 1.2072 |
| 1.1535 | 2.9991 | 1738 | 1.2081 |
| 1.0356 | 4.0 | 2318 | 1.2215 |
| 0.9171 | 4.9991 | 2897 | 1.2679 |
| 0.7819 | 6.0 | 3477 | 1.3105 |
| 0.6835 | 6.9991 | 4056 | 1.3958 |
| 0.6058 | 8.0 | 4636 | 1.4633 |
| 0.4779 | 8.9991 | 5215 | 1.5415 |
| 0.3952 | 10.0 | 5795 | 1.6919 |
| 0.3454 | 10.9991 | 6374 | 1.8169 |
| 0.2743 | 12.0 | 6954 | 1.9270 |
| 0.2336 | 12.9991 | 7533 | 1.9968 |
| 0.2069 | 14.0 | 8113 | 2.1474 |
| 0.165 | 14.9991 | 8692 | 2.2376 |
| 0.1495 | 16.0 | 9272 | 2.3137 |
| 0.132 | 16.9991 | 9851 | 2.3877 |
| 0.125 | 18.0 | 10431 | 2.4865 |
| 0.1132 | 18.9991 | 11010 | 2.5050 |
| 0.114 | 19.9827 | 11580 | 2.4987 |
### Framework versions
- PEFT 0.13.1
- Transformers 4.43.3
- Pytorch 2.4.0+cu121
- Datasets 3.0.1
- Tokenizers 0.19.1
|
graphitesin/aiml-test-model
|
graphitesin
| 2024-11-01T19:22:59Z | 103 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"bart",
"text2text-generation",
"generated_from_trainer",
"dataset:samsum",
"base_model:facebook/bart-large-cnn",
"base_model:finetune:facebook/bart-large-cnn",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-11-01T19:19:00Z |
---
library_name: transformers
license: mit
base_model: facebook/bart-large-cnn
tags:
- generated_from_trainer
datasets:
- samsum
model-index:
- name: aiml-test-model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# aiml-test-model
This model is a fine-tuned version of [facebook/bart-large-cnn](https://huggingface.co/facebook/bart-large-cnn) on the samsum dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3219
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.8478 | 1.0 | 19 | 0.3871 |
| 0.2655 | 2.0 | 38 | 0.3219 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.5.0+cu121
- Datasets 3.1.0
- Tokenizers 0.19.1
|
joaomiranda27/Tinyllama-pt-text-sql2
|
joaomiranda27
| 2024-11-01T19:21:51Z | 78 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"8-bit",
"bitsandbytes",
"region:us"
] |
text-generation
| 2024-11-01T19:20:57Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
sap-ai-research/BERT-base-uncased-SCD-ACL2022
|
sap-ai-research
| 2024-11-01T19:19:31Z | 121 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"bert",
"feature-extraction",
"license:apache-2.0",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2022-03-15T23:30:55Z |
---
license: apache-2.0
---
|
Michaelj1/finetune-smolLM2-360M
|
Michaelj1
| 2024-11-01T19:18:41Z | 202 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"base_model:HuggingFaceTB/SmolLM2-360M",
"base_model:finetune:HuggingFaceTB/SmolLM2-360M",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-11-01T19:18:03Z |
---
library_name: transformers
license: apache-2.0
base_model: HuggingFaceTB/SmolLM2-360M
tags:
- generated_from_trainer
model-index:
- name: finetune-smolLM2-360M
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetune-smolLM2-360M
This model is a fine-tuned version of [HuggingFaceTB/SmolLM2-360M](https://huggingface.co/HuggingFaceTB/SmolLM2-360M) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.7585
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 2.7017 | 0.0871 | 50 | 2.8763 |
| 2.6474 | 0.1743 | 100 | 2.8244 |
| 2.5094 | 0.2614 | 150 | 2.8044 |
| 2.6284 | 0.3486 | 200 | 2.7901 |
| 2.7183 | 0.4357 | 250 | 2.7798 |
| 2.6457 | 0.5229 | 300 | 2.7732 |
| 2.7641 | 0.6100 | 350 | 2.7691 |
| 2.6276 | 0.6972 | 400 | 2.7661 |
| 2.7211 | 0.7843 | 450 | 2.7639 |
| 2.6556 | 0.8715 | 500 | 2.7603 |
| 2.7031 | 0.9586 | 550 | 2.7587 |
### Framework versions
- Transformers 4.45.1
- Pytorch 2.4.0
- Datasets 3.0.1
- Tokenizers 0.20.0
|
mradermacher/Kunpeng-4x7B-mistral-i1-GGUF
|
mradermacher
| 2024-11-01T19:16:08Z | 36 | 0 |
transformers
|
[
"transformers",
"gguf",
"en",
"base_model:mzbac/Kunpeng-4x7B-mistral",
"base_model:quantized:mzbac/Kunpeng-4x7B-mistral",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2024-11-01T15:17:58Z |
---
base_model: mzbac/Kunpeng-4x7B-mistral
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/mzbac/Kunpeng-4x7B-mistral
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Kunpeng-4x7B-mistral-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
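
For quants split into numbered parts, the pieces are plain byte splits and can be joined with `cat` before loading. The sketch below simulates that with two dummy parts; the filenames are illustrative, not files in this repo:

```shell
# Illustrative sketch: join byte-split GGUF parts into one file with cat.
# Simulate two downloaded parts (real parts would come from the repo):
printf 'part-one-' > model.i1-Q6_K.gguf.part1of2
printf 'part-two' > model.i1-Q6_K.gguf.part2of2
# Concatenate in order, then load the joined file in llama.cpp as usual:
cat model.i1-Q6_K.gguf.part1of2 model.i1-Q6_K.gguf.part2of2 > model.i1-Q6_K.gguf
cat model.i1-Q6_K.gguf
```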
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Kunpeng-4x7B-mistral-i1-GGUF/resolve/main/Kunpeng-4x7B-mistral.i1-IQ1_S.gguf) | i1-IQ1_S | 5.1 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Kunpeng-4x7B-mistral-i1-GGUF/resolve/main/Kunpeng-4x7B-mistral.i1-IQ1_M.gguf) | i1-IQ1_M | 5.6 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Kunpeng-4x7B-mistral-i1-GGUF/resolve/main/Kunpeng-4x7B-mistral.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 6.5 | |
| [GGUF](https://huggingface.co/mradermacher/Kunpeng-4x7B-mistral-i1-GGUF/resolve/main/Kunpeng-4x7B-mistral.i1-IQ2_XS.gguf) | i1-IQ2_XS | 7.2 | |
| [GGUF](https://huggingface.co/mradermacher/Kunpeng-4x7B-mistral-i1-GGUF/resolve/main/Kunpeng-4x7B-mistral.i1-IQ2_S.gguf) | i1-IQ2_S | 7.4 | |
| [GGUF](https://huggingface.co/mradermacher/Kunpeng-4x7B-mistral-i1-GGUF/resolve/main/Kunpeng-4x7B-mistral.i1-IQ2_M.gguf) | i1-IQ2_M | 8.1 | |
| [GGUF](https://huggingface.co/mradermacher/Kunpeng-4x7B-mistral-i1-GGUF/resolve/main/Kunpeng-4x7B-mistral.i1-Q2_K.gguf) | i1-Q2_K | 8.9 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Kunpeng-4x7B-mistral-i1-GGUF/resolve/main/Kunpeng-4x7B-mistral.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 9.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Kunpeng-4x7B-mistral-i1-GGUF/resolve/main/Kunpeng-4x7B-mistral.i1-IQ3_XS.gguf) | i1-IQ3_XS | 10.0 | |
| [GGUF](https://huggingface.co/mradermacher/Kunpeng-4x7B-mistral-i1-GGUF/resolve/main/Kunpeng-4x7B-mistral.i1-Q3_K_S.gguf) | i1-Q3_K_S | 10.5 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Kunpeng-4x7B-mistral-i1-GGUF/resolve/main/Kunpeng-4x7B-mistral.i1-IQ3_S.gguf) | i1-IQ3_S | 10.6 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Kunpeng-4x7B-mistral-i1-GGUF/resolve/main/Kunpeng-4x7B-mistral.i1-IQ3_M.gguf) | i1-IQ3_M | 10.7 | |
| [GGUF](https://huggingface.co/mradermacher/Kunpeng-4x7B-mistral-i1-GGUF/resolve/main/Kunpeng-4x7B-mistral.i1-Q3_K_M.gguf) | i1-Q3_K_M | 11.7 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Kunpeng-4x7B-mistral-i1-GGUF/resolve/main/Kunpeng-4x7B-mistral.i1-Q3_K_L.gguf) | i1-Q3_K_L | 12.6 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Kunpeng-4x7B-mistral-i1-GGUF/resolve/main/Kunpeng-4x7B-mistral.i1-IQ4_XS.gguf) | i1-IQ4_XS | 13.0 | |
| [GGUF](https://huggingface.co/mradermacher/Kunpeng-4x7B-mistral-i1-GGUF/resolve/main/Kunpeng-4x7B-mistral.i1-Q4_0.gguf) | i1-Q4_0 | 13.8 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Kunpeng-4x7B-mistral-i1-GGUF/resolve/main/Kunpeng-4x7B-mistral.i1-Q4_K_S.gguf) | i1-Q4_K_S | 13.8 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Kunpeng-4x7B-mistral-i1-GGUF/resolve/main/Kunpeng-4x7B-mistral.i1-Q4_K_M.gguf) | i1-Q4_K_M | 14.7 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Kunpeng-4x7B-mistral-i1-GGUF/resolve/main/Kunpeng-4x7B-mistral.i1-Q5_K_S.gguf) | i1-Q5_K_S | 16.7 | |
| [GGUF](https://huggingface.co/mradermacher/Kunpeng-4x7B-mistral-i1-GGUF/resolve/main/Kunpeng-4x7B-mistral.i1-Q5_K_M.gguf) | i1-Q5_K_M | 17.2 | |
| [GGUF](https://huggingface.co/mradermacher/Kunpeng-4x7B-mistral-i1-GGUF/resolve/main/Kunpeng-4x7B-mistral.i1-Q6_K.gguf) | i1-Q6_K | 19.9 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
graphitesin/gita-text-generation-gpt2
|
graphitesin
| 2024-11-01T19:10:38Z | 146 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt2",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-11-01T19:10:18Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Xu-Ouyang/pythia-12b-deduped-int4-step64-GPTQ-wikitext2
|
Xu-Ouyang
| 2024-11-01T19:08:45Z | 76 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"gptq",
"region:us"
] |
text-generation
| 2024-11-01T19:04:28Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
async0x42/CalmeRys-78B-Orpo-v0.1-exl2_3.5bpw
|
async0x42
| 2024-11-01T18:57:46Z | 6 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"orpo",
"sft",
"chatml",
"conversational",
"en",
"dataset:mlabonne/orpo-dpo-mix-40k",
"base_model:MaziyarPanahi/calme-2.4-rys-78b",
"base_model:quantized:MaziyarPanahi/calme-2.4-rys-78b",
"license:mit",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"exl2",
"region:us"
] |
text-generation
| 2024-11-01T18:42:28Z |
---
language:
- en
license: mit
library_name: transformers
tags:
- orpo
- qwen2
- sft
- chatml
base_model:
- MaziyarPanahi/calme-2.4-rys-78b
datasets:
- mlabonne/orpo-dpo-mix-40k
pipeline_tag: text-generation
inference: false
model_creator: dfurman
quantized_by: dfurman
model-index:
- name: CalmeRys-78B-Orpo-v0.1
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: IFEval (0-Shot)
type: HuggingFaceH4/ifeval
args:
num_few_shot: 0
metrics:
- type: inst_level_strict_acc and prompt_level_strict_acc
value: 81.63
name: strict accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=dfurman/CalmeRys-78B-Orpo-v0.1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: BBH (3-Shot)
type: BBH
args:
num_few_shot: 3
metrics:
- type: acc_norm
value: 61.92
name: normalized accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=dfurman/CalmeRys-78B-Orpo-v0.1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MATH Lvl 5 (4-Shot)
type: hendrycks/competition_math
args:
num_few_shot: 4
metrics:
- type: exact_match
value: 37.92
name: exact match
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=dfurman/CalmeRys-78B-Orpo-v0.1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GPQA (0-shot)
type: Idavidrein/gpqa
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 20.02
name: acc_norm
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=dfurman/CalmeRys-78B-Orpo-v0.1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MuSR (0-shot)
type: TAUR-Lab/MuSR
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 36.37
name: acc_norm
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=dfurman/CalmeRys-78B-Orpo-v0.1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU-PRO (5-shot)
type: TIGER-Lab/MMLU-Pro
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 66.8
name: accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=dfurman/CalmeRys-78B-Orpo-v0.1
name: Open LLM Leaderboard
---
# dfurman/CalmeRys-78B-Orpo-v0.1
This model is a finetune of `MaziyarPanahi/calme-2.4-rys-78b` on 1.5k rows of the `mlabonne/orpo-dpo-mix-40k` dataset. It was trained as a generalist language model for a variety of text generation use cases, including support for agentic capabilities, roleplaying, reasoning, multi-turn conversations, long context coherence, and more.
As of Oct 2024, this is the top-ranking model on the [Open LLM Leaderboard](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard).
Thanks go out to [mlabonne](https://huggingface.co/mlabonne), [MaziyarPanahi](https://huggingface.co/MaziyarPanahi), et al. for the source dataset and base model.
## 🦾 Training
You can find the experiment on W&B at this [link](https://wandb.ai/dryanfurman/huggingface/runs/1w50nu70?nw=nwuserdryanfurman). Here are a few visualizations:



## 💻 Usage
<details>
<summary>Setup</summary>
```python
!pip install -qU transformers accelerate bitsandbytes
!huggingface-cli download dfurman/CalmeRys-78B-Orpo-v0.1
```
```python
from transformers import AutoTokenizer, BitsAndBytesConfig
import transformers
import torch
if torch.cuda.get_device_capability()[0] >= 8:
!pip install -qqq flash-attn
attn_implementation = "flash_attention_2"
torch_dtype = torch.bfloat16
else:
attn_implementation = "eager"
torch_dtype = torch.float16
# # quantize if necessary
# bnb_config = BitsAndBytesConfig(
# load_in_4bit=True,
# bnb_4bit_quant_type="nf4",
# bnb_4bit_compute_dtype=torch_dtype,
# bnb_4bit_use_double_quant=True,
# )
model = "dfurman/CalmeRys-78B-Orpo-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
"text-generation",
model=model,
model_kwargs={
"torch_dtype": torch_dtype,
# "quantization_config": bnb_config,
"device_map": "auto",
"attn_implementation": attn_implementation,
}
)
```
</details>
### Example 1
```python
question = "Is the number 9.11 larger than 9.9?"
messages = [
{"role": "system", "content": "You are a helpful assistant that thinks step by step."},
{"role": "user", "content": question},
]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
# print("***Prompt:\n", prompt)
outputs = pipeline(
prompt, max_new_tokens=1000, do_sample=True, temperature=0.7, top_k=50, top_p=0.95
)
print("***Generation:")
print(outputs[0]["generated_text"][len(prompt) :])
```
```
***Generation:
To compare these two numbers, it's important to look at their decimal places after the whole number part, which is 9 in both cases. Comparing the tenths place, 9.11 has a '1' and 9.9 has a '9'. Since '9' is greater than '1', 9.9 is larger than 9.11.
```
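(As a quick sanity check, not part of the original card: the decimal comparison the model reasons through here can be verified directly in Python.)

```python
# Verify the comparison from Example 1: 9.9 vs 9.11.
# Comparing the tenths place, 9.9 has a '9' and 9.11 has a '1', so 9.9 is larger.
a, b = 9.11, 9.9
print(f"Is {a} larger than {b}? {a > b}")   # → Is 9.11 larger than 9.9? False
print(f"The larger number is {max(a, b)}")  # → The larger number is 9.9
```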
### Example 2
```python
question = """The bakers at the Beverly Hills Bakery baked 200 loaves of bread on Monday morning.
They sold 93 loaves in the morning and 39 loaves in the afternoon.
A grocery store then returned 6 unsold loaves back to the bakery.
How many loaves of bread did the bakery have left?
Respond as succinctly as possible. Format the response as a completion of this table:
|step|subquestion|procedure|result|
|:---|:----------|:--------|:-----:|"""
messages = [
{"role": "system", "content": "You are a helpful assistant."},
{"role": "user", "content": question},
]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
# print("***Prompt:\n", prompt)
outputs = pipeline(prompt, max_new_tokens=1000, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print("***Generation:")
print(outputs[0]["generated_text"][len(prompt):])
```
```
***Generation:
|1|Calculate total sold|Add morning and afternoon sales|132|
|2|Subtract sold from total|200 - 132|68|
|3|Adjust for returns|Add returned loaves to remaining|74|
```
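(A note not in the original card: the table the model produces can be checked with simple arithmetic; a minimal sketch of the same steps.)

```python
# Reproduce the bakery arithmetic from Example 2 step by step.
baked = 200
sold = 93 + 39            # morning + afternoon sales
remaining = baked - sold  # loaves left after sales
remaining += 6            # grocery store returned 6 unsold loaves
print(sold, remaining)    # → 132 74
```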
### Example 3
```python
question = "What's a good recipe for a spicy margarita?"
messages = [
{"role": "system", "content": "You are a helpful assistant."},
{"role": "user", "content": question},
]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
# print("***Prompt:\n", prompt)
outputs = pipeline(prompt, max_new_tokens=1000, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print("***Generation:")
print(outputs[0]["generated_text"][len(prompt):])
```
```
***Generation:
To make a Spicy Margarita, you'll need to incorporate a chili or pepper element into your classic margarita recipe. Here's a simple way to do it:
### Ingredients:
- 2 oz tequila (blanco or reposado)
- 1 oz fresh lime juice
- 1/2 oz triple sec (Cointreau or Grand Marnier)
- 1/2 oz agave syrup or simple syrup
- 1-2 slices of jalapeño (or more depending on how spicy you like it)
- Salt and/or chili powder for rimming the glass
- Ice
- Lime wheel for garnish
### Instructions:
1. **Muddle Jalapeño**: In a shaker, muddle the jalapeño slices slightly. This will release the oils and heat from the peppers.
2. **Add Remaining Ingredients**: Add the tequila, lime juice, triple sec, and agave syrup or simple syrup.
3. **Shake and Strain**: Fill the shaker with ice and shake vigorously until cold. Strain into a salt and/or chili powder rimmed glass filled with ice.
4. **Garnish and Serve**: Garnish with a lime wheel and enjoy.
If you prefer a smoother spiciness that doesn't overpower the drink, you could also consider making a jalapeño-infused tequila by leaving the jalapeño slices in the bottle of tequila for several hours to a couple of days, adjusting the time based on desired level of spiciness. Then use this infused tequila instead of regular tequila in the recipe above.
Another variation is to use a spicy syrup. To make this, combine equal parts water and sugar with a few sliced jalapeños in a saucepan. Bring to a boil, stirring occasionally to dissolve the sugar. Reduce heat and simmer for about 5 minutes. Let cool, strain out the jalapeños, then store in a sealed container in the refrigerator until ready to use. Use this spicy syrup instead of regular syrup in the recipe.
As always, adjust the quantity of jalapeΓ±o or the type of chili used to suit your taste. Enjoy responsibly!
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_dfurman__CalmeRys-78B-Orpo-v0.1)
| Metric |Value|
|-------------------|----:|
|Avg. |50.78|
|IFEval (0-Shot) |81.63|
|BBH (3-Shot) |61.92|
|MATH Lvl 5 (4-Shot)|37.92|
|GPQA (0-shot) |20.02|
|MuSR (0-shot) |36.37|
|MMLU-PRO (5-shot) |66.80|
|
creatorchain/Somo
|
creatorchain
| 2024-11-01T18:57:15Z | 5 | 1 |
diffusers
|
[
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:mit",
"region:us"
] |
text-to-image
| 2024-11-01T18:55:55Z |
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: '-'
output:
url: images/somo_46.png
- text: '-'
output:
url: images/somo_57.png
- text: '-'
output:
url: images/somo_70.png
- text: '-'
output:
url: images/somo_71.png
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: somo meme
license: mit
---
# Somo
<Gallery />
## Trigger words
You should use `somo meme` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/creatorchain/Somo/tree/main) them in the Files & versions tab.
|
MaziyarPanahi/SmolLM2-1.7B-GGUF
|
MaziyarPanahi
| 2024-11-01T18:54:13Z | 44 | 0 | null |
[
"gguf",
"mistral",
"quantized",
"2-bit",
"3-bit",
"4-bit",
"5-bit",
"6-bit",
"8-bit",
"GGUF",
"text-generation",
"base_model:HuggingFaceTB/SmolLM2-1.7B",
"base_model:quantized:HuggingFaceTB/SmolLM2-1.7B",
"region:us"
] |
text-generation
| 2024-11-01T18:49:13Z |
---
tags:
- quantized
- 2-bit
- 3-bit
- 4-bit
- 5-bit
- 6-bit
- 8-bit
- GGUF
- text-generation
- text-generation
model_name: SmolLM2-1.7B-GGUF
base_model: HuggingFaceTB/SmolLM2-1.7B
inference: false
model_creator: HuggingFaceTB
pipeline_tag: text-generation
quantized_by: MaziyarPanahi
---
# [MaziyarPanahi/SmolLM2-1.7B-GGUF](https://huggingface.co/MaziyarPanahi/SmolLM2-1.7B-GGUF)
- Model creator: [HuggingFaceTB](https://huggingface.co/HuggingFaceTB)
- Original model: [HuggingFaceTB/SmolLM2-1.7B](https://huggingface.co/HuggingFaceTB/SmolLM2-1.7B)
## Description
[MaziyarPanahi/SmolLM2-1.7B-GGUF](https://huggingface.co/MaziyarPanahi/SmolLM2-1.7B-GGUF) contains GGUF format model files for [HuggingFaceTB/SmolLM2-1.7B](https://huggingface.co/HuggingFaceTB/SmolLM2-1.7B).
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for storytelling.
* [GPT4All](https://gpt4all.io/index.html), a free and open-source locally running GUI, supporting Windows, Linux and macOS with full GPU accel.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.
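(As a rough illustration, not part of the original card: GGUF files start with the 4-byte magic `GGUF` followed by a version field, which is how readers such as llama.cpp identify them. A minimal sketch that writes and checks such a header; real GGUF files carry far more metadata than this.)

```python
import struct
import tempfile

# Write a minimal, fake GGUF-style header: 4-byte magic + uint32 version.
with tempfile.NamedTemporaryFile(suffix=".gguf", delete=False) as f:
    f.write(b"GGUF")               # magic bytes identifying the format
    f.write(struct.pack("<I", 3))  # little-endian version field
    path = f.name

# Read the header back and check the magic, as a GGUF loader would.
with open(path, "rb") as f:
    magic = f.read(4)
    (version,) = struct.unpack("<I", f.read(4))

print(magic, version)  # → b'GGUF' 3
```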
## Special thanks
🙏 Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the whole team working on [llama.cpp](https://github.com/ggerganov/llama.cpp/) for making all of this possible.
|
glif-loradex-trainer/x_bulbul_x_Playstation_2_Game_Covers
|
glif-loradex-trainer
| 2024-11-01T18:53:26Z | 23 | 4 |
diffusers
|
[
"diffusers",
"text-to-image",
"template:sd-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:finetune:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us",
"flux",
"lora",
"base_model:adapter:black-forest-labs/FLUX.1-dev"
] |
text-to-image
| 2024-11-01T18:52:23Z |
---
tags:
- diffusers
- text-to-image
- template:sd-lora
- base_model:black-forest-labs/FLUX.1-dev
- base_model:finetune:black-forest-labs/FLUX.1-dev
- license:other
- region:us
- flux
- lora
widget:
- output:
url: samples/1730487033278__000003000_0.jpg
text: sims 2 game, ps2 cover
- output:
url: samples/1730487057892__000003000_1.jpg
text: tanks game, ps2 cover
- output:
url: samples/1730487082507__000003000_2.jpg
text: hitman, ps2 cover
- output:
url: samples/1730487107119__000003000_3.jpg
text: cars race, ps2 cover
- output:
url: samples/1730487132642__000003000_4.jpg
text: clothing store, ps2 cover
base_model: black-forest-labs/FLUX.1-dev
trigger: ps2 cover
instance_prompt: ps2 cover
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
---
# Playstation_2_Game_Covers
Model trained with [AI Toolkit by Ostris](https://github.com/ostris/ai-toolkit) under the [Glif Loradex program](https://huggingface.co/glif-loradex-trainer) by [Glif](https://glif.app) user `x_bulbul_x`.
<Gallery />
## Trigger words
You should use `ps2 cover` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/glif-loradex-trainer/x_bulbul_x_Playstation_2_Game_Covers/tree/main) them in the Files & versions tab.
## License
This model is licensed under the [flux-1-dev-non-commercial-license](https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md).
|
MaziyarPanahi/Llama-3.2-3B-Instruct-HateXplain-GGUF
|
MaziyarPanahi
| 2024-11-01T18:43:51Z | 48 | 0 | null |
[
"gguf",
"mistral",
"quantized",
"2-bit",
"3-bit",
"4-bit",
"5-bit",
"6-bit",
"8-bit",
"GGUF",
"text-generation",
"base_model:Samsoup/Llama-3.2-3B-Instruct-HateXplain",
"base_model:quantized:Samsoup/Llama-3.2-3B-Instruct-HateXplain",
"region:us",
"conversational"
] |
text-generation
| 2024-11-01T18:33:32Z |
---
tags:
- quantized
- 2-bit
- 3-bit
- 4-bit
- 5-bit
- 6-bit
- 8-bit
- GGUF
- text-generation
- text-generation
model_name: Llama-3.2-3B-Instruct-HateXplain-GGUF
base_model: Samsoup/Llama-3.2-3B-Instruct-HateXplain
inference: false
model_creator: Samsoup
pipeline_tag: text-generation
quantized_by: MaziyarPanahi
---
# [MaziyarPanahi/Llama-3.2-3B-Instruct-HateXplain-GGUF](https://huggingface.co/MaziyarPanahi/Llama-3.2-3B-Instruct-HateXplain-GGUF)
- Model creator: [Samsoup](https://huggingface.co/Samsoup)
- Original model: [Samsoup/Llama-3.2-3B-Instruct-HateXplain](https://huggingface.co/Samsoup/Llama-3.2-3B-Instruct-HateXplain)
## Description
[MaziyarPanahi/Llama-3.2-3B-Instruct-HateXplain-GGUF](https://huggingface.co/MaziyarPanahi/Llama-3.2-3B-Instruct-HateXplain-GGUF) contains GGUF format model files for [Samsoup/Llama-3.2-3B-Instruct-HateXplain](https://huggingface.co/Samsoup/Llama-3.2-3B-Instruct-HateXplain).
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for storytelling.
* [GPT4All](https://gpt4all.io/index.html), a free and open-source locally running GUI, supporting Windows, Linux and macOS with full GPU accel.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.
## Special thanks
🙏 Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the whole team working on [llama.cpp](https://github.com/ggerganov/llama.cpp/) for making all of this possible.
|
creatorchain/Miggles
|
creatorchain
| 2024-11-01T18:42:25Z | 5 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:mit",
"region:us"
] |
text-to-image
| 2024-11-01T18:41:05Z |
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: '-'
output:
url: images/miggles_80.jpg
- text: '-'
output:
url: images/miggles_81.jpg
- text: '-'
output:
url: images/miggles_82.jpg
- text: '-'
output:
url: images/miggles_83.jpg
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: miggles meme
license: mit
---
# Miggles
<Gallery />
## Trigger words
You should use `miggles meme` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/creatorchain/Miggles/tree/main) them in the Files & versions tab.
|
uzairk7886/finetuned-crypto-model
|
uzairk7886
| 2024-11-01T18:37:57Z | 106 | 0 |
transformers
|
[
"transformers",
"safetensors",
"distilbert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-11-01T18:37:46Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
kangsive/llama3.1-8b-fashion-chat
|
kangsive
| 2024-11-01T18:31:14Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-10-29T00:11:21Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Xu-Ouyang/pythia-1.4b-deduped-int3-step2-GPTQ-wikitext2
|
Xu-Ouyang
| 2024-11-01T18:24:36Z | 74 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"3-bit",
"gptq",
"region:us"
] |
text-generation
| 2024-11-01T18:22:43Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
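Although this card is unfilled, the repository name indicates a 3-bit GPTQ quantization of `pythia-1.4b-deduped` calibrated on WikiText-2. GPTQ's error-compensating layer-wise solver is beyond the scope of a card stub, but the basic idea of k-bit uniform weight quantization can be sketched as follows (illustrative only; the values and function names here are not from this repository):

```python
import numpy as np

def quantize_kbit(w: np.ndarray, bits: int = 3):
    """Symmetric uniform k-bit quantization of a weight vector.

    Illustrative sketch only: GPTQ additionally compensates the
    rounding error of each weight using second-order calibration
    statistics, which is not shown here.
    """
    qmax = 2 ** (bits - 1) - 1  # e.g. 3 for signed 3-bit (range [-4, 3])
    max_abs = np.abs(w).max()
    scale = max_abs / qmax if max_abs > 0 else 1.0
    q = np.clip(np.round(w / scale), -qmax - 1, qmax).astype(np.int8)
    return q, scale

w = np.array([0.9, -0.31, 0.02, -1.2])
q, s = quantize_kbit(w, bits=3)
w_hat = q * s  # dequantized weights; error is bounded by scale / 2
```

At 3 bits each weight is stored as one of only eight levels, which is why low-bit quants trade accuracy for the large memory savings reflected in this checkpoint.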
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
easwar03/t5-small-legal-summarizer
|
easwar03
| 2024-11-01T18:10:59Z | 132 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google-t5/t5-small",
"base_model:finetune:google-t5/t5-small",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-11-01T18:02:44Z |
---
library_name: transformers
license: apache-2.0
base_model: t5-small
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: t5-small-legal-summarizer
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-legal-summarizer
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.9930
- Rouge1: 22.9243
- Rouge2: 7.1417
- Rougel: 18.8502
- Rougelsum: 19.6924
- Gen Len: 17.5222
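For readers unfamiliar with the metrics above: ROUGE-1 is the unigram-overlap F1 between a generated summary and its reference. A minimal illustrative computation is shown below (this is not the `rouge_score` implementation that produced the numbers above — tokenization, stemming, and the ROUGE-2/L variants differ):

```python
from collections import Counter

def rouge1_f1(candidate: str, reference: str) -> float:
    """Toy whitespace-tokenized ROUGE-1 F1 (unigram overlap), for intuition only."""
    cand, ref = candidate.split(), reference.split()
    overlap = sum((Counter(cand) & Counter(ref)).values())
    if not cand or not ref or not overlap:
        return 0.0
    precision = overlap / len(cand)
    recall = overlap / len(ref)
    return 2 * precision * recall / (precision + recall)

score = rouge1_f1("the court ruled in favor", "the court ruled against")
```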
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:|
| No log | 1.0 | 89 | 3.0995 | 23.1688 | 7.6038 | 19.0864 | 20.241 | 18.1778 |
| No log | 2.0 | 178 | 3.0162 | 23.35 | 7.1787 | 19.2791 | 20.0032 | 17.6222 |
| No log | 3.0 | 267 | 2.9930 | 22.9243 | 7.1417 | 18.8502 | 19.6924 | 17.5222 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.5.0+cu121
- Datasets 3.1.0
- Tokenizers 0.19.1
|
Xu-Ouyang/pythia-1.4b-deduped-int4-step1-GPTQ-wikitext2
|
Xu-Ouyang
| 2024-11-01T18:09:59Z | 75 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"gptq",
"region:us"
] |
text-generation
| 2024-11-01T18:09:36Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
mradermacher/WestLakeX-7B-EvoMerge-i1-GGUF
|
mradermacher
| 2024-11-01T18:09:09Z | 77 | 0 |
transformers
|
[
"transformers",
"gguf",
"en",
"dataset:argilla/distilabel-intel-orca-dpo-pairs",
"base_model:BarryFutureman/WestLakeX-7B-EvoMerge",
"base_model:quantized:BarryFutureman/WestLakeX-7B-EvoMerge",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2024-11-01T14:49:11Z |
---
base_model: BarryFutureman/WestLakeX-7B-EvoMerge
datasets:
- argilla/distilabel-intel-orca-dpo-pairs
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/BarryFutureman/WestLakeX-7B-EvoMerge
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/WestLakeX-7B-EvoMerge-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
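Multi-part files only arise for very large quants (none of this 7B repo's files are split); the parts simply need to be concatenated in order before loading. A self-contained sketch, using stand-in part files with placeholder names rather than a real download:

```python
# Sketch: split GGUF quants ship as .partXofY files; rejoin them in order.
# The filenames and contents below are stand-ins so the snippet runs as-is.
from pathlib import Path

Path("model.gguf.part1of2").write_bytes(b"GGUF-part-1")  # placeholder part
Path("model.gguf.part2of2").write_bytes(b"GGUF-part-2")  # placeholder part

with open("model.gguf", "wb") as out:
    for part in sorted(Path(".").glob("model.gguf.part*")):
        out.write(part.read_bytes())
```

The same effect is achieved on the command line with `cat` in part order, as described in the linked READMEs.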
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/WestLakeX-7B-EvoMerge-i1-GGUF/resolve/main/WestLakeX-7B-EvoMerge.i1-IQ1_S.gguf) | i1-IQ1_S | 1.7 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/WestLakeX-7B-EvoMerge-i1-GGUF/resolve/main/WestLakeX-7B-EvoMerge.i1-IQ1_M.gguf) | i1-IQ1_M | 1.9 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/WestLakeX-7B-EvoMerge-i1-GGUF/resolve/main/WestLakeX-7B-EvoMerge.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.1 | |
| [GGUF](https://huggingface.co/mradermacher/WestLakeX-7B-EvoMerge-i1-GGUF/resolve/main/WestLakeX-7B-EvoMerge.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.3 | |
| [GGUF](https://huggingface.co/mradermacher/WestLakeX-7B-EvoMerge-i1-GGUF/resolve/main/WestLakeX-7B-EvoMerge.i1-IQ2_S.gguf) | i1-IQ2_S | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/WestLakeX-7B-EvoMerge-i1-GGUF/resolve/main/WestLakeX-7B-EvoMerge.i1-IQ2_M.gguf) | i1-IQ2_M | 2.6 | |
| [GGUF](https://huggingface.co/mradermacher/WestLakeX-7B-EvoMerge-i1-GGUF/resolve/main/WestLakeX-7B-EvoMerge.i1-Q2_K.gguf) | i1-Q2_K | 2.8 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/WestLakeX-7B-EvoMerge-i1-GGUF/resolve/main/WestLakeX-7B-EvoMerge.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 2.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/WestLakeX-7B-EvoMerge-i1-GGUF/resolve/main/WestLakeX-7B-EvoMerge.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/WestLakeX-7B-EvoMerge-i1-GGUF/resolve/main/WestLakeX-7B-EvoMerge.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.3 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/WestLakeX-7B-EvoMerge-i1-GGUF/resolve/main/WestLakeX-7B-EvoMerge.i1-IQ3_S.gguf) | i1-IQ3_S | 3.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/WestLakeX-7B-EvoMerge-i1-GGUF/resolve/main/WestLakeX-7B-EvoMerge.i1-IQ3_M.gguf) | i1-IQ3_M | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/WestLakeX-7B-EvoMerge-i1-GGUF/resolve/main/WestLakeX-7B-EvoMerge.i1-Q3_K_M.gguf) | i1-Q3_K_M | 3.6 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/WestLakeX-7B-EvoMerge-i1-GGUF/resolve/main/WestLakeX-7B-EvoMerge.i1-Q3_K_L.gguf) | i1-Q3_K_L | 3.9 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/WestLakeX-7B-EvoMerge-i1-GGUF/resolve/main/WestLakeX-7B-EvoMerge.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/WestLakeX-7B-EvoMerge-i1-GGUF/resolve/main/WestLakeX-7B-EvoMerge.i1-Q4_0_4_4.gguf) | i1-Q4_0_4_4 | 4.2 | fast on arm, low quality |
| [GGUF](https://huggingface.co/mradermacher/WestLakeX-7B-EvoMerge-i1-GGUF/resolve/main/WestLakeX-7B-EvoMerge.i1-Q4_0_4_8.gguf) | i1-Q4_0_4_8 | 4.2 | fast on arm+i8mm, low quality |
| [GGUF](https://huggingface.co/mradermacher/WestLakeX-7B-EvoMerge-i1-GGUF/resolve/main/WestLakeX-7B-EvoMerge.i1-Q4_0_8_8.gguf) | i1-Q4_0_8_8 | 4.2 | fast on arm+sve, low quality |
| [GGUF](https://huggingface.co/mradermacher/WestLakeX-7B-EvoMerge-i1-GGUF/resolve/main/WestLakeX-7B-EvoMerge.i1-Q4_0.gguf) | i1-Q4_0 | 4.2 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/WestLakeX-7B-EvoMerge-i1-GGUF/resolve/main/WestLakeX-7B-EvoMerge.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.2 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/WestLakeX-7B-EvoMerge-i1-GGUF/resolve/main/WestLakeX-7B-EvoMerge.i1-Q4_K_M.gguf) | i1-Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/WestLakeX-7B-EvoMerge-i1-GGUF/resolve/main/WestLakeX-7B-EvoMerge.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/WestLakeX-7B-EvoMerge-i1-GGUF/resolve/main/WestLakeX-7B-EvoMerge.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/WestLakeX-7B-EvoMerge-i1-GGUF/resolve/main/WestLakeX-7B-EvoMerge.i1-Q6_K.gguf) | i1-Q6_K | 6.0 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
MrGohlke/ID_CTI_Llama8b_v3.1
|
MrGohlke
| 2024-11-01T18:09:00Z | 7 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-11-01T18:00:32Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Xu-Ouyang/pythia-12b-deduped-int3-step64-GPTQ-wikitext2
|
Xu-Ouyang
| 2024-11-01T18:08:51Z | 75 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"3-bit",
"gptq",
"region:us"
] |
text-generation
| 2024-11-01T18:06:49Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| Tejasw1/bge-base-case-law-v1 | Tejasw1 | 2024-11-01T18:01:51Z | 5 | 0 | sentence-transformers | ["sentence-transformers", "safetensors", "bert", "sentence-similarity", "feature-extraction", "generated_from_trainer", "dataset_size:16465", "loss:MatryoshkaLoss", "loss:MultipleNegativesRankingLoss", "en", "arxiv:1908.10084", "arxiv:2205.13147", "arxiv:1705.00652", "base_model:BAAI/bge-base-en-v1.5", "base_model:finetune:BAAI/bge-base-en-v1.5", "license:apache-2.0", "model-index", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us"] | sentence-similarity | 2024-11-01T18:01:39Z |
---
language:
- en
license: apache-2.0
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:16465
- loss:MatryoshkaLoss
- loss:MultipleNegativesRankingLoss
base_model: BAAI/bge-base-en-v1.5
widget:
- source_sentence: '**1. Key Legal Issues and Holdings:**
* **Postal Ballot and Election Rules:** The case revolves around the interpretation
of the Conduct of Elections Rules, 1961, specifically Rule 27, which deals with
the receipt of postal ballots.
* **Section 59 of the Representation Act:** The main legal issue is the application
of Section 59 of the Representation of the People Act, 1951, which deals with
the manner of voting in elections.
* **Proper Counting of Votes:** The court considered the issue of proper counting
of votes, including the placement of counting agents and the presence of police
officials in the counting hall.
**2. Significant Facts of the Case:**
* The election was held on 20-5-1991, and the date of counting was initially set
for 26-5-1991, but was later postponed to 16-6-1991 due to the assassination of
Shri Rajiv Gandhi.
* The election petitioner, Shri Ajit Singh, challenged the election on grounds
of irregularities in the counting of votes, including the improper acceptance
of postal ballots.
* The Returning Officer had surrounded the counting hall with high fences and
placed benches in rows for the election agents to sit, which limited their access
to the counting tables.
* The police was present inside the counting hall, and an official video photography
of the counting process was taken.
* The complainant, Narinder Singh, made a complaint about irregularities in the
counting of votes, and six blank ballot papers and three ballots polled in favor
of the petitioner were found to be wrongly counted.
* The Chief Counting Agent, Shri N.S. Jadav, made a complaint about the irregularities,
and the Returning Officer took corrective action.
**3. Court''s Ruling:**
* The Supreme Court upheld the decision of the High Court and dismissed the appeal.
* The court held that the Returning Officer had justification to place police
officials in the counting hall to prevent disturbances.
* The court also held that the placement of benches in rows for the election agents
was necessary to prevent untoward situations developing at the time of counting.
* The court rejected the contentions of the election petitioner regarding the
improper acceptance of postal ballots and the lack of access to the counting tables.
* The court ruled that the postal ballots received after 26-5-1991, but before
the counting of votes fixed by the Election Commission, could not have been rejected.
**4. Citations:**
* **Jitendra Bahadur Singh v. Kirshna Behari**, (1969) 2 SCC 433 : (1970) 1 SCR
852
* **Halsbury''s Laws of England**, 4th Edn., Vol. 15, paras 612 and 616, referred
to'
sentences:
- Can a tenant claim automatic purchase rights if they have not complied with specific
procedural requirements?
- What are the limitations regarding the locus standi of government officials in
challenging compensation awards in land acquisition cases?
- How should irregularities in the counting process, such as the miscounting of
blank or improperly filled ballots, be addressed by the Returning Officer?
- source_sentence: '**1. Key Legal Issues and Holdings:**
* **Capital Gains and Exemption:** The case revolves around the exemption of capital
gains under Section 45 of the Income Tax Act, 1961.
* **Definition of Capital Asset:** The main legal issue is the interpretation
of the term "capital asset" under the Income Tax Act, 1961.
* **Agricultural Land and Capital Assets:** The court considered whether the land
in question was an agricultural land and, therefore, exempt from capital gains.
**2. Significant Facts of the Case:**
* The assessee purchased an extent of 4 acres of land with a hotel building in
1950 for a consideration of Rs 5.53 lakhs.
* The land was registered as urban land in the municipal records and urban land
tax was levied thereon.
* The assessee constructed two large buildings on the land, which were used for
non-residential purposes.
* The land was sold in 1966-67 at the rate of about Rs 260 per sq. yard.
* The assessee was raising bananas and vegetables on the land until the year of
sale.
* The land was situated on Mount Road, Madras, which is the main artery of the
city and its business centre.
**3. Court''s Ruling:**
* The Supreme Court allowed the Revenue''s appeal and set aside the judgment of
the High Court.
* The court held that the land in question was not an agricultural land and, therefore,
not exempt from capital gains.
* The court considered a totality of the relevant facts and circumstances, including
the location, physical characteristics, and use of the land.
* The court held that the mere fact that vegetables were being raised on the land
was a stop-gap activity and did not change the nature and character of the land.
**4. Citations:**
* **Sarifabibi Mohmed Ibrahim v. CIT**, (1993) 204 ITR 631
* **CIT v. V.A. Trivedi**, (1988) 172 ITR 95
* **Gordhanbhai Kahandas Dalwadi v. CIT**, (1981) 127 ITR 664 (Guj)
* **Motibhai D. Patel (Dr) v. CIT**, (1981) 127 ITR 671 (Guj)'
sentences:
- What are the legal implications of terminating an employee on probation, and when
is such a termination considered punitive rather than administrative?
- What factors do courts consider when determining whether land qualifies as agricultural
for the purpose of capital gains exemption?
- In what circumstances can a summary dismissal of an appeal by the High Court affect
the right of an accused to show cause against their conviction?
- source_sentence: '**1. Key Legal Issues and Holdings:**
* **Interpretation of "Income"**: The court considers the meaning of "income"
under the Income Tax Act, 1961, and its implications for taxing income from house
property.
* **Constitutionality of Taxation**: The court examines the constitutionality
of taxing income from house property, particularly under Entry 82 of List I of
the Seventh Schedule to the Constitution.
* **Legislative Power**: The court reviews the legislative power of Parliament
to levy taxes on income, including income from house property.
**2. Significant Facts of the Case:**
* The petitioner, Bhagwan Dass Jain, challenged the constitutionality of taxing
income from house property under Section 23(2) of the Act.
* The petitioner argued that there is no income in the true sense of the term
when the property is used for the assessee''s own residence.
* The respondent, the Union of India, argued that the tax is levied on the presumed
income from the property, rather than the actual income.
* The court considered the contemporaneous law relating to tax on incomes in force
at the time of the Constitution''s enactment.
**3. Court''s Ruling:**
* The court held that the word "income" in Entry 82 of List I of the Seventh Schedule
to the Constitution should be given a wider meaning, encompassing not only monetary
benefits but also presumed income.
* The court ruled that the tax under Section 23(2) of the Act is constitutional
and justified under Entry 82 of List I of the Seventh Schedule to the Constitution.
* The court rejected the petitioner''s contention that taxing income from house
property is unconstitutional.
**4. Citations:**
* **Navinchandra Mafatlal v. CIT**, (1955) 1 SCR 829: 26 ITR 758: AIR 1955 SC
58
* **Resch v. Federal Commissioner of Taxation**, 66 CLR 198-224
* **Governors of the Rotunda Hospital, Dublin v. Coman**, 7 TC 517, 586-587
* **D.M. Vakil v. CIT**, (1946) 14 ITR 298: AIR 1946 Bom 350
* **Sakarlal Balabhai v. ITO**, (1975) 100 ITR 97 (Guj)
* **Yogi Berra v. Secretary of War**, 251 US 253 (1920)
* **United States v. Doremus**, 249 US 86 (1919)'
sentences:
- What are the implications of exclusion provisions in the Customs Tariff for products
that may contain impurities?
- What are the legal implications of a voidable contract in property transactions,
and how might this affect the enforcement of an agreement to sell?
- What constitutional challenges can arise regarding the taxation of income from
house property, and how did the court address these issues in this case?
- source_sentence: '**1. Key Legal Issues and Holdings:**
* **Dues of Government Company:** The main legal issue is whether the dues of
a government company are government dues under Section 537(2) of the Companies
Act, 1956.
* **Attachment of Property:** The court considered whether an attachment of property
by a Revenue Recovery Court creates a charge in the property under Section 125
of the Companies Act, 1956.
* **Applicability of Special Statutes:** The court held that special statutes,
such as the Kerala Revenue Recovery Act, 1968, shall prevail over the Companies
Act, 1956, but only to the extent that they are applicable.
* **Conflict between Statutes:** The court considered the conflict between the
Companies Act, 1956, and the Kerala Revenue Recovery Act, 1968, in relation to
the attachment of property and the creation of a charge.
**2. Significant Facts of the Case:**
* The appellant, a government company, provided a loan to M/s Concert Capital
Limited and its sister concern, M/s Concert Securities Limited.
* The defaulting companies failed to repay the loan, and a recovery proceeding
was initiated against them under the Kerala Revenue Recovery Act, 1968.
* The properties of the defaulting companies were attached, and the appellant
sought leave to proceed with the sale of the properties.
* The High Court rejected the appellant''s application, and the Division Bench
confirmed the decision.
**3. Court''s Ruling:**
* The Supreme Court held that an attachment of property by a Revenue Recovery
Court does not create a charge in the property under Section 125 of the Companies
Act, 1956.
* The court also held that the provisions of the Companies Act, 1956, shall apply
to the recovery proceeding, but only to the extent that they are not inconsistent
with the special statutes, such as the Kerala Revenue Recovery Act, 1968.
* The court dismissed the appeal and upheld the decision of the High Court.
**4. Citations:**
* **International Coach Builders Ltd. v. Karnataka State Financial Corpn.**, (2003)
10 SCC 482
* **Rajasthan State Financial Corpn. v. Official Liquidator**, (2005) 8 SCC 190
* **ICICI Bank Ltd. v. SIDCO Leathers Ltd.**, (2006) 10 SCC 452
* **Sardar Govindrao Mahadik v. Devi Sahai**, (1982) 1 SCC 237
* **Ovation International (India) (P) Ltd., Re**, (1969) 39 Comp Cas 595 (Bom)'
sentences:
- What are the implications of Hindu Law on joint family property and the rights
of family members in cases of property sale?
- What legal implications arise from the attachment of properties by a Revenue Recovery
Court concerning the creation of charges under the Companies Act?
- What are the requirements for a valid gift under Indian law, particularly in relation
to acceptance and possession?
- source_sentence: '**1. Key Legal Issues and Holdings:**
* **Construction of a Will:** The main legal issue is the interpretation of the
will left by Kothandarama Ayyar, a Hindu inhabitant of the district of Tanjore,
to determine the disposition of his properties.
* **Adoption and Inheritance:** The case revolves around the application of the
will''s provisions regarding adoption and inheritance, particularly with regards
to the properties in dispute.
* **Construction of Specific Provisions:** The court considered the construction
of specific provisions in the will, including Paras 5, 13, and other relevant
paragraphs.
**2. Significant Facts of the Case:**
* The testator, Kothandarama Ayyar, died on 25-4-1905, leaving behind his widow,
Parbati, and two daughters, Nagammal and Gnanambal.
* The testator executed his last will on 13-3-1905, giving his widow authority
to adopt a son of Gnanambal or a nephew''s son of the testator.
* The will provides for the distribution of the testator''s properties among his
family members and charities.
* The dispute revolves around the properties in Kothangudi and Injigudi, which
are mentioned in Paras 5 and 13 of the will.
**3. Court''s Ruling:**
* The Supreme Court upheld the construction of the will by the High Court, which
held that Para 5 of the will was not operative in the present case.
* The court rejected the argument that Para 5 was meant to be operative only if
Gnanambal''s son was adopted by the widow.
* The court held that the testator''s main desire was that his widow should adopt
the son of his daughter Gnanambal, and that the provisions made for the two daughters,
the widow, and the adoptive mother were meant to be applicable under all three
contingencies referred to in the will.
* The court allowed the appeal, setting aside the judgment and decree of the High
Court, and restored the judgment and decree of the Subordinate Judge.
**4. Citations:**
* **Venkata Narasimha Appa Row v. Parthasarathy Appa Row**, Privy Council
* **Edwards, In re, Jones v. Jones**, Romer, L.J.
* **Venkata Narasimha Appa Row v. Parthasarathy Appa Row**, (1913-14) 41 IA 51
* **Jones v. Jones**, (1906) 1 Ch 570 (CA)'
sentences:
- What legal standards govern the determination of seniority between direct recruits
and promotees in law enforcement agencies in India?
- How does the U.P. Urban Buildings (Regulation of Letting, Rent & Eviction) Act,
1972 determine the applicability of rent control laws to newly constructed buildings?
- In cases involving wills, how do courts balance the testator's intentions with
the rights of surviving family members?
pipeline_tag: sentence-similarity
library_name: sentence-transformers
metrics:
- cosine_accuracy@1
- cosine_accuracy@3
- cosine_accuracy@5
- cosine_accuracy@10
- cosine_precision@1
- cosine_precision@3
- cosine_precision@5
- cosine_precision@10
- cosine_recall@1
- cosine_recall@3
- cosine_recall@5
- cosine_recall@10
- cosine_ndcg@10
- cosine_mrr@10
- cosine_map@100
model-index:
- name: SentenceTransformer based on BAAI/bge-base-en-v1.5
results:
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 768
type: dim_768
metrics:
- type: cosine_accuracy@1
value: 0.01730103806228374
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.5271049596309112
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.5547866205305652
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.734717416378316
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.01730103806228374
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.1757016532103037
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.11095732410611302
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.0734717416378316
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.01730103806228374
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.5271049596309112
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.5547866205305652
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.734717416378316
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.352689074380117
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.23119313084711088
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.239821435624779
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 512
type: dim_512
metrics:
- type: cosine_accuracy@1
value: 0.01384083044982699
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.5224913494809689
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.5501730103806228
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.7277970011534025
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.01384083044982699
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.17416378316032297
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.11003460207612456
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.07277970011534025
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.01384083044982699
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.5224913494809689
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.5501730103806228
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.7277970011534025
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.3494776306062529
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.2289238571245499
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.2378257173312991
name: Cosine Map@100
---
# SentenceTransformer based on BAAI/bge-base-en-v1.5
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5) on the json dataset. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5) <!-- at revision a5beb1e3e68b9ab74eb54cfd186867f64f240e1a -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
- **Training Dataset:**
- json
- **Language:** en
- **License:** apache-2.0
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': True}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
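The module stack above applies CLS-token pooling to the transformer output and then L2-normalizes the result. A minimal sketch of that pooling logic in plain Python, using tiny hypothetical token vectors in place of the real 768-dimensional BertModel outputs:

```python
import math

def cls_pool_and_normalize(token_embeddings):
    """CLS pooling: take the first token's vector, then L2-normalize it.

    `token_embeddings` is a list of per-token vectors with hypothetical
    values; the real model produces 768-dimensional BertModel outputs.
    """
    cls = token_embeddings[0]                  # pooling_mode_cls_token=True
    norm = math.sqrt(sum(x * x for x in cls))  # L2 norm of the CLS vector
    return [x / norm for x in cls]             # the Normalize() module

# Toy 4-dimensional "token embeddings" for three tokens
tokens = [[3.0, 4.0, 0.0, 0.0], [1.0, 1.0, 1.0, 1.0], [0.5, 0.5, 0.0, 0.0]]
emb = cls_pool_and_normalize(tokens)
print(emb)  # [0.6, 0.8, 0.0, 0.0]
```

Because of the final normalization, the dot product of two embeddings equals their cosine similarity, which is what the similarity function above relies on.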
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("Tejasw1/bge-base-case-law-v1")
# Run inference
sentences = [
"**1. Key Legal Issues and Holdings:**\n\n* **Construction of a Will:** The main legal issue is the interpretation of the will left by Kothandarama Ayyar, a Hindu inhabitant of the district of Tanjore, to determine the disposition of his properties.\n* **Adoption and Inheritance:** The case revolves around the application of the will's provisions regarding adoption and inheritance, particularly with regards to the properties in dispute.\n* **Construction of Specific Provisions:** The court considered the construction of specific provisions in the will, including Paras 5, 13, and other relevant paragraphs.\n\n**2. Significant Facts of the Case:**\n\n* The testator, Kothandarama Ayyar, died on 25-4-1905, leaving behind his widow, Parbati, and two daughters, Nagammal and Gnanambal.\n* The testator executed his last will on 13-3-1905, giving his widow authority to adopt a son of Gnanambal or a nephew's son of the testator.\n* The will provides for the distribution of the testator's properties among his family members and charities.\n* The dispute revolves around the properties in Kothangudi and Injigudi, which are mentioned in Paras 5 and 13 of the will.\n\n**3. Court's Ruling:**\n\n* The Supreme Court upheld the construction of the will by the High Court, which held that Para 5 of the will was not operative in the present case.\n* The court rejected the argument that Para 5 was meant to be operative only if Gnanambal's son was adopted by the widow.\n* The court held that the testator's main desire was that his widow should adopt the son of his daughter Gnanambal, and that the provisions made for the two daughters, the widow, and the adoptive mother were meant to be applicable under all three contingencies referred to in the will.\n* The court allowed the appeal, setting aside the judgment and decree of the High Court, and restored the judgment and decree of the Subordinate Judge.\n\n**4. Citations:**\n\n* **Venkata Narasimha Appa Row v. Parthasarathy Appa Row**, Privy Council\n* **Edwards, In re, Jones v. Jones**, Romer, L.J.\n* **Venkata Narasimha Appa Row v. Parthasarathy Appa Row**, (1913-14) 41 IA 51\n* **Jones v. Jones**, (1906) 1 Ch 570 (CA)",
"In cases involving wills, how do courts balance the testator's intentions with the rights of surviving family members?",
'How does the U.P. Urban Buildings (Regulation of Letting, Rent & Eviction) Act, 1972 determine the applicability of rent control laws to newly constructed buildings?',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Information Retrieval
* Dataset: `dim_768`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| cosine_accuracy@1 | 0.0173 |
| cosine_accuracy@3 | 0.5271 |
| cosine_accuracy@5 | 0.5548 |
| cosine_accuracy@10 | 0.7347 |
| cosine_precision@1 | 0.0173 |
| cosine_precision@3 | 0.1757 |
| cosine_precision@5 | 0.111 |
| cosine_precision@10 | 0.0735 |
| cosine_recall@1 | 0.0173 |
| cosine_recall@3 | 0.5271 |
| cosine_recall@5 | 0.5548 |
| cosine_recall@10 | 0.7347 |
| cosine_ndcg@10 | 0.3527 |
| cosine_mrr@10 | 0.2312 |
| **cosine_map@100** | **0.2398** |
#### Information Retrieval
* Dataset: `dim_512`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| cosine_accuracy@1 | 0.0138 |
| cosine_accuracy@3 | 0.5225 |
| cosine_accuracy@5 | 0.5502 |
| cosine_accuracy@10 | 0.7278 |
| cosine_precision@1 | 0.0138 |
| cosine_precision@3 | 0.1742 |
| cosine_precision@5 | 0.11 |
| cosine_precision@10 | 0.0728 |
| cosine_recall@1 | 0.0138 |
| cosine_recall@3 | 0.5225 |
| cosine_recall@5 | 0.5502 |
| cosine_recall@10 | 0.7278 |
| cosine_ndcg@10 | 0.3495 |
| cosine_mrr@10 | 0.2289 |
| **cosine_map@100** | **0.2378** |
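Since the model was trained with MatryoshkaLoss, the leading dimensions of each embedding remain useful on their own, which is what the `dim_512` evaluation above measures. A minimal sketch of truncate-then-renormalize in plain Python (hypothetical low-dimensional vectors; with recent sentence-transformers versions, passing `truncate_dim` when constructing `SentenceTransformer` achieves the same effect):

```python
import math

def truncate_and_renormalize(vec, dim):
    """Keep the first `dim` components of an embedding, then L2-normalize.

    MatryoshkaLoss trains the leading dimensions to carry most of the
    signal, so truncated embeddings stay usable for cosine similarity.
    """
    head = vec[:dim]
    norm = math.sqrt(sum(x * x for x in head))
    return [x / norm for x in head]

def cosine(a, b):
    return sum(x * y for x, y in zip(a, b))  # inputs are unit-normalized

# Toy 8-dimensional unit vectors standing in for 768-d embeddings
a = truncate_and_renormalize([0.5, 0.5, 0.5, 0.5, 0.0, 0.0, 0.0, 0.0], 4)
b = truncate_and_renormalize([0.5, 0.5, -0.5, 0.5, 0.0, 0.0, 0.0, 0.0], 4)
print(len(a))        # 4
print(cosine(a, b))  # 0.5
```

The small gap between the `dim_768` and `dim_512` tables (e.g. cosine_map@100 of 0.2398 vs. 0.2378) illustrates how little retrieval quality is lost by truncating to 512 dimensions.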
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### json
* Dataset: json
* Size: 16,465 training samples
* Columns: <code>anchor</code> and <code>positive</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive |
|:--------|:----------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 2 tokens</li><li>mean: 26.38 tokens</li><li>max: 72 tokens</li></ul> | <ul><li>min: 333 tokens</li><li>mean: 490.59 tokens</li><li>max: 512 tokens</li></ul> |
* Samples:
| anchor | positive |
|:--------------------------------------------------------------------------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>What factors do courts consider when evaluating the reliability of eyewitness testimonies in murder trials?</code> | <code>**1. Key Legal Issues and Holdings:**<br><br>* **Culpable Homicide:** The court considered the application of Section 302 of the Indian Penal Code, 1860, which deals with punishment for culpable homicide not amounting to murder.<br>* **Section 302 IPC:** The court upheld the conviction of the accused under Section 302 IPC for the death of Ishwardeen.<br>* **Section 34 IPC:** The court also upheld the conviction of the accused under Section 34 IPC for the death of Ishwardeen, considering the common intention of the accused.<br><br>**2. Significant Facts of the Case:**<br><br>* The deceased, Ishwardeen, was killed in a alleged incident involving the accused, Bhagwan Das, Sheo Mohan, and Shanker @ Sheo Shanker.<br>* The incident occurred on August 18, 1983, at the house of Bhagwan Das, where Ishwardeen and his family were residing.<br>* The accused had allegedly demanded rent from Ishwardeen and had threatened to evict him from the house.<br>* Ishwardeen was killed by the accused, who allegedly gave him 8-9 knife blows.<br>* The case was registered under Section 307 IPC, but it was later converted to Section 302 IPC after Ishwardeen's death.<br><br>**3. Court's Ruling:**<br><br>* The High Court upheld the conviction of Bhagwan Das, Sheo Mohan, and Shanker @ Sheo Shanker under Section 302 IPC and Section 34 IPC.<br>* The court rejected the defense argument that the testimony of the eyewitnesses, Kamla Devi and Subhash, could not be relied upon.<br>* The court relied on the testimony of the eyewitnesses, which was corroborated by the medical evidence.<br>* The court also relied on the post-mortem report, which showed that the antemortem injuries were possible to be caused by a knife at around 9:00 p.m. 
on August 18, 1983.<br>* The court held that the accused had a common intention to commit the murder of Ishwardeen, and therefore, the conviction under Section 302 IPC with the aid of Section 34 IPC was upheld.<br>* The court also rejected the defense argument that the accused had no motive to commit the murder, and held that the presence of semi-digested food in the stomach of the deceased did not contradict the prosecution's case.<br><br>**4. Citations:**<br><br>* **Dalip Singh v. State of Punjab**<br>* **Section 302 IPC**<br>* **Section 34 IPC**<br>* **Thaman Kumar v. State of Union Territory of Chandigarh**<br>* **State of H.P. v. Jeet Singh**<br>* **Appa Bhat v. State of Gujarat**<br>* **Krishna Mochi v. State of Bihar**<br>* **Israr v. State of U.P.**<br>* **Gali Venkataiah v. State of A.P.**<br>* **Masalti v. State of U.P.**<br>* **Vadivelu Thevar v. State of Madras**<br>* **Galivenkataiah v. State of A.P.**</code> |
| <code>What principles guide the court's decisions on wage fixation in cases involving government undertakings?</code> | <code>**1. Key Legal Issues and Holdings:**<br><br>* **Wage Structure:** The main legal issue is whether the wage structure of a government undertaking in the public sector should be different from that of an undertaking in the private sector.<br>* **Section 10(1)(d) of the Industrial Disputes Act, 1947:** The court considered the applicability of this provision in the context of wage fixation.<br>* **Article 39 and 43 of the Directive Principles of State Policy:** The court examined the constitutional implications of making a distinction between laborers in the public and private sectors.<br>* **Region-cum-Industry Principle:** The court upheld the principle of region-cum-industry in wage fixation.<br>* **Gratuity Scheme:** The court considered the validity of the gratuity scheme introduced by the Tribunal, which provided for payment of gratuity to workmen upon retirement or termination of service.<br>* **Fair Wages:** The court examined the issue of fair wages and whether the Tribunal had violated the accepted norms in fixing the wage structure and dearness allowance.<br>* **Retirement Age:** The court held that the retirement age of the employees should be raised to 60 years, and the discretion to continue employees beyond 58 years should be removed.<br><br>**2. 
Significant Facts of the Case:**<br><br>* The case involves a government undertaking, Hindustan Antibiotics Limited, and its workmen.<br>* The company is a limited one, but it is financed entirely by the Central Government.<br>* The workmen presented a charter of fifteen demands to the company, including demands related to wage fixation.<br>* The Industrial Tribunal, Bombay, made an award dated October 8, 1963, which was challenged by the company and the workmen.<br>* The company argued that the wage structure in the public sector should be different from that in the private sector.<br>* The workmen argued that the character of the employer is irrelevant in wage fixation.<br><br>**3. Court's Ruling:**<br><br>* The court upheld the principle of region-cum-industry in wage fixation.<br>* The court rejected the argument that the character of the employer is relevant in determining wage structures.<br>* The court held that the distinction between public and private sectors is not justified in wage fixation.<br>* The court also noted that the constitutional directive principles of equal pay for equal work and a living wage for all workers should be followed.<br>* The court validated the gratuity scheme introduced by the Tribunal.<br>* The court rejected the contention that the Tribunal had violated the accepted norms in fixing the wage structure and dearness allowance.<br>* The court raised the retirement age of employees to 60 years and removed the discretion to continue employees beyond 58 years.<br>* The court dismissed the appeals by the company and the workmen, except for modifying the award regarding the age of retirement.<br><br>**4. Citations:**<br><br>* **Crown Aluminium Works v. Workmen**<br>* **Express Newspapers (Private) Ltd. v. Union of India**<br>* **French Motor Car Co. Ltd. v. Workmen**<br>* **Hindustan Times Ltd., New Delhi v. Workmen**<br>* Other relevant cases and reports cited in the judgment.</code> |
| <code>- What role does the recommendation of a State Government play in the land exemption process under the Punjab Towns Improvement Act, 1922?</code> | <code>**1. Key Legal Issues and Holdings:**<br><br>* **Acquisition of Land:** The main legal issue is the acquisition of land by the Improvement Trust under the Punjab Towns Improvement Act, 1922.<br>* **Exemption of Orchards:** The court considered whether the Improvement Trust was bound to exempt orchards from acquisition under Section 56 of the Act.<br>* **Article 14 of the Constitution:** The court held that the Improvement Trust did not violate Article 14 of the Constitution by exempting some orchards while acquiring others.<br>* **Quasi-Judicial Capacity:** The court held that the Improvement Trust acts in a quasi-judicial capacity when determining claims under Section 56 of the Act, but is not bound to give reasons for its decisions.<br>* **Locus Standi:** The court observed that the appellants had no locus standi to invoke Section 56 of the Act, as the acquisition of their land had not been discovered to be unnecessary for the execution of the scheme.<br>* **Power to Exempt Lands:** The court held that the Improvement Trust did not possess the power to exempt lands from the scheme under Section 56 of the Act.<br><br>**2. 
Significant Facts of the Case:**<br><br>* The Improvement Trust framed a development scheme in 1961 under Section 24 read with Section 28(2) of the Act.<br>* The scheme covered an area of approximately 128 acres, and the Trust acquired the land, including the appellants' land, in 1964.<br>* The appellants applied to the State Government for exempting their land from acquisition on the ground that it contained a fully developed orchard.<br>* The State Government recommended their case to the Chairman of the Improvement Trust, but the Trust refused to exempt their land.<br>* The appellants claimed that the Trust had exempted similar orchards of other persons, and that this was a violation of Article 14 of the Constitution.<br><br>**3. Court's Ruling:**<br><br>* The High Court initially allowed the appellants' writ petition, directing the Trust to allow them a full opportunity of hearing regarding their case for exemption.<br>* The Trust refused to exempt their land, and the appellants appealed to the High Court.<br>* The High Court dismissed the appeal, holding that the Trust had given reasons for its decision and that the appellants had not shown how their land was unnecessary for the execution of the scheme.<br>* The Supreme Court dismissed the appeal, holding that the Improvement Trust did not violate Article 14 of the Constitution by exempting some orchards while acquiring others.<br><br>**4. Citations:**<br><br>* **Punjab Towns Improvement Act, 1922**<br>* **Article 14 of the Constitution of India**<br>* **Section 56 of the Punjab Towns Improvement Act, 1922**<br>* **Section 24 read with Section 28(2) of the Punjab Towns Improvement Act, 1922**<br>* **Section 43 of the Punjab Towns Improvement Act, 1922**</code> |
* Loss: [<code>MatryoshkaLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#matryoshkaloss) with these parameters:
```json
{
"loss": "MultipleNegativesRankingLoss",
"matryoshka_dims": [
768,
512
],
"matryoshka_weights": [
1,
1
],
"n_dims_per_step": -1
}
```
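The two `matryoshka_dims` above mean the same embedding is supervised at both 768 and 512 dimensions, so downstream users can truncate vectors to the smaller size. A minimal pure-Python sketch of the truncate-and-renormalize step that such usage implies (the embedding values here are made up for illustration, not real model output):

```python
import math

def truncate_and_normalize(embedding, dim):
    """Keep the first `dim` components and re-normalize to unit length,
    as done when using a Matryoshka-trained model at a smaller dimension."""
    head = embedding[:dim]
    norm = math.sqrt(sum(x * x for x in head))
    return [x / norm for x in head]

# Toy 768-dim "embedding" (illustrative values only).
full = [math.sin(i + 1) for i in range(768)]

small = truncate_and_normalize(full, 512)
print(len(small))                           # 512
print(round(sum(x * x for x in small), 6))  # 1.0
```

Cosine similarities computed on the truncated vectors approximate those of the full-dimensional embeddings, which is what the `dim_512` metrics in the evaluation tables measure.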
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: epoch
- `per_device_train_batch_size`: 16
- `gradient_accumulation_steps`: 8
- `learning_rate`: 2e-05
- `num_train_epochs`: 4
- `lr_scheduler_type`: cosine
- `warmup_ratio`: 0.1
- `bf16`: True
- `tf32`: True
- `load_best_model_at_end`: True
- `optim`: adamw_torch_fused
- `batch_sampler`: no_duplicates
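With gradient accumulation, the effective batch size and the cosine-schedule warmup length follow directly from the values above. A quick sanity check (assuming a single-GPU run, which the `local_rank: 0` entry in the full hyperparameter list suggests):

```python
# Derived from the non-default hyperparameters listed above.
per_device_batch = 16
grad_accum = 8
num_devices = 1            # assumption: single GPU
warmup_ratio = 0.1
total_steps = 512          # final step reported in the training log

effective_batch = per_device_batch * grad_accum * num_devices
warmup_steps = int(total_steps * warmup_ratio)

print(effective_batch)  # 128
print(warmup_steps)     # 51
```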
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: epoch
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 8
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 8
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 2e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 4
- `max_steps`: -1
- `lr_scheduler_type`: cosine
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: True
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: True
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: True
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch_fused
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
<details><summary>Click to expand</summary>
| Epoch | Step | Training Loss | dim_512_cosine_map@100 | dim_768_cosine_map@100 |
|:----------:|:-------:|:-------------:|:----------------------:|:----------------------:|
| 0.0777 | 10 | 1.58 | - | - |
| 0.1553 | 20 | 1.0799 | - | - |
| 0.2330 | 30 | 0.6653 | - | - |
| 0.3107 | 40 | 0.4524 | - | - |
| 0.3883 | 50 | 0.3962 | - | - |
| 0.4660 | 60 | 0.3472 | - | - |
| 0.5437 | 70 | 0.3481 | - | - |
| 0.6214 | 80 | 0.3034 | - | - |
| 0.6990 | 90 | 0.3612 | - | - |
| 0.7767 | 100 | 0.2497 | - | - |
| 0.8544 | 110 | 0.2424 | - | - |
| 0.9320 | 120 | 0.3037 | - | - |
| **0.9942** | **128** | **-** | **0.2359** | **0.2435** |
| 1.0097 | 130 | 0.2795 | - | - |
| 1.0874 | 140 | 0.2519 | - | - |
| 1.1650 | 150 | 0.2414 | - | - |
| 1.2427 | 160 | 0.1837 | - | - |
| 1.3204 | 170 | 0.1734 | - | - |
| 1.3981 | 180 | 0.1462 | - | - |
| 1.4757 | 190 | 0.1593 | - | - |
| 1.5534 | 200 | 0.1648 | - | - |
| 1.6311 | 210 | 0.1593 | - | - |
| 1.7087 | 220 | 0.1737 | - | - |
| 1.7864 | 230 | 0.1237 | - | - |
| 1.8641 | 240 | 0.1205 | - | - |
| 1.9417 | 250 | 0.1611 | - | - |
| 1.9961 | 257 | - | 0.2376 | 0.2424 |
| 2.0194 | 260 | 0.1674 | - | - |
| 2.0971 | 270 | 0.135 | - | - |
| 2.1748 | 280 | 0.1464 | - | - |
| 2.2524 | 290 | 0.1119 | - | - |
| 2.3301 | 300 | 0.089 | - | - |
| 2.4078 | 310 | 0.0774 | - | - |
| 2.4854 | 320 | 0.1039 | - | - |
| 2.5631 | 330 | 0.1218 | - | - |
| 2.6408 | 340 | 0.1001 | - | - |
| 2.7184 | 350 | 0.1072 | - | - |
| 2.7961 | 360 | 0.0774 | - | - |
| 2.8738 | 370 | 0.0855 | - | - |
| 2.9515 | 380 | 0.1096 | - | - |
| 2.9981 | 386 | - | 0.2402 | 0.2381 |
| 3.0291 | 390 | 0.1076 | - | - |
| 3.1068 | 400 | 0.1019 | - | - |
| 3.1845 | 410 | 0.1139 | - | - |
| 3.2621 | 420 | 0.0732 | - | - |
| 3.3398 | 430 | 0.0831 | - | - |
| 3.4175 | 440 | 0.0613 | - | - |
| 3.4951 | 450 | 0.092 | - | - |
| 3.5728 | 460 | 0.0891 | - | - |
| 3.6505 | 470 | 0.0896 | - | - |
| 3.7282 | 480 | 0.0861 | - | - |
| 3.8058 | 490 | 0.0743 | - | - |
| 3.8835 | 500 | 0.077 | - | - |
| 3.9612 | 510 | 0.1056 | - | - |
| 3.9767 | 512 | - | 0.2393 | 0.2393 |
| 0.0777 | 10 | 0.3691 | - | - |
| 0.1553 | 20 | 0.3126 | - | - |
| 0.2330 | 30 | 0.279 | - | - |
| 0.3107 | 40 | 0.2477 | - | - |
| 0.3883 | 50 | 0.2436 | - | - |
| 0.4660 | 60 | 0.2307 | - | - |
| 0.5437 | 70 | 0.2487 | - | - |
| 0.6214 | 80 | 0.2463 | - | - |
| 0.6990 | 90 | 0.2965 | - | - |
| 0.7767 | 100 | 0.2101 | - | - |
| 0.8544 | 110 | 0.1999 | - | - |
| 0.9320 | 120 | 0.2561 | - | - |
| **0.9942** | **128** | **-** | **0.2399** | **0.242** |
| 1.0097 | 130 | 0.2504 | - | - |
| 1.0874 | 140 | 0.246 | - | - |
| 1.1650 | 150 | 0.2043 | - | - |
| 1.2427 | 160 | 0.171 | - | - |
| 1.3204 | 170 | 0.1499 | - | - |
| 1.3981 | 180 | 0.1402 | - | - |
| 1.4757 | 190 | 0.1379 | - | - |
| 1.5534 | 200 | 0.156 | - | - |
| 1.6311 | 210 | 0.1669 | - | - |
| 1.7087 | 220 | 0.1578 | - | - |
| 1.7864 | 230 | 0.1157 | - | - |
| 1.8641 | 240 | 0.1279 | - | - |
| 1.9417 | 250 | 0.1766 | - | - |
| 1.9961 | 257 | - | 0.2386 | 0.2410 |
| 2.0194 | 260 | 0.1693 | - | - |
| 2.0971 | 270 | 0.1424 | - | - |
| 2.1748 | 280 | 0.1517 | - | - |
| 2.2524 | 290 | 0.1151 | - | - |
| 2.3301 | 300 | 0.0974 | - | - |
| 2.4078 | 310 | 0.083 | - | - |
| 2.4854 | 320 | 0.1021 | - | - |
| 2.5631 | 330 | 0.1305 | - | - |
| 2.6408 | 340 | 0.1102 | - | - |
| 2.7184 | 350 | 0.1118 | - | - |
| 2.7961 | 360 | 0.089 | - | - |
| 2.8738 | 370 | 0.1111 | - | - |
| 2.9515 | 380 | 0.145 | - | - |
| 2.9981 | 386 | - | 0.2372 | 0.2400 |
| 3.0291 | 390 | 0.1115 | - | - |
| 3.1068 | 400 | 0.1036 | - | - |
| 3.1845 | 410 | 0.1164 | - | - |
| 3.2621 | 420 | 0.0728 | - | - |
| 3.3398 | 430 | 0.0879 | - | - |
| 3.4175 | 440 | 0.0657 | - | - |
| 3.4951 | 450 | 0.0825 | - | - |
| 3.5728 | 460 | 0.0986 | - | - |
| 3.6505 | 470 | 0.1074 | - | - |
| 3.7282 | 480 | 0.0923 | - | - |
| 3.8058 | 490 | 0.078 | - | - |
| 3.8835 | 500 | 0.0962 | - | - |
| 3.9612 | 510 | 0.1078 | - | - |
| 3.9767 | 512 | - | 0.2378 | 0.2398 |
* The bold row denotes the saved checkpoint.
</details>
### Framework Versions
- Python: 3.11.5
- Sentence Transformers: 3.1.1
- Transformers: 4.45.2
- PyTorch: 2.5.1+cu124
- Accelerate: 0.34.2
- Datasets: 3.0.0
- Tokenizers: 0.20.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MatryoshkaLoss
```bibtex
@misc{kusupati2024matryoshka,
title={Matryoshka Representation Learning},
author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
year={2024},
eprint={2205.13147},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
MatthewFrank/bert-large-uncased_pytorch_5k_V01
|
MatthewFrank
| 2024-11-01T18:00:00Z | 107 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-11-01T17:58:47Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.

- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Alwaly/face_poofing_detection
|
Alwaly
| 2024-11-01T17:58:44Z | 223 | 1 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"generated_from_trainer",
"base_model:google/vit-base-patch16-224-in21k",
"base_model:finetune:google/vit-base-patch16-224-in21k",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2024-10-29T16:17:12Z |
---
library_name: transformers
license: apache-2.0
base_model: google/vit-base-patch16-224-in21k
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: face_poofing_detection
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# face_poofing_detection
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6273
- Accuracy: 0.9871
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
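The reported `total_train_batch_size` follows from the batch size and accumulation steps, and the steps-per-epoch figure in the results table gives a rough idea of the training-set size. A small sanity check (the dataset-size figure is an estimate inferred from the table, not a documented number):

```python
# Derived from the hyperparameters and training-results table above.
train_batch = 16
grad_accum = 4
total_batch = train_batch * grad_accum
print(total_batch)  # 64 — matches the reported total_train_batch_size

# Step 48 corresponds to epoch 0.9846, i.e. ~48.75 optimizer steps per epoch,
# implying roughly this many training images (an inferred estimate):
steps_per_epoch = 48 / 0.9846
approx_train_images = round(steps_per_epoch * total_batch)
print(approx_train_images)  # 3120
```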
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| 6.3243 | 0.9846 | 48 | 5.6154 | 0.8919 |
| 4.4794 | 1.9897 | 97 | 4.3516 | 0.9202 |
| 3.8293 | 2.9949 | 146 | 3.6687 | 0.9730 |
| 3.2121 | 4.0 | 195 | 3.1092 | 0.9820 |
| 2.733 | 4.9846 | 243 | 2.6919 | 0.9743 |
| 2.3114 | 5.9897 | 292 | 2.2633 | 0.9923 |
| 1.9962 | 6.9949 | 341 | 1.9594 | 0.9923 |
| 1.7789 | 8.0 | 390 | 1.7641 | 0.9897 |
| 1.6642 | 8.9846 | 438 | 1.6506 | 0.9910 |
| 1.6005 | 9.8462 | 480 | 1.6273 | 0.9871 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.5.0+cu121
- Datasets 3.1.0
- Tokenizers 0.19.1
|
Xu-Ouyang/pythia-1.4b-deduped-int3-step1-GPTQ-wikitext2
|
Xu-Ouyang
| 2024-11-01T17:58:04Z | 75 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"3-bit",
"gptq",
"region:us"
] |
text-generation
| 2024-11-01T17:57:15Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
oma7777/llama3.18B-Fine-tunedByOmar4BITMERGDextrarefine
|
oma7777
| 2024-11-01T17:57:53Z | 75 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] |
text-generation
| 2024-11-01T17:54:04Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
fbolanos/LRO_BigBird_Final_Augmented
|
fbolanos
| 2024-11-01T17:56:47Z | 119 | 0 |
transformers
|
[
"transformers",
"safetensors",
"big_bird",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-11-01T17:56:26Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
|
MaRyAm1295/Llama-3.2-3B-KAM
|
MaRyAm1295
| 2024-11-01T17:56:40Z | 6 | 0 |
peft
|
[
"peft",
"safetensors",
"llama",
"trl",
"sft",
"Llama",
"generated_from_trainer",
"base_model:meta-llama/Llama-3.2-3B-Instruct",
"base_model:adapter:meta-llama/Llama-3.2-3B-Instruct",
"license:llama3.2",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2024-11-01T13:20:07Z |
---
base_model: meta-llama/Llama-3.2-3B-Instruct
library_name: peft
license: llama3.2
tags:
- trl
- sft
- Llama
- generated_from_trainer
model-index:
- name: Llama-3.2-3B-KAM
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Llama-3.2-3B-KAM
This model is a fine-tuned version of [meta-llama/Llama-3.2-3B-Instruct](https://huggingface.co/meta-llama/Llama-3.2-3B-Instruct) on an unspecified dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 8
- seed: 3407
- gradient_accumulation_steps: 8
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 20
- training_steps: 1200
- mixed_precision_training: Native AMP
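As a sketch of how these hyperparameters interact, the snippet below reproduces the effective batch size and the shape of a linear scheduler with warmup; it mirrors the listed settings, not the exact trainer code.

```python
# Illustration of the hyperparameters above; not the trainer's internals.

def effective_batch_size(per_device: int, grad_accum: int) -> int:
    # total_train_batch_size = train_batch_size * gradient_accumulation_steps
    return per_device * grad_accum

def linear_lr(step: int, base_lr: float = 2e-4,
              warmup_steps: int = 20, total_steps: int = 1200) -> float:
    # Linear warmup from 0 to base_lr, then linear decay back to 0.
    if step < warmup_steps:
        return base_lr * step / warmup_steps
    return base_lr * max(0.0, (total_steps - step) / (total_steps - warmup_steps))

print(effective_batch_size(1, 8))  # matches total_train_batch_size = 8
print(linear_lr(10))               # mid-warmup: half of base_lr
print(linear_lr(1200))             # end of training: 0.0
```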
### Training results
| Step | Training Loss |
|-----:|--------------:|
| 50 | 2.436700 |
| 100 | 2.103400 |
| 150 | 2.048900 |
| 200 | 2.041700 |
| 250 | 2.002900 |
| 300 | 1.991700 |
| 350 | 1.977400 |
| 400 | 1.974500 |
| 450 | 1.945000 |
| 500 | 1.951100 |
| 550 | 1.950700 |
| 600 | 1.943000 |
| 650 | 1.927900 |
| 700 | 1.920900 |
| 750 | 1.903400 |
| 800 | 1.896000 |
| 850 | 1.910800 |
| 900 | 1.904600 |
| 950 | 1.918100 |
| 1000 | 1.911500 |
| 1050 | 1.909100 |
| 1100 | 1.928900 |
| 1150 | 1.896100 |
| 1200 | 1.876700 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.44.2
- Pytorch 2.5.0+cu121
- Datasets 3.0.2
- Tokenizers 0.19.1
|
Yntec/Aurantium
|
Yntec
| 2024-11-01T17:54:01Z | 323 | 2 |
diffusers
|
[
"diffusers",
"safetensors",
"Anime",
"Girls",
"Style",
"Disney",
"Art",
"Ikena",
"PromptSharingSamaritan",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"base_model:Yntec/WesternCartoon",
"base_model:merge:Yntec/WesternCartoon",
"base_model:digiplay/Sudachi_diffusers",
"base_model:merge:digiplay/Sudachi_diffusers",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2024-11-01T03:22:10Z |
---
license: other
tags:
- Anime
- Girls
- Style
- Disney
- Art
- Ikena
- PromptSharingSamaritan
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
inference: true
base_model:
- digiplay/Sudachi_diffusers
- Yntec/WesternCartoon
base_model_relation: merge
---
# Aurantium
Sudachi with WesternCartoon's compositions and flat style. Samples and prompts (all use seed 9119):

A realistic painting of a CUTE CHIBI playing videogames, potted plants, ponytail and classical antiquities by ROSSDRAWS. 4k by raphael and alberto vargas, cinematic lighting, highly detailed and intricate

masterpiece, best quality, 1girl, 90s anime blue eyes, long hair, white hair, tree, stairs, standing, kimono, sky, cherry blossoms, temple, looking at viewer, upper body, from below, looking back, chibi

pretty Tiny mischievous CUTE girl wearing a puffy teal jacket, DETAILED EYES, greatly face, Magazine Ad, playing, lush market overgrown city, smooth, intricate, elegant, digital painting, artstation, concept art, sharp focus, illustration, art by sam spratt and ROSSDRAWS, valorant character

masterpiece, best quality, 1girl, retro 80s anime black eyes, medium hair, stairs, cherry blossoms, temple, fox girl, detached sleeves, animal ears, happy, arms behind back, Chibi
Original pages:
https://civitai.com/models/62060/western-cartoon-type-a
https://civitai.com/models/85909/sudachi
# Recipe:
- SuperMerger Weight sum, use MBW: 1,0,0,0,1,1,1,1,1,1,1,1,1,0,0,0,0,0,0,0,0,0,0,1,1,1
- Model A: Sudachi
- Model B: western-cartoon-type-a
- Output Model: Aurantium
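The per-block weight-sum merge above can be sketched as follows. This is an illustration of what an MBW "Weight sum" does (out = (1 − w)·A + w·B for each of the 26 UNet blocks), not SuperMerger's actual code; the toy two-element blocks stand in for real tensors.

```python
# MBW weights from the recipe above, one per UNet block.
mbw = [1,0,0,0,1,1,1,1,1,1,1,1,1,0,0,0,0,0,0,0,0,0,0,1,1,1]

def merge_block(a, b, w):
    # Element-wise weighted sum of two blocks' parameters.
    return [(1 - w) * x + w * y for x, y in zip(a, b)]

# Toy 2-parameter "blocks" standing in for real tensors.
block_a = [0.2, 0.4]   # from model A (Sudachi)
block_b = [1.0, 2.0]   # from model B (western-cartoon-type-a)

merged = [merge_block(block_a, block_b, w) for w in mbw]
print(merged[0])   # w = 1 -> block B's weights
print(merged[1])   # w = 0 -> block A's weights
```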
|
MaziyarPanahi/Llama-3-Aetheric-Hermes-Lexi-Smaug-8B-GGUF
|
MaziyarPanahi
| 2024-11-01T17:45:53Z | 112 | 0 | null |
[
"gguf",
"mistral",
"quantized",
"2-bit",
"3-bit",
"4-bit",
"5-bit",
"6-bit",
"8-bit",
"GGUF",
"text-generation",
"base_model:ZeroXClem/Llama-3-Aetheric-Hermes-Lexi-Smaug-8B",
"base_model:quantized:ZeroXClem/Llama-3-Aetheric-Hermes-Lexi-Smaug-8B",
"region:us",
"conversational"
] |
text-generation
| 2024-11-01T17:22:27Z |
---
tags:
- quantized
- 2-bit
- 3-bit
- 4-bit
- 5-bit
- 6-bit
- 8-bit
- GGUF
- text-generation
- text-generation
model_name: Llama-3-Aetheric-Hermes-Lexi-Smaug-8B-GGUF
base_model: ZeroXClem/Llama-3-Aetheric-Hermes-Lexi-Smaug-8B
inference: false
model_creator: ZeroXClem
pipeline_tag: text-generation
quantized_by: MaziyarPanahi
---
# [MaziyarPanahi/Llama-3-Aetheric-Hermes-Lexi-Smaug-8B-GGUF](https://huggingface.co/MaziyarPanahi/Llama-3-Aetheric-Hermes-Lexi-Smaug-8B-GGUF)
- Model creator: [ZeroXClem](https://huggingface.co/ZeroXClem)
- Original model: [ZeroXClem/Llama-3-Aetheric-Hermes-Lexi-Smaug-8B](https://huggingface.co/ZeroXClem/Llama-3-Aetheric-Hermes-Lexi-Smaug-8B)
## Description
[MaziyarPanahi/Llama-3-Aetheric-Hermes-Lexi-Smaug-8B-GGUF](https://huggingface.co/MaziyarPanahi/Llama-3-Aetheric-Hermes-Lexi-Smaug-8B-GGUF) contains GGUF format model files for [ZeroXClem/Llama-3-Aetheric-Hermes-Lexi-Smaug-8B](https://huggingface.co/ZeroXClem/Llama-3-Aetheric-Hermes-Lexi-Smaug-8B).
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for storytelling.
* [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. Note: as of the time of writing (November 27th, 2023), ctransformers has not been updated in a long time and does not support many recent models.
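All of the clients above identify GGUF files by the header defined in the GGUF spec: the file begins with the 4-byte ASCII magic `GGUF` followed by a little-endian uint32 format version. The sketch below sniffs that header; the file name is hypothetical, and a tiny synthetic header stands in for a real multi-gigabyte model file.

```python
import struct

def read_gguf_header(path: str) -> int:
    # Per the GGUF spec: 4-byte magic "GGUF", then little-endian uint32 version.
    with open(path, "rb") as f:
        magic = f.read(4)
        if magic != b"GGUF":
            raise ValueError(f"{path} is not a GGUF file (magic={magic!r})")
        (version,) = struct.unpack("<I", f.read(4))
    return version

# Demo with a synthetic header instead of a real model file.
with open("demo.gguf", "wb") as f:
    f.write(b"GGUF" + struct.pack("<I", 3))

print(read_gguf_header("demo.gguf"))  # -> 3
```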
## Special thanks
Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the whole team working on [llama.cpp](https://github.com/ggerganov/llama.cpp/) for making all of this possible.
|
mradermacher/Qwen2.5-32B-EVA-Instruct-Merge-0.1-i1-GGUF
|
mradermacher
| 2024-11-01T17:39:09Z | 114 | 0 |
transformers
|
[
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:Downtown-Case/Qwen2.5-32B-EVA-Instruct-Merge-0.1",
"base_model:quantized:Downtown-Case/Qwen2.5-32B-EVA-Instruct-Merge-0.1",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2024-11-01T06:54:29Z |
---
base_model: Downtown-Case/Qwen2.5-32B-EVA-Instruct-Merge-0.1
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/Downtown-Case/Qwen2.5-32B-EVA-Instruct-Merge-0.1
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Qwen2.5-32B-EVA-Instruct-Merge-0.1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
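Multi-part GGUF uploads are plain byte splits, so the parts are simply concatenated in order. A minimal sketch (the file names are illustrative, and two tiny stand-in parts are created so the commands are self-contained):

```shell
# Simulate two parts of a split GGUF file:
printf 'first-half-' > model.i1-Q6_K.gguf.part1of2
printf 'second-half' > model.i1-Q6_K.gguf.part2of2
# Recombine the parts, in order, into the original file:
cat model.i1-Q6_K.gguf.part1of2 model.i1-Q6_K.gguf.part2of2 > model.i1-Q6_K.gguf
cat model.i1-Q6_K.gguf
```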
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-32B-EVA-Instruct-Merge-0.1-i1-GGUF/resolve/main/Qwen2.5-32B-EVA-Instruct-Merge-0.1.i1-IQ1_S.gguf) | i1-IQ1_S | 7.4 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-32B-EVA-Instruct-Merge-0.1-i1-GGUF/resolve/main/Qwen2.5-32B-EVA-Instruct-Merge-0.1.i1-IQ1_M.gguf) | i1-IQ1_M | 8.0 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-32B-EVA-Instruct-Merge-0.1-i1-GGUF/resolve/main/Qwen2.5-32B-EVA-Instruct-Merge-0.1.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 9.1 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-32B-EVA-Instruct-Merge-0.1-i1-GGUF/resolve/main/Qwen2.5-32B-EVA-Instruct-Merge-0.1.i1-IQ2_XS.gguf) | i1-IQ2_XS | 10.1 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-32B-EVA-Instruct-Merge-0.1-i1-GGUF/resolve/main/Qwen2.5-32B-EVA-Instruct-Merge-0.1.i1-IQ2_S.gguf) | i1-IQ2_S | 10.5 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-32B-EVA-Instruct-Merge-0.1-i1-GGUF/resolve/main/Qwen2.5-32B-EVA-Instruct-Merge-0.1.i1-IQ2_M.gguf) | i1-IQ2_M | 11.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-32B-EVA-Instruct-Merge-0.1-i1-GGUF/resolve/main/Qwen2.5-32B-EVA-Instruct-Merge-0.1.i1-Q2_K.gguf) | i1-Q2_K | 12.4 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-32B-EVA-Instruct-Merge-0.1-i1-GGUF/resolve/main/Qwen2.5-32B-EVA-Instruct-Merge-0.1.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 12.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-32B-EVA-Instruct-Merge-0.1-i1-GGUF/resolve/main/Qwen2.5-32B-EVA-Instruct-Merge-0.1.i1-IQ3_XS.gguf) | i1-IQ3_XS | 13.8 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-32B-EVA-Instruct-Merge-0.1-i1-GGUF/resolve/main/Qwen2.5-32B-EVA-Instruct-Merge-0.1.i1-Q3_K_S.gguf) | i1-Q3_K_S | 14.5 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-32B-EVA-Instruct-Merge-0.1-i1-GGUF/resolve/main/Qwen2.5-32B-EVA-Instruct-Merge-0.1.i1-IQ3_S.gguf) | i1-IQ3_S | 14.5 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-32B-EVA-Instruct-Merge-0.1-i1-GGUF/resolve/main/Qwen2.5-32B-EVA-Instruct-Merge-0.1.i1-IQ3_M.gguf) | i1-IQ3_M | 14.9 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-32B-EVA-Instruct-Merge-0.1-i1-GGUF/resolve/main/Qwen2.5-32B-EVA-Instruct-Merge-0.1.i1-Q3_K_M.gguf) | i1-Q3_K_M | 16.0 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-32B-EVA-Instruct-Merge-0.1-i1-GGUF/resolve/main/Qwen2.5-32B-EVA-Instruct-Merge-0.1.i1-Q3_K_L.gguf) | i1-Q3_K_L | 17.3 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-32B-EVA-Instruct-Merge-0.1-i1-GGUF/resolve/main/Qwen2.5-32B-EVA-Instruct-Merge-0.1.i1-IQ4_XS.gguf) | i1-IQ4_XS | 17.8 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-32B-EVA-Instruct-Merge-0.1-i1-GGUF/resolve/main/Qwen2.5-32B-EVA-Instruct-Merge-0.1.i1-Q4_0.gguf) | i1-Q4_0 | 18.8 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-32B-EVA-Instruct-Merge-0.1-i1-GGUF/resolve/main/Qwen2.5-32B-EVA-Instruct-Merge-0.1.i1-Q4_K_S.gguf) | i1-Q4_K_S | 18.9 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-32B-EVA-Instruct-Merge-0.1-i1-GGUF/resolve/main/Qwen2.5-32B-EVA-Instruct-Merge-0.1.i1-Q4_K_M.gguf) | i1-Q4_K_M | 20.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-32B-EVA-Instruct-Merge-0.1-i1-GGUF/resolve/main/Qwen2.5-32B-EVA-Instruct-Merge-0.1.i1-Q5_K_S.gguf) | i1-Q5_K_S | 22.7 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-32B-EVA-Instruct-Merge-0.1-i1-GGUF/resolve/main/Qwen2.5-32B-EVA-Instruct-Merge-0.1.i1-Q5_K_M.gguf) | i1-Q5_K_M | 23.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-32B-EVA-Instruct-Merge-0.1-i1-GGUF/resolve/main/Qwen2.5-32B-EVA-Instruct-Merge-0.1.i1-Q6_K.gguf) | i1-Q6_K | 27.0 | practically like static Q6_K |
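As a rough sanity check on the table above, the bits-per-weight implied by each file size can be estimated from the parameter count; the ~32.8e9 figure for Qwen2.5-32B is approximate, and the calculation ignores the (small) non-quantized metadata in the file.

```python
# Estimate bits-per-weight from file size, assuming ~32.8B parameters.
def bits_per_weight(size_gb: float, n_params: float = 32.8e9) -> float:
    return size_gb * 1e9 * 8 / n_params

print(round(bits_per_weight(18.9), 2))  # i1-Q4_K_S: ~4.6 bpw
print(round(bits_per_weight(27.0), 2))  # i1-Q6_K:   ~6.6 bpw
```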
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
GonzaloMG/marigold-e2e-ft-normals
|
GonzaloMG
| 2024-11-01T17:35:40Z | 497 | 3 |
diffusers
|
[
"diffusers",
"safetensors",
"normals",
"monocular normals estimation",
"in-the-wild",
"zero-shot",
"single-step",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2024-09-13T13:03:09Z |
---
license: apache-2.0
tags:
- normals
- monocular normals estimation
- in-the-wild
- zero-shot
- single-step
---
|
ibm-research/biomed.sm.mv-te-84m-MoleculeNet-ligand_scaffold-FREESOLV-101
|
ibm-research
| 2024-11-01T17:35:26Z | 12 | 1 |
SmallMoleculeMultiView
|
[
"SmallMoleculeMultiView",
"safetensors",
"binding-affinity-prediction",
"bio-medical",
"chemistry",
"drug-discovery",
"drug-target-interaction",
"model_hub_mixin",
"molecular-property-prediction",
"moleculenet",
"molecules",
"multi-view",
"multimodal",
"pytorch_model_hub_mixin",
"small-molecules",
"virtual-screening",
"arxiv:2410.19704",
"base_model:ibm-research/biomed.sm.mv-te-84m",
"base_model:finetune:ibm-research/biomed.sm.mv-te-84m",
"license:apache-2.0",
"region:us"
] | null | 2024-10-25T23:58:19Z |
---
base_model: ibm/biomed.sm.mv-te-84m
library_name: SmallMoleculeMultiView
license: apache-2.0
tags:
- binding-affinity-prediction
- bio-medical
- chemistry
- drug-discovery
- drug-target-interaction
- model_hub_mixin
- molecular-property-prediction
- moleculenet
- molecules
- multi-view
- multimodal
- pytorch_model_hub_mixin
- small-molecules
- virtual-screening
---
# ibm/biomed.sm.mv-te-84m-MoleculeNet-ligand_scaffold-FREESOLV-101
`biomed.sm.mv-te-84m` is a multimodal biomedical foundation model for small molecules created using **MMELON** (**M**ulti-view **M**olecular **E**mbedding with **L**ate Fusi**on**), a flexible approach to aggregating multiple views (sequence, image, graph) of molecules in a foundation-model setting. While models based on a single-view representation typically perform well on some downstream tasks but not others, the multi-view model performs robustly across a wide range of property-prediction tasks encompassing ligand-protein binding, molecular solubility, metabolism, and toxicity. It has been applied to screen compounds against a large (> 100 targets) set of G protein-coupled receptors (GPCRs) to identify strong binders for 33 targets related to Alzheimer's disease, which are validated through structure-based modeling and identification of key binding motifs [Multi-view biomedical foundation models for molecule-target and property prediction](https://arxiv.org/abs/2410.19704).
- **Developers:** IBM Research
- **GitHub Repository:** [https://github.com/BiomedSciAI/biomed-multi-view](https://github.com/BiomedSciAI/biomed-multi-view)
- **Paper:** [Multi-view biomedical foundation models for molecule-target and property prediction](https://arxiv.org/abs/2410.19704)
- **Release Date**: Oct 28th, 2024
- **License:** [Apache 2.0](https://www.apache.org/licenses/LICENSE-2.0)
## Model Description
Source code for the model and finetuning is made available in [this repository](https://github.com/BiomedSciAI/biomed-multi-view).

* Image Representation: Captures the 2D visual depiction of molecular structures, highlighting features like symmetry, bond angles, and functional groups. Molecular images are generated using RDKit and undergo data augmentation during training to enhance robustness.
* Graph Representation: Encodes molecules as undirected graphs where nodes represent atoms and edges represent bonds. Atom-specific properties (e.g., atomic number, chirality) and bond-specific properties (e.g., bond type, stereochemistry) are embedded using categorical embedding techniques.
* Text Representation: Utilizes SMILES strings to represent chemical structures, tokenized with a custom tokenizer. The sequences are embedded using a transformer-based architecture to capture the sequential nature of the chemical information.
The embeddings from these single-view pre-trained encoders are combined using an attention-based aggregator module. This module learns to weight each view appropriately, producing a unified multi-view embedding. This approach leverages the strengths of each representation to improve performance on downstream predictive tasks.
## Intended Use and Limitations
The model is intended for: (1) molecular property prediction — the pre-trained model may be fine-tuned for both regression and classification tasks, including but not limited to binding affinity, solubility, and toxicity; (2) similarity search — pre-trained model embeddings may be used as the basis for similarity measures to search a chemical library; (3) combined representations — small-molecule embeddings provided by the model may be combined with protein embeddings to fine-tune on tasks that use both; (4) select task-specific fine-tuned models, provided as examples. Through these activities, the model may aid in aspects of molecular discovery such as lead finding or optimization.
The model's domain of applicability is small, drug-like molecules; it is intended for use with molecules of less than 1000 Da molecular weight. The MMELON approach itself may be extended to include proteins and other macromolecules, but the model does not at present provide embeddings for such entities, nor is it intended for molecular generation. Molecules must be given as a valid SMILES string that represents a valid chemically bonded graph; invalid inputs will degrade performance or lead to errors.
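Use case (2) — using embeddings as the basis for similarity search — can be sketched as ranking a small library by cosine similarity to a query embedding. Real vectors would come from `SmallMoleculeMultiViewModel.get_embeddings`; dummy 3-dimensional vectors and hypothetical molecule names are used here so the example is self-contained.

```python
import math

def cosine(u, v):
    # Cosine similarity between two equal-length vectors.
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

query = [0.9, 0.1, 0.3]
library = {
    "mol_a": [0.8, 0.2, 0.3],    # similar to the query
    "mol_b": [-0.5, 0.9, -0.1],  # dissimilar
}
ranked = sorted(library, key=lambda k: cosine(query, library[k]), reverse=True)
print(ranked[0])  # most similar molecule
```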
## Usage
Using `SmallMoleculeMultiView` API requires the codebase [https://github.com/BiomedSciAI/biomed-multi-view](https://github.com/BiomedSciAI/biomed-multi-view)
## Installation
Follow these steps to set up the `biomed-multi-view` codebase on your system.
### Prerequisites
* Operating System: Linux or macOS
* Python Version: Python 3.11
* Conda: Anaconda or Miniconda installed
* Git: Version control to clone the repository
### Step 1: Set up the project directory
Choose a root directory where you want to install `biomed-multi-view`. For example:
```bash
export ROOT_DIR=~/biomed-multiview
mkdir -p $ROOT_DIR
```
#### Step 2: Create and activate a Conda environment
```bash
conda create -y python=3.11 --prefix $ROOT_DIR/envs/biomed-multiview
```
Activate the environment:
```bash
conda activate $ROOT_DIR/envs/biomed-multiview
```
#### Step 3: Clone the repository
Navigate to the project directory and clone the repository:
```bash
mkdir -p $ROOT_DIR/code
cd $ROOT_DIR/code
# Clone the repository using HTTPS
git clone https://github.com/BiomedSciAI/biomed-multi-view.git
# Navigate into the cloned repository
cd biomed-multi-view
```
Note: If you prefer using SSH, ensure that your SSH keys are set up with GitHub and use the following command:
```bash
git clone git@github.com:BiomedSciAI/biomed-multi-view.git
```
#### Step 4: Install package dependencies
Install the package in editable mode along with development dependencies:
``` bash
pip install -e .['dev']
```
Install additional requirements:
``` bash
pip install -r requirements.txt
```
#### Step 5: macOS-Specific instructions (Apple Silicon)
If you are using a Mac with Apple Silicon (M1/M2/M3) and the zsh shell, you may need to disable globbing for the installation command:
``` bash
noglob pip install -e .[dev]
```
Install macOS-specific requirements optimized for Apple's Metal Performance Shaders (MPS):
```bash
pip install -r requirements-mps.txt
```
#### Step 6: Installation verification (optional)
Verify that the installation was successful by running unit tests
```bash
python -m unittest bmfm_sm.tests.all_tests
```
### Get embedding example
You can generate embeddings for a given molecule using the pretrained model with the following code.
```python
# Necessary imports
from bmfm_sm.api.smmv_api import SmallMoleculeMultiViewModel
from bmfm_sm.core.data_modules.namespace import LateFusionStrategy
# Load Model
model = SmallMoleculeMultiViewModel.from_pretrained(
LateFusionStrategy.ATTENTIONAL,
model_path="ibm/biomed.sm.mv-te-84m",
huggingface=True
)
# Load Model and get embeddings for a molecule
example_smiles = "CC(C)CC1=CC=C(C=C1)C(C)C(=O)O"
example_emb = SmallMoleculeMultiViewModel.get_embeddings(
smiles=example_smiles,
model_path="ibm/biomed.sm.mv-te-84m",
huggingface=True,
)
print(example_emb.shape)
```
### Get prediction example
You can use the finetuned models to make predictions on new data.
``` python
from bmfm_sm.api.smmv_api import SmallMoleculeMultiViewModel
from bmfm_sm.api.dataset_registry import DatasetRegistry
# Initialize the dataset registry
dataset_registry = DatasetRegistry()
# Example SMILES string
example_smiles = "CC(C)C1CCC(C)CC1O"
# Get dataset information for dataset
ds = dataset_registry.get_dataset_info("FREESOLV")
# Load the finetuned model for the dataset
finetuned_model_ds = SmallMoleculeMultiViewModel.from_finetuned(
ds,
model_path="ibm/biomed.sm.mv-te-84m-MoleculeNet-ligand_scaffold-FREESOLV-101",
inference_mode=True,
huggingface=True
)
# Get predictions
prediction = SmallMoleculeMultiViewModel.get_predictions(
example_smiles, ds, finetuned_model=finetuned_model_ds
)
print("Prediction:", prediction)
```
For more advanced usage, see our detailed examples at: https://github.com/BiomedSciAI/biomed-multi-view
## Citation
If you found our work useful, please consider giving a star to the repo and citing our paper:
```
@misc{suryanarayanan2024multiviewbiomedicalfoundationmodels,
title={Multi-view biomedical foundation models for molecule-target and property prediction},
author={Parthasarathy Suryanarayanan and Yunguang Qiu and Shreyans Sethi and Diwakar Mahajan and Hongyang Li and Yuxin Yang and Elif Eyigoz and Aldo Guzman Saenz and Daniel E. Platt and Timothy H. Rumbell and Kenney Ng and Sanjoy Dey and Myson Burch and Bum Chul Kwon and Pablo Meyer and Feixiong Cheng and Jianying Hu and Joseph A. Morrone},
year={2024},
eprint={2410.19704},
archivePrefix={arXiv},
primaryClass={q-bio.BM},
url={https://arxiv.org/abs/2410.19704},
}
```
|
ibm-research/biomed.sm.mv-te-84m-MoleculeNet-ligand_scaffold-LIPOPHILICITY-101
|
ibm-research
| 2024-11-01T17:34:53Z | 11 | 1 |
SmallMoleculeMultiView
|
[
"SmallMoleculeMultiView",
"safetensors",
"binding-affinity-prediction",
"bio-medical",
"chemistry",
"drug-discovery",
"drug-target-interaction",
"model_hub_mixin",
"molecular-property-prediction",
"moleculenet",
"molecules",
"multi-view",
"multimodal",
"pytorch_model_hub_mixin",
"small-molecules",
"virtual-screening",
"arxiv:2410.19704",
"base_model:ibm-research/biomed.sm.mv-te-84m",
"base_model:finetune:ibm-research/biomed.sm.mv-te-84m",
"license:apache-2.0",
"region:us"
] | null | 2024-10-25T23:26:04Z |
---
base_model: ibm/biomed.sm.mv-te-84m
library_name: SmallMoleculeMultiView
license: apache-2.0
tags:
- binding-affinity-prediction
- bio-medical
- chemistry
- drug-discovery
- drug-target-interaction
- model_hub_mixin
- molecular-property-prediction
- moleculenet
- molecules
- multi-view
- multimodal
- pytorch_model_hub_mixin
- small-molecules
- virtual-screening
---
# ibm/biomed.sm.mv-te-84m-MoleculeNet-ligand_scaffold-LIPOPHILICITY-101
`biomed.sm.mv-te-84m` is a multimodal biomedical foundation model for small molecules created using **MMELON** (**M**ulti-view **M**olecular **E**mbedding with **L**ate Fusi**on**), a flexible approach to aggregating multiple views (sequence, image, graph) of molecules in a foundation model setting. While models based on a single-view representation typically perform well on some downstream tasks and not others, the multi-view model performs robustly across a wide range of property prediction tasks encompassing ligand-protein binding, molecular solubility, metabolism and toxicity. It has been applied to screen compounds against a large (> 100 targets) set of G protein-coupled receptors (GPCRs) to identify strong binders for 33 targets related to Alzheimer's disease, which were validated through structure-based modeling and identification of key binding motifs [Multi-view biomedical foundation models for molecule-target and property prediction](https://arxiv.org/abs/2410.19704).
- **Developers:** IBM Research
- **GitHub Repository:** [https://github.com/BiomedSciAI/biomed-multi-view](https://github.com/BiomedSciAI/biomed-multi-view)
- **Paper:** [Multi-view biomedical foundation models for molecule-target and property prediction](https://arxiv.org/abs/2410.19704)
- **Release Date**: Oct 28th, 2024
- **License:** [Apache 2.0](https://www.apache.org/licenses/LICENSE-2.0)
## Model Description
Source code for the model and finetuning is made available in [this repository](https://github.com/BiomedSciAI/biomed-multi-view).

* Image Representation: Captures the 2D visual depiction of molecular structures, highlighting features like symmetry, bond angles, and functional groups. Molecular images are generated using RDKit and undergo data augmentation during training to enhance robustness.
* Graph Representation: Encodes molecules as undirected graphs where nodes represent atoms and edges represent bonds. Atom-specific properties (e.g., atomic number, chirality) and bond-specific properties (e.g., bond type, stereochemistry) are embedded using categorical embedding techniques.
* Text Representation: Utilizes SMILES strings to represent chemical structures, tokenized with a custom tokenizer. The sequences are embedded using a transformer-based architecture to capture the sequential nature of the chemical information.
The embeddings from these single-view pre-trained encoders are combined using an attention-based aggregator module. This module learns to weight each view appropriately, producing a unified multi-view embedding. This approach leverages the strengths of each representation to improve performance on downstream predictive tasks.
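The aggregation step can be illustrated with a toy sketch. This is not the repository's aggregator module, which is a trained PyTorch component; the pure-Python code below only shows the mechanism of scoring each view, normalizing the scores with a softmax, and taking the weighted sum (all names and sizes are illustrative):

```python
import math
import random

def softmax(xs):
    # Numerically stable softmax over a list of scores
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attentional_fusion(view_embs, score_w):
    """Toy late-fusion aggregator: score each view, softmax, weighted sum.

    view_embs: one embedding (list of floats) per view (image, graph, text).
    score_w:   a scoring vector standing in for the learned attention params.
    """
    scores = [sum(w * e for w, e in zip(score_w, emb)) for emb in view_embs]
    alphas = softmax(scores)  # attention weights, sum to 1
    dim = len(view_embs[0])
    fused = [sum(a * emb[i] for a, emb in zip(alphas, view_embs))
             for i in range(dim)]
    return fused, alphas

random.seed(0)
views = [[random.gauss(0, 1) for _ in range(8)] for _ in range(3)]  # image, graph, text
score_w = [random.gauss(0, 1) for _ in range(8)]
fused, alphas = attentional_fusion(views, score_w)
print(len(fused), round(sum(alphas), 6))  # 8 1.0
```

Because the weights are a softmax output, the fused embedding is a convex combination of the single-view embeddings, so no view can dominate unless the aggregator learns to give it nearly all the weight.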
## Intended Use and Limitations
The model is intended for: (1) Molecular property prediction; the pre-trained model may be fine-tuned for both regression and classification tasks, with examples including but not limited to binding affinity, solubility and toxicity. (2) Pre-trained model embeddings may be used as the basis for similarity measures to search a chemical library. (3) Small molecule embeddings provided by the model may be combined with protein embeddings to fine-tune on tasks that utilize both small molecule and protein representations. (4) Select task-specific fine-tuned models are given as examples. Through the listed activities, the model may aid in aspects of molecular discovery such as lead finding or optimization.
The model's domain of applicability is small, drug-like molecules. It is intended for use with molecules of less than 1000 Da molecular weight. The MMELON approach itself may be extended to include proteins and other macromolecules but does not at present provide embeddings for such entities. The model is at present not intended for molecular generation. Molecules must be given as a valid SMILES string that represents a valid chemically bonded graph. Invalid inputs will impact performance or lead to errors.
## Usage
Using the `SmallMoleculeMultiView` API requires the codebase at [https://github.com/BiomedSciAI/biomed-multi-view](https://github.com/BiomedSciAI/biomed-multi-view).
## Installation
Follow these steps to set up the `biomed-multi-view` codebase on your system.
### Prerequisites
* Operating System: Linux or macOS
* Python Version: Python 3.11
* Conda: Anaconda or Miniconda installed
* Git: Version control to clone the repository
### Step 1: Set up the project directory
Choose a root directory where you want to install `biomed-multi-view`. For example:
```bash
export ROOT_DIR=~/biomed-multiview
mkdir -p $ROOT_DIR
```
### Step 2: Create and activate a Conda environment
```bash
conda create -y python=3.11 --prefix $ROOT_DIR/envs/biomed-multiview
```
Activate the environment:
```bash
conda activate $ROOT_DIR/envs/biomed-multiview
```
### Step 3: Clone the repository
Navigate to the project directory and clone the repository:
```bash
mkdir -p $ROOT_DIR/code
cd $ROOT_DIR/code
# Clone the repository using HTTPS
git clone https://github.com/BiomedSciAI/biomed-multi-view.git
# Navigate into the cloned repository
cd biomed-multi-view
```
Note: If you prefer using SSH, ensure that your SSH keys are set up with GitHub and use the following command:
```bash
git clone git@github.com:BiomedSciAI/biomed-multi-view.git
```
### Step 4: Install package dependencies
Install the package in editable mode along with development dependencies:
``` bash
pip install -e .['dev']
```
Install additional requirements:
``` bash
pip install -r requirements.txt
```
### Step 5: macOS-specific instructions (Apple Silicon)
If you are using a Mac with Apple Silicon (M1/M2/M3) and the zsh shell, you may need to disable globbing for the installation command:
``` bash
noglob pip install -e .[dev]
```
Install macOS-specific requirements optimized for Apple's Metal Performance Shaders (MPS):
```bash
pip install -r requirements-mps.txt
```
### Step 6: Installation verification (optional)
Verify that the installation was successful by running the unit tests:
```bash
python -m unittest bmfm_sm.tests.all_tests
```
### Get embedding example
You can generate embeddings for a given molecule using the pretrained model with the following code.
```python
# Necessary imports
from bmfm_sm.api.smmv_api import SmallMoleculeMultiViewModel
from bmfm_sm.core.data_modules.namespace import LateFusionStrategy
# Load Model
model = SmallMoleculeMultiViewModel.from_pretrained(
LateFusionStrategy.ATTENTIONAL,
model_path="ibm/biomed.sm.mv-te-84m",
huggingface=True
)
# Get embeddings for a molecule
example_smiles = "CC(C)CC1=CC=C(C=C1)C(C)C(=O)O"
example_emb = SmallMoleculeMultiViewModel.get_embeddings(
smiles=example_smiles,
model_path="ibm/biomed.sm.mv-te-84m",
huggingface=True,
)
print(example_emb.shape)
```
### Get prediction example
You can use the finetuned models to make predictions on new data.
``` python
from bmfm_sm.api.smmv_api import SmallMoleculeMultiViewModel
from bmfm_sm.api.dataset_registry import DatasetRegistry
# Initialize the dataset registry
dataset_registry = DatasetRegistry()
# Example SMILES string
example_smiles = "CC(C)C1CCC(C)CC1O"
# Get dataset information for the LIPOPHILICITY dataset
ds = dataset_registry.get_dataset_info("LIPOPHILICITY")
# Load the finetuned model for the dataset
finetuned_model_ds = SmallMoleculeMultiViewModel.from_finetuned(
ds,
model_path="ibm/biomed.sm.mv-te-84m-MoleculeNet-ligand_scaffold-LIPOPHILICITY-101",
inference_mode=True,
huggingface=True
)
# Get predictions
prediction = SmallMoleculeMultiViewModel.get_predictions(
example_smiles, ds, finetuned_model=finetuned_model_ds
)
print("Prediction:", prediction)
```
For more advanced usage, see our detailed examples at: https://github.com/BiomedSciAI/biomed-multi-view
## Citation
If you found our work useful, please consider giving a star to the repo and citing our paper:
```
@misc{suryanarayanan2024multiviewbiomedicalfoundationmodels,
title={Multi-view biomedical foundation models for molecule-target and property prediction},
author={Parthasarathy Suryanarayanan and Yunguang Qiu and Shreyans Sethi and Diwakar Mahajan and Hongyang Li and Yuxin Yang and Elif Eyigoz and Aldo Guzman Saenz and Daniel E. Platt and Timothy H. Rumbell and Kenney Ng and Sanjoy Dey and Myson Burch and Bum Chul Kwon and Pablo Meyer and Feixiong Cheng and Jianying Hu and Joseph A. Morrone},
year={2024},
eprint={2410.19704},
archivePrefix={arXiv},
primaryClass={q-bio.BM},
url={https://arxiv.org/abs/2410.19704},
}
```
|
ibm-research/biomed.sm.mv-te-84m-MoleculeNet-ligand_scaffold-SIDER-101
|
ibm-research
| 2024-11-01T17:34:42Z | 13 | 2 |
SmallMoleculeMultiView
|
[
"SmallMoleculeMultiView",
"safetensors",
"binding-affinity-prediction",
"bio-medical",
"chemistry",
"drug-discovery",
"drug-target-interaction",
"model_hub_mixin",
"molecular-property-prediction",
"moleculenet",
"molecules",
"multi-view",
"multimodal",
"pytorch_model_hub_mixin",
"small-molecules",
"virtual-screening",
"arxiv:2410.19704",
"base_model:ibm-research/biomed.sm.mv-te-84m",
"base_model:finetune:ibm-research/biomed.sm.mv-te-84m",
"license:apache-2.0",
"region:us"
] | null | 2024-10-25T23:13:11Z |
---
base_model: ibm/biomed.sm.mv-te-84m
library_name: SmallMoleculeMultiView
license: apache-2.0
tags:
- binding-affinity-prediction
- bio-medical
- chemistry
- drug-discovery
- drug-target-interaction
- model_hub_mixin
- molecular-property-prediction
- moleculenet
- molecules
- multi-view
- multimodal
- pytorch_model_hub_mixin
- small-molecules
- virtual-screening
---
# ibm/biomed.sm.mv-te-84m-MoleculeNet-ligand_scaffold-SIDER-101
`biomed.sm.mv-te-84m` is a multimodal biomedical foundation model for small molecules created using **MMELON** (**M**ulti-view **M**olecular **E**mbedding with **L**ate Fusi**on**), a flexible approach to aggregating multiple views (sequence, image, graph) of molecules in a foundation model setting. While models based on a single-view representation typically perform well on some downstream tasks and not others, the multi-view model performs robustly across a wide range of property prediction tasks encompassing ligand-protein binding, molecular solubility, metabolism and toxicity. It has been applied to screen compounds against a large (> 100 targets) set of G protein-coupled receptors (GPCRs) to identify strong binders for 33 targets related to Alzheimer's disease, which were validated through structure-based modeling and identification of key binding motifs [Multi-view biomedical foundation models for molecule-target and property prediction](https://arxiv.org/abs/2410.19704).
- **Developers:** IBM Research
- **GitHub Repository:** [https://github.com/BiomedSciAI/biomed-multi-view](https://github.com/BiomedSciAI/biomed-multi-view)
- **Paper:** [Multi-view biomedical foundation models for molecule-target and property prediction](https://arxiv.org/abs/2410.19704)
- **Release Date**: Oct 28th, 2024
- **License:** [Apache 2.0](https://www.apache.org/licenses/LICENSE-2.0)
## Model Description
Source code for the model and finetuning is made available in [this repository](https://github.com/BiomedSciAI/biomed-multi-view).

* Image Representation: Captures the 2D visual depiction of molecular structures, highlighting features like symmetry, bond angles, and functional groups. Molecular images are generated using RDKit and undergo data augmentation during training to enhance robustness.
* Graph Representation: Encodes molecules as undirected graphs where nodes represent atoms and edges represent bonds. Atom-specific properties (e.g., atomic number, chirality) and bond-specific properties (e.g., bond type, stereochemistry) are embedded using categorical embedding techniques.
* Text Representation: Utilizes SMILES strings to represent chemical structures, tokenized with a custom tokenizer. The sequences are embedded using a transformer-based architecture to capture the sequential nature of the chemical information.
The embeddings from these single-view pre-trained encoders are combined using an attention-based aggregator module. This module learns to weight each view appropriately, producing a unified multi-view embedding. This approach leverages the strengths of each representation to improve performance on downstream predictive tasks.
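In symbols, such an aggregator assigns each view a weight via a softmax over learned scores and sums the weighted views. The following is a sketch of one common parameterization of attention-based weighting, not necessarily the exact form used by MMELON, which is defined in the repository; here $e_v$ is the embedding of view $v$ and $\mathbf{W}$, $\mathbf{w}$ stand in for the learned attention parameters:

```latex
\alpha_v = \frac{\exp\left(\mathbf{w}^\top \tanh(\mathbf{W} e_v)\right)}
                {\sum_{u \in V} \exp\left(\mathbf{w}^\top \tanh(\mathbf{W} e_u)\right)},
\qquad
z = \sum_{v \in V} \alpha_v \, e_v,
\qquad
V = \{\text{image}, \text{graph}, \text{text}\}
```

The weights $\alpha_v$ are non-negative and sum to one, so the multi-view embedding $z$ is a convex combination of the single-view embeddings.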
## Intended Use and Limitations
The model is intended for: (1) Molecular property prediction; the pre-trained model may be fine-tuned for both regression and classification tasks, with examples including but not limited to binding affinity, solubility and toxicity. (2) Pre-trained model embeddings may be used as the basis for similarity measures to search a chemical library. (3) Small molecule embeddings provided by the model may be combined with protein embeddings to fine-tune on tasks that utilize both small molecule and protein representations. (4) Select task-specific fine-tuned models are given as examples. Through the listed activities, the model may aid in aspects of molecular discovery such as lead finding or optimization.
The model's domain of applicability is small, drug-like molecules. It is intended for use with molecules of less than 1000 Da molecular weight. The MMELON approach itself may be extended to include proteins and other macromolecules but does not at present provide embeddings for such entities. The model is at present not intended for molecular generation. Molecules must be given as a valid SMILES string that represents a valid chemically bonded graph. Invalid inputs will impact performance or lead to errors.
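Given these constraints, it can be worth pre-filtering a library before inference. A proper check would parse each SMILES with RDKit and use `Descriptors.MolWt`; the sketch below is only a crude heuristic that sums heavy-atom masses from element tokens in the string and ignores hydrogens, isotopes and charges, so it underestimates the true molecular weight:

```python
import re

# Average atomic masses for common heavy atoms (hydrogens ignored)
ATOM_MASS = {"C": 12.011, "N": 14.007, "O": 15.999, "S": 32.06,
             "P": 30.974, "F": 18.998, "Cl": 35.45, "Br": 79.904, "I": 126.904}

def heavy_atom_weight(smiles):
    """Crude lower bound on molecular weight from a SMILES string.

    Matches two-letter halogens first, then single-letter organic-subset
    atoms (aromatic lowercase included). Only a screening heuristic;
    use RDKit for a real molecular-weight computation.
    """
    total = 0.0
    for token in re.findall(r"Cl|Br|[CNOSPFI]|[cnos]", smiles):
        total += ATOM_MASS[token.capitalize()]
    return total

# Ibuprofen: 13 C + 2 O heavy atoms, well under the 1000 Da guidance
print(heavy_atom_weight("CC(C)CC1=CC=C(C=C1)C(C)C(=O)O"))  # ~188
```

A molecule whose heavy-atom total already exceeds 1000 Da can be safely dropped before embedding; borderline cases should be re-checked with RDKit.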
## Usage
Using the `SmallMoleculeMultiView` API requires the codebase at [https://github.com/BiomedSciAI/biomed-multi-view](https://github.com/BiomedSciAI/biomed-multi-view).
## Installation
Follow these steps to set up the `biomed-multi-view` codebase on your system.
### Prerequisites
* Operating System: Linux or macOS
* Python Version: Python 3.11
* Conda: Anaconda or Miniconda installed
* Git: Version control to clone the repository
### Step 1: Set up the project directory
Choose a root directory where you want to install `biomed-multi-view`. For example:
```bash
export ROOT_DIR=~/biomed-multiview
mkdir -p $ROOT_DIR
```
### Step 2: Create and activate a Conda environment
```bash
conda create -y python=3.11 --prefix $ROOT_DIR/envs/biomed-multiview
```
Activate the environment:
```bash
conda activate $ROOT_DIR/envs/biomed-multiview
```
### Step 3: Clone the repository
Navigate to the project directory and clone the repository:
```bash
mkdir -p $ROOT_DIR/code
cd $ROOT_DIR/code
# Clone the repository using HTTPS
git clone https://github.com/BiomedSciAI/biomed-multi-view.git
# Navigate into the cloned repository
cd biomed-multi-view
```
Note: If you prefer using SSH, ensure that your SSH keys are set up with GitHub and use the following command:
```bash
git clone git@github.com:BiomedSciAI/biomed-multi-view.git
```
### Step 4: Install package dependencies
Install the package in editable mode along with development dependencies:
``` bash
pip install -e .['dev']
```
Install additional requirements:
``` bash
pip install -r requirements.txt
```
### Step 5: macOS-specific instructions (Apple Silicon)
If you are using a Mac with Apple Silicon (M1/M2/M3) and the zsh shell, you may need to disable globbing for the installation command:
``` bash
noglob pip install -e .[dev]
```
Install macOS-specific requirements optimized for Apple's Metal Performance Shaders (MPS):
```bash
pip install -r requirements-mps.txt
```
### Step 6: Installation verification (optional)
Verify that the installation was successful by running the unit tests:
```bash
python -m unittest bmfm_sm.tests.all_tests
```
### Get embedding example
You can generate embeddings for a given molecule using the pretrained model with the following code.
```python
# Necessary imports
from bmfm_sm.api.smmv_api import SmallMoleculeMultiViewModel
from bmfm_sm.core.data_modules.namespace import LateFusionStrategy
# Load Model
model = SmallMoleculeMultiViewModel.from_pretrained(
LateFusionStrategy.ATTENTIONAL,
model_path="ibm/biomed.sm.mv-te-84m",
huggingface=True
)
# Get embeddings for a molecule
example_smiles = "CC(C)CC1=CC=C(C=C1)C(C)C(=O)O"
example_emb = SmallMoleculeMultiViewModel.get_embeddings(
smiles=example_smiles,
model_path="ibm/biomed.sm.mv-te-84m",
huggingface=True,
)
print(example_emb.shape)
```
### Get prediction example
You can use the finetuned models to make predictions on new data.
``` python
from bmfm_sm.api.smmv_api import SmallMoleculeMultiViewModel
from bmfm_sm.api.dataset_registry import DatasetRegistry
# Initialize the dataset registry
dataset_registry = DatasetRegistry()
# Example SMILES string
example_smiles = "CC(C)C1CCC(C)CC1O"
# Get dataset information for the SIDER dataset
ds = dataset_registry.get_dataset_info("SIDER")
# Load the finetuned model for the dataset
finetuned_model_ds = SmallMoleculeMultiViewModel.from_finetuned(
ds,
model_path="ibm/biomed.sm.mv-te-84m-MoleculeNet-ligand_scaffold-SIDER-101",
inference_mode=True,
huggingface=True
)
# Get predictions
prediction = SmallMoleculeMultiViewModel.get_predictions(
example_smiles, ds, finetuned_model=finetuned_model_ds
)
print("Prediction:", prediction)
```
For more advanced usage, see our detailed examples at: https://github.com/BiomedSciAI/biomed-multi-view
## Citation
If you found our work useful, please consider giving a star to the repo and citing our paper:
```
@misc{suryanarayanan2024multiviewbiomedicalfoundationmodels,
title={Multi-view biomedical foundation models for molecule-target and property prediction},
author={Parthasarathy Suryanarayanan and Yunguang Qiu and Shreyans Sethi and Diwakar Mahajan and Hongyang Li and Yuxin Yang and Elif Eyigoz and Aldo Guzman Saenz and Daniel E. Platt and Timothy H. Rumbell and Kenney Ng and Sanjoy Dey and Myson Burch and Bum Chul Kwon and Pablo Meyer and Feixiong Cheng and Jianying Hu and Joseph A. Morrone},
year={2024},
eprint={2410.19704},
archivePrefix={arXiv},
primaryClass={q-bio.BM},
url={https://arxiv.org/abs/2410.19704},
}
```
|
rewicks/baseline_en-de_64k_ep37
|
rewicks
| 2024-11-01T17:30:22Z | 115 | 0 |
transformers
|
[
"transformers",
"safetensors",
"marian",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-11-01T17:28:34Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
GonzaloMG/marigold-e2e-ft-depth
|
GonzaloMG
| 2024-11-01T17:29:28Z | 1,233 | 6 |
diffusers
|
[
"diffusers",
"safetensors",
"depth",
"monocular depth estimation",
"in-the-wild",
"zero-shot",
"single-step",
"depth-estimation",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
depth-estimation
| 2024-09-13T10:16:15Z |
---
license: apache-2.0
pipeline_tag: depth-estimation
tags:
- depth
- monocular depth estimation
- in-the-wild
- zero-shot
- single-step
---
|
GonzaloMG/stable-diffusion-e2e-ft-depth
|
GonzaloMG
| 2024-11-01T17:27:35Z | 47 | 1 |
diffusers
|
[
"diffusers",
"safetensors",
"depth",
"monocular depth estimation",
"in-the-wild",
"zero-shot",
"single-step",
"depth-estimation",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
depth-estimation
| 2024-09-13T13:08:48Z |
---
license: apache-2.0
pipeline_tag: depth-estimation
tags:
- depth
- monocular depth estimation
- in-the-wild
- zero-shot
- single-step
---
|
mrTvister/ouad
|
mrTvister
| 2024-11-01T17:21:55Z | 67 | 2 |
diffusers
|
[
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"region:us"
] |
text-to-image
| 2024-11-01T17:19:37Z |
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: '-'
output:
url: images/An animated scene of Donald Trump stands yawnin....png
- text: '-'
output:
url: images/An animated scene Raccon with VR-glasses. A rac....png
- text: '-'
output:
url: images/An animated scene Wolverine and Deadpool in vil....png
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: An animated scene
---
# Once Upon a Dog style
<Gallery />
## Trigger words
You should use `An animated scene` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/mrTvister/ouad/tree/main) them in the Files & versions tab.
|
soumilj/xlm-roberta-base-finetuned-panx-it
|
soumilj
| 2024-11-01T17:19:44Z | 134 | 0 |
transformers
|
[
"transformers",
"safetensors",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2024-11-01T17:16:48Z |
---
library_name: transformers
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-it
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-it
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5194
- F1: 0.6203
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.1453 | 1.0 | 70 | 0.8324 | 0.3221 |
| 0.711 | 2.0 | 140 | 0.5905 | 0.5566 |
| 0.5599 | 3.0 | 210 | 0.5194 | 0.6203 |
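The F1 above is produced by the training script; for PAN-X-style NER it is conventionally an entity-level (seqeval-style) score over exact span matches rather than per-token accuracy. As a minimal illustration of that metric, with hypothetical `(label, start, end)` spans, micro F1 can be computed as:

```python
def entity_f1(true_entities, pred_entities):
    """Micro F1 over exact (label, start, end) entity matches."""
    true_set, pred_set = set(true_entities), set(pred_entities)
    tp = len(true_set & pred_set)  # exact matches only
    precision = tp / len(pred_set) if pred_set else 0.0
    recall = tp / len(true_set) if true_set else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Hypothetical spans: (label, start_token, end_token)
gold = [("LOC", 0, 1), ("PER", 4, 6), ("ORG", 9, 10)]
pred = [("LOC", 0, 1), ("PER", 4, 5)]  # one exact match, one boundary miss
print(entity_f1(gold, pred))  # ~0.4
```

Note that a boundary miss counts as both a false positive and a false negative under this scheme, which is why entity-level F1 is usually lower than token-level accuracy.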
### Framework versions
- Transformers 4.44.2
- Pytorch 2.5.0+cu121
- Tokenizers 0.19.1
|
soumilj/xlm-roberta-base-finetuned-panx-fr
|
soumilj
| 2024-11-01T17:16:42Z | 134 | 0 |
transformers
|
[
"transformers",
"safetensors",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2024-11-01T17:12:24Z |
---
library_name: transformers
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-fr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-fr
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4223
- F1: 0.7220
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.0427 | 1.0 | 191 | 0.6015 | 0.5975 |
| 0.5316 | 2.0 | 382 | 0.4603 | 0.6923 |
| 0.4019 | 3.0 | 573 | 0.4223 | 0.7220 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.5.0+cu121
- Tokenizers 0.19.1
|
ahmadmac/whisper-small-urdu
|
ahmadmac
| 2024-11-01T17:09:52Z | 85 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"dv",
"dataset:mozilla-foundation/common_voice_13_0",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2024-11-01T15:49:12Z |
---
library_name: transformers
language:
- dv
license: apache-2.0
base_model: openai/whisper-small
tags:
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_13_0
metrics:
- wer
model-index:
- name: Whisper Small Urdu - Muhammad Ahmad
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 13
type: mozilla-foundation/common_voice_13_0
config: ur
split: test
args: ur
metrics:
- name: Wer
type: wer
value: 52.76745626257935
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small Urdu - Muhammad Ahmad
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 13 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5270
- Wer Ortho: 56.1224
- Wer: 52.7675
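The Wer figures above are word error rates: substitutions, insertions, and deletions per reference word, with "Wer Ortho" computed on the unnormalized orthographic text. A minimal sketch of the metric (not the exact `evaluate`/jiwer implementation used during training):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word Error Rate via word-level Levenshtein distance."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edits needed to turn ref[:i] into hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,        # deletion
                           dp[i][j - 1] + 1,        # insertion
                           dp[i - 1][j - 1] + cost) # substitution
    return dp[-1][-1] / max(len(ref), 1)

# one substitution out of three reference words
print(round(wer("the cat sat", "the bat sat"), 2))  # 0.33
```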
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant_with_warmup
- lr_scheduler_warmup_steps: 50
- training_steps: 700
- mixed_precision_training: Native AMP
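The `constant_with_warmup` schedule ramps the learning rate linearly to the base value over the warmup steps and then holds it flat. A sketch of that schedule with the values above (illustrative, not the exact transformers scheduler code):

```python
def constant_with_warmup(step: int, base_lr: float = 1e-05, warmup_steps: int = 50) -> float:
    # linear ramp over the first `warmup_steps` updates, then constant
    if step < warmup_steps:
        return base_lr * step / warmup_steps
    return base_lr

print(constant_with_warmup(25))   # halfway through warmup
print(constant_with_warmup(500))  # after warmup: the base rate
```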
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer Ortho | Wer |
|:-------------:|:------:|:----:|:---------------:|:---------:|:-------:|
| 0.2885 | 1.0753 | 500 | 0.5270 | 56.1224 | 52.7675 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.5.0+cu121
- Datasets 3.1.0
- Tokenizers 0.19.1
**outlookAi/oRCFWjwIL8** | author: outlookAi | last modified: 2024-11-01T17:08:55Z | downloads: 5 | likes: 1 | library: diffusers | pipeline: text-to-image | created: 2024-11-01T16:31:10Z

Tags: diffusers, flux, lora, replicate, text-to-image, en, base_model:black-forest-labs/FLUX.1-dev, base_model:adapter:black-forest-labs/FLUX.1-dev, license:other, region:us
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: Manon
---
# Orcfwjwil8
<Gallery />
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `Manon` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('outlookAi/oRCFWjwIL8', weight_name='lora.safetensors')
image = pipeline('your prompt').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
**RichardErkhov/coderbojack_-_Qwen-Qwen1.5-1.8B-1717510013-gguf** | author: RichardErkhov | last modified: 2024-11-01T17:06:29Z | downloads: 14 | likes: 0 | library: null | pipeline: null | created: 2024-11-01T16:38:35Z

Tags: gguf, arxiv:1910.09700, endpoints_compatible, region:us, conversational
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Qwen-Qwen1.5-1.8B-1717510013 - GGUF
- Model creator: https://huggingface.co/coderbojack/
- Original model: https://huggingface.co/coderbojack/Qwen-Qwen1.5-1.8B-1717510013/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [Qwen-Qwen1.5-1.8B-1717510013.Q2_K.gguf](https://huggingface.co/RichardErkhov/coderbojack_-_Qwen-Qwen1.5-1.8B-1717510013-gguf/blob/main/Qwen-Qwen1.5-1.8B-1717510013.Q2_K.gguf) | Q2_K | 0.79GB |
| [Qwen-Qwen1.5-1.8B-1717510013.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/coderbojack_-_Qwen-Qwen1.5-1.8B-1717510013-gguf/blob/main/Qwen-Qwen1.5-1.8B-1717510013.Q3_K_S.gguf) | Q3_K_S | 0.89GB |
| [Qwen-Qwen1.5-1.8B-1717510013.Q3_K.gguf](https://huggingface.co/RichardErkhov/coderbojack_-_Qwen-Qwen1.5-1.8B-1717510013-gguf/blob/main/Qwen-Qwen1.5-1.8B-1717510013.Q3_K.gguf) | Q3_K | 0.95GB |
| [Qwen-Qwen1.5-1.8B-1717510013.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/coderbojack_-_Qwen-Qwen1.5-1.8B-1717510013-gguf/blob/main/Qwen-Qwen1.5-1.8B-1717510013.Q3_K_M.gguf) | Q3_K_M | 0.95GB |
| [Qwen-Qwen1.5-1.8B-1717510013.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/coderbojack_-_Qwen-Qwen1.5-1.8B-1717510013-gguf/blob/main/Qwen-Qwen1.5-1.8B-1717510013.Q3_K_L.gguf) | Q3_K_L | 0.98GB |
| [Qwen-Qwen1.5-1.8B-1717510013.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/coderbojack_-_Qwen-Qwen1.5-1.8B-1717510013-gguf/blob/main/Qwen-Qwen1.5-1.8B-1717510013.IQ4_XS.gguf) | IQ4_XS | 1.01GB |
| [Qwen-Qwen1.5-1.8B-1717510013.Q4_0.gguf](https://huggingface.co/RichardErkhov/coderbojack_-_Qwen-Qwen1.5-1.8B-1717510013-gguf/blob/main/Qwen-Qwen1.5-1.8B-1717510013.Q4_0.gguf) | Q4_0 | 1.04GB |
| [Qwen-Qwen1.5-1.8B-1717510013.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/coderbojack_-_Qwen-Qwen1.5-1.8B-1717510013-gguf/blob/main/Qwen-Qwen1.5-1.8B-1717510013.IQ4_NL.gguf) | IQ4_NL | 1.05GB |
| [Qwen-Qwen1.5-1.8B-1717510013.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/coderbojack_-_Qwen-Qwen1.5-1.8B-1717510013-gguf/blob/main/Qwen-Qwen1.5-1.8B-1717510013.Q4_K_S.gguf) | Q4_K_S | 1.08GB |
| [Qwen-Qwen1.5-1.8B-1717510013.Q4_K.gguf](https://huggingface.co/RichardErkhov/coderbojack_-_Qwen-Qwen1.5-1.8B-1717510013-gguf/blob/main/Qwen-Qwen1.5-1.8B-1717510013.Q4_K.gguf) | Q4_K | 1.13GB |
| [Qwen-Qwen1.5-1.8B-1717510013.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/coderbojack_-_Qwen-Qwen1.5-1.8B-1717510013-gguf/blob/main/Qwen-Qwen1.5-1.8B-1717510013.Q4_K_M.gguf) | Q4_K_M | 1.13GB |
| [Qwen-Qwen1.5-1.8B-1717510013.Q4_1.gguf](https://huggingface.co/RichardErkhov/coderbojack_-_Qwen-Qwen1.5-1.8B-1717510013-gguf/blob/main/Qwen-Qwen1.5-1.8B-1717510013.Q4_1.gguf) | Q4_1 | 1.13GB |
| [Qwen-Qwen1.5-1.8B-1717510013.Q5_0.gguf](https://huggingface.co/RichardErkhov/coderbojack_-_Qwen-Qwen1.5-1.8B-1717510013-gguf/blob/main/Qwen-Qwen1.5-1.8B-1717510013.Q5_0.gguf) | Q5_0 | 1.22GB |
| [Qwen-Qwen1.5-1.8B-1717510013.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/coderbojack_-_Qwen-Qwen1.5-1.8B-1717510013-gguf/blob/main/Qwen-Qwen1.5-1.8B-1717510013.Q5_K_S.gguf) | Q5_K_S | 1.24GB |
| [Qwen-Qwen1.5-1.8B-1717510013.Q5_K.gguf](https://huggingface.co/RichardErkhov/coderbojack_-_Qwen-Qwen1.5-1.8B-1717510013-gguf/blob/main/Qwen-Qwen1.5-1.8B-1717510013.Q5_K.gguf) | Q5_K | 1.28GB |
| [Qwen-Qwen1.5-1.8B-1717510013.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/coderbojack_-_Qwen-Qwen1.5-1.8B-1717510013-gguf/blob/main/Qwen-Qwen1.5-1.8B-1717510013.Q5_K_M.gguf) | Q5_K_M | 1.28GB |
| [Qwen-Qwen1.5-1.8B-1717510013.Q5_1.gguf](https://huggingface.co/RichardErkhov/coderbojack_-_Qwen-Qwen1.5-1.8B-1717510013-gguf/blob/main/Qwen-Qwen1.5-1.8B-1717510013.Q5_1.gguf) | Q5_1 | 1.31GB |
| [Qwen-Qwen1.5-1.8B-1717510013.Q6_K.gguf](https://huggingface.co/RichardErkhov/coderbojack_-_Qwen-Qwen1.5-1.8B-1717510013-gguf/blob/main/Qwen-Qwen1.5-1.8B-1717510013.Q6_K.gguf) | Q6_K | 1.47GB |
| [Qwen-Qwen1.5-1.8B-1717510013.Q8_0.gguf](https://huggingface.co/RichardErkhov/coderbojack_-_Qwen-Qwen1.5-1.8B-1717510013-gguf/blob/main/Qwen-Qwen1.5-1.8B-1717510013.Q8_0.gguf) | Q8_0 | 1.82GB |
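A rough way to compare the quants above is bits per weight: file size in bits divided by parameter count. The sketch below assumes roughly 1.8e9 parameters and decimal gigabytes, so treat the numbers as ballpark figures only:

```python
def bits_per_weight(file_size_gb: float, n_params: float) -> float:
    # total bits in the file divided by the parameter count (rough estimate)
    return file_size_gb * 8e9 / n_params

N_PARAMS = 1.8e9  # assumed parameter count for a 1.8B model
for name, size_gb in [("Q2_K", 0.79), ("Q4_K_M", 1.13), ("Q8_0", 1.82)]:
    print(f"{name}: ~{bits_per_weight(size_gb, N_PARAMS):.1f} bits/weight")
```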
Original model description:
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
**RichardErkhov/FreedomIntelligence_-_Apollo-1.8B-gguf** | author: RichardErkhov | last modified: 2024-11-01T17:06:28Z | downloads: 22 | likes: 0 | library: null | pipeline: null | created: 2024-11-01T16:38:35Z

Tags: gguf, arxiv:2403.03640, endpoints_compatible, region:us
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Apollo-1.8B - GGUF
- Model creator: https://huggingface.co/FreedomIntelligence/
- Original model: https://huggingface.co/FreedomIntelligence/Apollo-1.8B/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [Apollo-1.8B.Q2_K.gguf](https://huggingface.co/RichardErkhov/FreedomIntelligence_-_Apollo-1.8B-gguf/blob/main/Apollo-1.8B.Q2_K.gguf) | Q2_K | 0.78GB |
| [Apollo-1.8B.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/FreedomIntelligence_-_Apollo-1.8B-gguf/blob/main/Apollo-1.8B.Q3_K_S.gguf) | Q3_K_S | 0.89GB |
| [Apollo-1.8B.Q3_K.gguf](https://huggingface.co/RichardErkhov/FreedomIntelligence_-_Apollo-1.8B-gguf/blob/main/Apollo-1.8B.Q3_K.gguf) | Q3_K | 0.97GB |
| [Apollo-1.8B.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/FreedomIntelligence_-_Apollo-1.8B-gguf/blob/main/Apollo-1.8B.Q3_K_M.gguf) | Q3_K_M | 0.97GB |
| [Apollo-1.8B.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/FreedomIntelligence_-_Apollo-1.8B-gguf/blob/main/Apollo-1.8B.Q3_K_L.gguf) | Q3_K_L | 1.0GB |
| [Apollo-1.8B.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/FreedomIntelligence_-_Apollo-1.8B-gguf/blob/main/Apollo-1.8B.IQ4_XS.gguf) | IQ4_XS | 1.01GB |
| [Apollo-1.8B.Q4_0.gguf](https://huggingface.co/RichardErkhov/FreedomIntelligence_-_Apollo-1.8B-gguf/blob/main/Apollo-1.8B.Q4_0.gguf) | Q4_0 | 1.04GB |
| [Apollo-1.8B.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/FreedomIntelligence_-_Apollo-1.8B-gguf/blob/main/Apollo-1.8B.IQ4_NL.gguf) | IQ4_NL | 1.05GB |
| [Apollo-1.8B.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/FreedomIntelligence_-_Apollo-1.8B-gguf/blob/main/Apollo-1.8B.Q4_K_S.gguf) | Q4_K_S | 1.08GB |
| [Apollo-1.8B.Q4_K.gguf](https://huggingface.co/RichardErkhov/FreedomIntelligence_-_Apollo-1.8B-gguf/blob/main/Apollo-1.8B.Q4_K.gguf) | Q4_K | 1.16GB |
| [Apollo-1.8B.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/FreedomIntelligence_-_Apollo-1.8B-gguf/blob/main/Apollo-1.8B.Q4_K_M.gguf) | Q4_K_M | 1.16GB |
| [Apollo-1.8B.Q4_1.gguf](https://huggingface.co/RichardErkhov/FreedomIntelligence_-_Apollo-1.8B-gguf/blob/main/Apollo-1.8B.Q4_1.gguf) | Q4_1 | 1.13GB |
| [Apollo-1.8B.Q5_0.gguf](https://huggingface.co/RichardErkhov/FreedomIntelligence_-_Apollo-1.8B-gguf/blob/main/Apollo-1.8B.Q5_0.gguf) | Q5_0 | 1.22GB |
| [Apollo-1.8B.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/FreedomIntelligence_-_Apollo-1.8B-gguf/blob/main/Apollo-1.8B.Q5_K_S.gguf) | Q5_K_S | 1.24GB |
| [Apollo-1.8B.Q5_K.gguf](https://huggingface.co/RichardErkhov/FreedomIntelligence_-_Apollo-1.8B-gguf/blob/main/Apollo-1.8B.Q5_K.gguf) | Q5_K | 1.31GB |
| [Apollo-1.8B.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/FreedomIntelligence_-_Apollo-1.8B-gguf/blob/main/Apollo-1.8B.Q5_K_M.gguf) | Q5_K_M | 1.31GB |
| [Apollo-1.8B.Q5_1.gguf](https://huggingface.co/RichardErkhov/FreedomIntelligence_-_Apollo-1.8B-gguf/blob/main/Apollo-1.8B.Q5_1.gguf) | Q5_1 | 1.31GB |
| [Apollo-1.8B.Q6_K.gguf](https://huggingface.co/RichardErkhov/FreedomIntelligence_-_Apollo-1.8B-gguf/blob/main/Apollo-1.8B.Q6_K.gguf) | Q6_K | 1.47GB |
| [Apollo-1.8B.Q8_0.gguf](https://huggingface.co/RichardErkhov/FreedomIntelligence_-_Apollo-1.8B-gguf/blob/main/Apollo-1.8B.Q8_0.gguf) | Q8_0 | 1.82GB |
Original model description:
---
license: apache-2.0
---
# Multilingual Medicine: Model, Dataset, Benchmark, Code
Covering English, Chinese, French, Hindi, Spanish, and Arabic so far
<p align="center">
👨🏻‍💻 <a href="https://github.com/FreedomIntelligence/Apollo" target="_blank">Github</a> • 📃 <a href="https://arxiv.org/abs/2403.03640" target="_blank">Paper</a> • 🌐 <a href="https://apollo.llmzoo.com/" target="_blank">Demo</a> • 🤗 <a href="https://huggingface.co/datasets/FreedomIntelligence/ApolloCorpus" target="_blank">ApolloCorpus</a> • 🤗 <a href="https://huggingface.co/datasets/FreedomIntelligence/XMedbench" target="_blank">XMedBench</a>
<br> <a href="./README_zh.md"> 中文 </a> | <a href="./README.md"> English
</p>

## Update
* **[2024.03.07]** [Paper](https://arxiv.org/abs/2403.03640) released.
* **[2024.02.12]** <a href="https://huggingface.co/datasets/FreedomIntelligence/ApolloCorpus" target="_blank">ApolloCorpus</a> and <a href="https://huggingface.co/datasets/FreedomIntelligence/XMedbench" target="_blank">XMedBench</a> are published!
* **[2024.01.23]** Apollo repo is published!
## Results
🤗 <a href="https://huggingface.co/FreedomIntelligence/Apollo-0.5B" target="_blank">Apollo-0.5B</a> • 🤗 <a href="https://huggingface.co/FreedomIntelligence/Apollo-1.8B" target="_blank">Apollo-1.8B</a> • 🤗 <a href="https://huggingface.co/FreedomIntelligence/Apollo-2B" target="_blank">Apollo-2B</a> • 🤗 <a href="https://huggingface.co/FreedomIntelligence/Apollo-6B" target="_blank">Apollo-6B</a> • 🤗 <a href="https://huggingface.co/FreedomIntelligence/Apollo-7B" target="_blank">Apollo-7B</a>
🤗 <a href="https://huggingface.co/FreedomIntelligence/Apollo-0.5B-GGUF" target="_blank">Apollo-0.5B-GGUF</a> • 🤗 <a href="https://huggingface.co/FreedomIntelligence/Apollo-2B-GGUF" target="_blank">Apollo-2B-GGUF</a> • 🤗 <a href="https://huggingface.co/FreedomIntelligence/Apollo-6B-GGUF" target="_blank">Apollo-6B-GGUF</a> • 🤗 <a href="https://huggingface.co/FreedomIntelligence/Apollo-7B-GGUF" target="_blank">Apollo-7B-GGUF</a>

## Usage Format
`User:{query}\nAssistant:{response}<|endoftext|>`
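In code, the format above can be assembled like this (a minimal sketch; `build_prompt` is an illustrative helper, not part of any released API):

```python
from typing import Optional

def build_prompt(query: str, response: Optional[str] = None) -> str:
    # Apollo chat format: "User:{query}\nAssistant:{response}<|endoftext|>"
    prompt = f"User:{query}\nAssistant:"
    if response is not None:
        prompt += f"{response}<|endoftext|>"
    return prompt

print(build_prompt("What are common causes of anemia?"))
```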
## Dataset & Evaluation
- Dataset
π€ <a href="https://huggingface.co/datasets/FreedomIntelligence/ApolloCorpus" target="_blank">ApolloCorpus</a>
<details><summary>Click to expand</summary>

- [Zip File](https://huggingface.co/datasets/FreedomIntelligence/ApolloCorpus/blob/main/ApolloCorpus.zip)
- [Data category](https://huggingface.co/datasets/FreedomIntelligence/ApolloCorpus/tree/main/train)
- Pretrain:
- data item:
- json_name: {data_source}_{language}_{data_type}.json
- data_type: medicalBook, medicalGuideline, medicalPaper, medicalWeb(from online forum), medicalWiki
- language: en(English), zh(Chinese), es(Spanish), fr(French), hi(Hindi)
- data_type: qa(generated qa from text)
- data_type==text: list of string
```
[
"string1",
"string2",
...
]
```
- data_type==qa: list of qa pairs(list of string)
```
[
[
"q1",
"a1",
"q2",
"a2",
...
],
...
]
```
- SFT:
- json_name: {data_source}_{language}.json
- data_type: code, general, math, medicalExam, medicalPatient
- data item: list of qa pairs(list of string)
```
[
[
"q1",
"a1",
"q2",
"a2",
...
],
...
]
```
</details>
- Evaluation
π€ <a href="https://huggingface.co/datasets/FreedomIntelligence/XMedbench" target="_blank">XMedBench</a>
<details><summary>Click to expand</summary>
- EN:
- [MedQA-USMLE](https://huggingface.co/datasets/GBaker/MedQA-USMLE-4-options)
- [MedMCQA](https://huggingface.co/datasets/medmcqa/viewer/default/test)
- [PubMedQA](https://huggingface.co/datasets/pubmed_qa): Because the results fluctuated too much, they were not used in the paper.
- [MMLU-Medical](https://huggingface.co/datasets/cais/mmlu)
- Clinical knowledge, Medical genetics, Anatomy, Professional medicine, College biology, College medicine
- ZH:
- [MedQA-MCMLE](https://huggingface.co/datasets/bigbio/med_qa/viewer/med_qa_zh_4options_bigbio_qa/test)
- [CMB-single](https://huggingface.co/datasets/FreedomIntelligence/CMB): Not used in the paper
- Randomly sample 2,000 multiple-choice questions with a single answer.
- [CMMLU-Medical](https://huggingface.co/datasets/haonan-li/cmmlu)
- Anatomy, Clinical_knowledge, College_medicine, Genetics, Nutrition, Traditional_chinese_medicine, Virology
- [CExam](https://github.com/williamliujl/CMExam): Not used in the paper
- Randomly sample 2,000 multiple-choice questions
- ES: [Head_qa](https://huggingface.co/datasets/head_qa)
- FR: [Frenchmedmcqa](https://github.com/qanastek/FrenchMedMCQA)
- HI: [MMLU_HI](https://huggingface.co/datasets/FreedomIntelligence/MMLU_Hindi)
- Clinical knowledge, Medical genetics, Anatomy, Professional medicine, College biology, College medicine
- AR: [MMLU_Ara](https://huggingface.co/datasets/FreedomIntelligence/MMLU_Arabic)
- Clinical knowledge, Medical genetics, Anatomy, Professional medicine, College biology, College medicine
</details>
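The SFT files described above store each dialog as a flat alternating list of question/answer strings. A small sketch for turning that layout into (question, answer) pairs (`flatten_qa` is an illustrative helper, not shipped with the dataset):

```python
def flatten_qa(dialog):
    """Turn ["q1", "a1", "q2", "a2", ...] into [("q1", "a1"), ("q2", "a2"), ...]."""
    if len(dialog) % 2 != 0:
        raise ValueError("expected alternating question/answer strings")
    return list(zip(dialog[0::2], dialog[1::2]))

print(flatten_qa(["q1", "a1", "q2", "a2"]))  # [('q1', 'a1'), ('q2', 'a2')]
```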
## Results reproduction
<details><summary>Click to expand</summary>
**Waiting for Update**
</details>
## Citation
Please use the following citation if you intend to use our dataset for training or evaluation:
```
@misc{wang2024apollo,
title={Apollo: Lightweight Multilingual Medical LLMs towards Democratizing Medical AI to 6B People},
author={Xidong Wang and Nuo Chen and Junyin Chen and Yan Hu and Yidong Wang and Xiangbo Wu and Anningzhe Gao and Xiang Wan and Haizhou Li and Benyou Wang},
year={2024},
eprint={2403.03640},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
**MaziyarPanahi/Llama-3.2-3B-Apex-GGUF** | author: MaziyarPanahi | last modified: 2024-11-01T17:05:07Z | downloads: 100 | likes: 3 | library: null | pipeline: text-generation | created: 2024-11-01T16:54:00Z

Tags: gguf, mistral, quantized, 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, 8-bit, GGUF, text-generation, base_model:bunnycore/Llama-3.2-3B-Apex, base_model:quantized:bunnycore/Llama-3.2-3B-Apex, region:us, conversational
---
tags:
- quantized
- 2-bit
- 3-bit
- 4-bit
- 5-bit
- 6-bit
- 8-bit
- GGUF
- text-generation
- text-generation
model_name: Llama-3.2-3B-Apex-GGUF
base_model: bunnycore/Llama-3.2-3B-Apex
inference: false
model_creator: bunnycore
pipeline_tag: text-generation
quantized_by: MaziyarPanahi
---
# [MaziyarPanahi/Llama-3.2-3B-Apex-GGUF](https://huggingface.co/MaziyarPanahi/Llama-3.2-3B-Apex-GGUF)
- Model creator: [bunnycore](https://huggingface.co/bunnycore)
- Original model: [bunnycore/Llama-3.2-3B-Apex](https://huggingface.co/bunnycore/Llama-3.2-3B-Apex)
## Description
[MaziyarPanahi/Llama-3.2-3B-Apex-GGUF](https://huggingface.co/MaziyarPanahi/Llama-3.2-3B-Apex-GGUF) contains GGUF format model files for [bunnycore/Llama-3.2-3B-Apex](https://huggingface.co/bunnycore/Llama-3.2-3B-Apex).
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for storytelling.
* [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.
## Special thanks
🙏 Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the whole team working on [llama.cpp](https://github.com/ggerganov/llama.cpp/) for making all of this possible.
**bunnycore/LLama-3.2-1B-General-lora_model-F16-GGUF** | author: bunnycore | last modified: 2024-11-01T17:04:59Z | downloads: 42 | likes: 1 | library: transformers | pipeline: null | created: 2024-11-01T17:04:57Z

Tags: transformers, gguf, text-generation-inference, unsloth, llama, trl, llama-cpp, gguf-my-lora, en, base_model:bunnycore/LLama-3.2-1B-General-lora_model, base_model:quantized:bunnycore/LLama-3.2-1B-General-lora_model, license:apache-2.0, endpoints_compatible, region:us
---
base_model: bunnycore/LLama-3.2-1B-General-lora_model
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- llama-cpp
- gguf-my-lora
---
# bunnycore/LLama-3.2-1B-General-lora_model-F16-GGUF
This LoRA adapter was converted to GGUF format from [`bunnycore/LLama-3.2-1B-General-lora_model`](https://huggingface.co/bunnycore/LLama-3.2-1B-General-lora_model) via the ggml.ai's [GGUF-my-lora](https://huggingface.co/spaces/ggml-org/gguf-my-lora) space.
Refer to the [original adapter repository](https://huggingface.co/bunnycore/LLama-3.2-1B-General-lora_model) for more details.
## Use with llama.cpp
```bash
# with cli
llama-cli -m base_model.gguf --lora LLama-3.2-1B-General-lora_model-f16.gguf (...other args)
# with server
llama-server -m base_model.gguf --lora LLama-3.2-1B-General-lora_model-f16.gguf (...other args)
```
To know more about LoRA usage with llama.cpp server, refer to the [llama.cpp server documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/server/README.md).
**kh4dien/gemma-pol-edu** | author: kh4dien | last modified: 2024-11-01T17:02:50Z | downloads: 91 | likes: 0 | library: transformers | pipeline: text-generation | created: 2024-11-01T16:55:39Z

Tags: transformers, safetensors, gemma2, text-generation, arxiv:1910.09700, autotrain_compatible, text-generation-inference, endpoints_compatible, region:us
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
**Xu-Ouyang/pythia-12b-deduped-int4-step32-GPTQ-wikitext2** | author: Xu-Ouyang | last modified: 2024-11-01T16:58:09Z | downloads: 75 | likes: 0 | library: transformers | pipeline: text-generation | created: 2024-11-01T16:56:24Z

Tags: transformers, safetensors, gpt_neox, text-generation, arxiv:1910.09700, autotrain_compatible, text-generation-inference, endpoints_compatible, 4-bit, gptq, region:us
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
RichardErkhov/abelsr1710_-_qwen2.5_1.5b_4000ocr_600kosmos_spanish-gguf
|
RichardErkhov
| 2024-11-01T16:57:28Z | 7 | 0 | null |
[
"gguf",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-11-01T16:32:17Z |
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
qwen2.5_1.5b_4000ocr_600kosmos_spanish - GGUF
- Model creator: https://huggingface.co/abelsr1710/
- Original model: https://huggingface.co/abelsr1710/qwen2.5_1.5b_4000ocr_600kosmos_spanish/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [qwen2.5_1.5b_4000ocr_600kosmos_spanish.Q2_K.gguf](https://huggingface.co/RichardErkhov/abelsr1710_-_qwen2.5_1.5b_4000ocr_600kosmos_spanish-gguf/blob/main/qwen2.5_1.5b_4000ocr_600kosmos_spanish.Q2_K.gguf) | Q2_K | 0.63GB |
| [qwen2.5_1.5b_4000ocr_600kosmos_spanish.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/abelsr1710_-_qwen2.5_1.5b_4000ocr_600kosmos_spanish-gguf/blob/main/qwen2.5_1.5b_4000ocr_600kosmos_spanish.Q3_K_S.gguf) | Q3_K_S | 0.71GB |
| [qwen2.5_1.5b_4000ocr_600kosmos_spanish.Q3_K.gguf](https://huggingface.co/RichardErkhov/abelsr1710_-_qwen2.5_1.5b_4000ocr_600kosmos_spanish-gguf/blob/main/qwen2.5_1.5b_4000ocr_600kosmos_spanish.Q3_K.gguf) | Q3_K | 0.77GB |
| [qwen2.5_1.5b_4000ocr_600kosmos_spanish.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/abelsr1710_-_qwen2.5_1.5b_4000ocr_600kosmos_spanish-gguf/blob/main/qwen2.5_1.5b_4000ocr_600kosmos_spanish.Q3_K_M.gguf) | Q3_K_M | 0.77GB |
| [qwen2.5_1.5b_4000ocr_600kosmos_spanish.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/abelsr1710_-_qwen2.5_1.5b_4000ocr_600kosmos_spanish-gguf/blob/main/qwen2.5_1.5b_4000ocr_600kosmos_spanish.Q3_K_L.gguf) | Q3_K_L | 0.82GB |
| [qwen2.5_1.5b_4000ocr_600kosmos_spanish.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/abelsr1710_-_qwen2.5_1.5b_4000ocr_600kosmos_spanish-gguf/blob/main/qwen2.5_1.5b_4000ocr_600kosmos_spanish.IQ4_XS.gguf) | IQ4_XS | 0.84GB |
| [qwen2.5_1.5b_4000ocr_600kosmos_spanish.Q4_0.gguf](https://huggingface.co/RichardErkhov/abelsr1710_-_qwen2.5_1.5b_4000ocr_600kosmos_spanish-gguf/blob/main/qwen2.5_1.5b_4000ocr_600kosmos_spanish.Q4_0.gguf) | Q4_0 | 0.87GB |
| [qwen2.5_1.5b_4000ocr_600kosmos_spanish.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/abelsr1710_-_qwen2.5_1.5b_4000ocr_600kosmos_spanish-gguf/blob/main/qwen2.5_1.5b_4000ocr_600kosmos_spanish.IQ4_NL.gguf) | IQ4_NL | 0.88GB |
| [qwen2.5_1.5b_4000ocr_600kosmos_spanish.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/abelsr1710_-_qwen2.5_1.5b_4000ocr_600kosmos_spanish-gguf/blob/main/qwen2.5_1.5b_4000ocr_600kosmos_spanish.Q4_K_S.gguf) | Q4_K_S | 0.88GB |
| [qwen2.5_1.5b_4000ocr_600kosmos_spanish.Q4_K.gguf](https://huggingface.co/RichardErkhov/abelsr1710_-_qwen2.5_1.5b_4000ocr_600kosmos_spanish-gguf/blob/main/qwen2.5_1.5b_4000ocr_600kosmos_spanish.Q4_K.gguf) | Q4_K | 0.92GB |
| [qwen2.5_1.5b_4000ocr_600kosmos_spanish.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/abelsr1710_-_qwen2.5_1.5b_4000ocr_600kosmos_spanish-gguf/blob/main/qwen2.5_1.5b_4000ocr_600kosmos_spanish.Q4_K_M.gguf) | Q4_K_M | 0.92GB |
| [qwen2.5_1.5b_4000ocr_600kosmos_spanish.Q4_1.gguf](https://huggingface.co/RichardErkhov/abelsr1710_-_qwen2.5_1.5b_4000ocr_600kosmos_spanish-gguf/blob/main/qwen2.5_1.5b_4000ocr_600kosmos_spanish.Q4_1.gguf) | Q4_1 | 0.95GB |
| [qwen2.5_1.5b_4000ocr_600kosmos_spanish.Q5_0.gguf](https://huggingface.co/RichardErkhov/abelsr1710_-_qwen2.5_1.5b_4000ocr_600kosmos_spanish-gguf/blob/main/qwen2.5_1.5b_4000ocr_600kosmos_spanish.Q5_0.gguf) | Q5_0 | 1.02GB |
| [qwen2.5_1.5b_4000ocr_600kosmos_spanish.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/abelsr1710_-_qwen2.5_1.5b_4000ocr_600kosmos_spanish-gguf/blob/main/qwen2.5_1.5b_4000ocr_600kosmos_spanish.Q5_K_S.gguf) | Q5_K_S | 1.02GB |
| [qwen2.5_1.5b_4000ocr_600kosmos_spanish.Q5_K.gguf](https://huggingface.co/RichardErkhov/abelsr1710_-_qwen2.5_1.5b_4000ocr_600kosmos_spanish-gguf/blob/main/qwen2.5_1.5b_4000ocr_600kosmos_spanish.Q5_K.gguf) | Q5_K | 1.05GB |
| [qwen2.5_1.5b_4000ocr_600kosmos_spanish.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/abelsr1710_-_qwen2.5_1.5b_4000ocr_600kosmos_spanish-gguf/blob/main/qwen2.5_1.5b_4000ocr_600kosmos_spanish.Q5_K_M.gguf) | Q5_K_M | 1.05GB |
| [qwen2.5_1.5b_4000ocr_600kosmos_spanish.Q5_1.gguf](https://huggingface.co/RichardErkhov/abelsr1710_-_qwen2.5_1.5b_4000ocr_600kosmos_spanish-gguf/blob/main/qwen2.5_1.5b_4000ocr_600kosmos_spanish.Q5_1.gguf) | Q5_1 | 1.1GB |
| [qwen2.5_1.5b_4000ocr_600kosmos_spanish.Q6_K.gguf](https://huggingface.co/RichardErkhov/abelsr1710_-_qwen2.5_1.5b_4000ocr_600kosmos_spanish-gguf/blob/main/qwen2.5_1.5b_4000ocr_600kosmos_spanish.Q6_K.gguf) | Q6_K | 1.19GB |
| [qwen2.5_1.5b_4000ocr_600kosmos_spanish.Q8_0.gguf](https://huggingface.co/RichardErkhov/abelsr1710_-_qwen2.5_1.5b_4000ocr_600kosmos_spanish-gguf/blob/main/qwen2.5_1.5b_4000ocr_600kosmos_spanish.Q8_0.gguf) | Q8_0 | 1.53GB |
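As a rough sanity check on the table above (an illustration, not part of the original card): file size divided by parameter count approximates bits per weight. The parameter count used here is an assumption inferred from "Qwen2.5-1.5B" in the base model name.

```python
# Rough bits-per-weight estimate for a quantized GGUF file.
# Assumption: ~1.54B parameters, inferred from "Qwen2.5-1.5B" in the base model name.
N_PARAMS = 1.54e9

def bits_per_weight(size_gb: float, n_params: float = N_PARAMS) -> float:
    """Approximate bits per weight from a file size in gigabytes."""
    return size_gb * 1e9 * 8 / n_params

# Q4_K_M is listed at 0.92 GB -> roughly 4.8 bits per weight, consistent with a
# 4-bit quant plus metadata and a few higher-precision tensors.
print(round(bits_per_weight(0.92), 1))
```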
Original model description:
---
base_model: unsloth/Qwen2.5-1.5B-Instruct
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
- sft
---
# Uploaded model
- **Developed by:** abelsr1710
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Qwen2.5-1.5B-Instruct
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
mradermacher/Munin-NeuralBeagle-NorskGPT-GGUF
|
mradermacher
| 2024-11-01T16:54:09Z | 32 | 0 |
transformers
|
[
"transformers",
"gguf",
"merge",
"mergekit",
"lazymergekit",
"bineric/NorskGPT-Mistral-7b",
"en",
"base_model:birgermoell/Munin-NeuralBeagle-NorskGPT",
"base_model:quantized:birgermoell/Munin-NeuralBeagle-NorskGPT",
"endpoints_compatible",
"region:us"
] | null | 2024-10-31T18:15:48Z |
---
base_model: birgermoell/Munin-NeuralBeagle-NorskGPT
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- merge
- mergekit
- lazymergekit
- bineric/NorskGPT-Mistral-7b
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/birgermoell/Munin-NeuralBeagle-NorskGPT
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Munin-NeuralBeagle-NorskGPT-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
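For the older split-file style (e.g. `model.gguf-split-a`, `-b`), the parts are simply byte-concatenated in order. A minimal Python sketch with stand-in filenames; note that newer llama.cpp splits (`*-00001-of-0000N.gguf`) should instead be merged with the `llama-gguf-split --merge` tool, not byte-concatenated:

```python
# Concatenate old-style split GGUF parts into a single file.
import shutil
from pathlib import Path

def concatenate_parts(parts: list[str], out_path: str) -> None:
    with open(out_path, "wb") as out:
        for part in parts:
            with open(part, "rb") as f:
                shutil.copyfileobj(f, out)

# Demo with stand-in files (illustrative names, not real GGUF content):
Path("part-a").write_bytes(b"GGUF-header...")
Path("part-b").write_bytes(b"...tensor-data")
concatenate_parts(["part-a", "part-b"], "merged.gguf")
print(Path("merged.gguf").read_bytes() == b"GGUF-header......tensor-data")
```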
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Munin-NeuralBeagle-NorskGPT-GGUF/resolve/main/Munin-NeuralBeagle-NorskGPT.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/Munin-NeuralBeagle-NorskGPT-GGUF/resolve/main/Munin-NeuralBeagle-NorskGPT.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Munin-NeuralBeagle-NorskGPT-GGUF/resolve/main/Munin-NeuralBeagle-NorskGPT.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Munin-NeuralBeagle-NorskGPT-GGUF/resolve/main/Munin-NeuralBeagle-NorskGPT.Q3_K_L.gguf) | Q3_K_L | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Munin-NeuralBeagle-NorskGPT-GGUF/resolve/main/Munin-NeuralBeagle-NorskGPT.IQ4_XS.gguf) | IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/Munin-NeuralBeagle-NorskGPT-GGUF/resolve/main/Munin-NeuralBeagle-NorskGPT.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Munin-NeuralBeagle-NorskGPT-GGUF/resolve/main/Munin-NeuralBeagle-NorskGPT.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Munin-NeuralBeagle-NorskGPT-GGUF/resolve/main/Munin-NeuralBeagle-NorskGPT.Q5_K_S.gguf) | Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/Munin-NeuralBeagle-NorskGPT-GGUF/resolve/main/Munin-NeuralBeagle-NorskGPT.Q5_K_M.gguf) | Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/Munin-NeuralBeagle-NorskGPT-GGUF/resolve/main/Munin-NeuralBeagle-NorskGPT.Q6_K.gguf) | Q6_K | 6.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Munin-NeuralBeagle-NorskGPT-GGUF/resolve/main/Munin-NeuralBeagle-NorskGPT.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Munin-NeuralBeagle-NorskGPT-GGUF/resolve/main/Munin-NeuralBeagle-NorskGPT.f16.gguf) | f16 | 14.6 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
bcjeong/krx_Meta-Llama-3.1-8B_001
|
bcjeong
| 2024-11-01T16:54:05Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"unsloth",
"trl",
"sft",
"krx",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-10-18T09:04:39Z |
---
library_name: transformers
tags:
- unsloth
- trl
- sft
- krx
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
abhishek/autotrain-smollm2-135m-finetune-guanaco
|
abhishek
| 2024-11-01T16:48:26Z | 141 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"llama",
"text-generation",
"autotrain",
"text-generation-inference",
"peft",
"conversational",
"dataset:timdettmers/openassistant-guanaco",
"base_model:HuggingFaceTB/SmolLM2-135M-Instruct",
"base_model:finetune:HuggingFaceTB/SmolLM2-135M-Instruct",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-11-01T16:31:17Z |
---
tags:
- autotrain
- text-generation-inference
- text-generation
- peft
library_name: transformers
base_model: HuggingFaceTB/SmolLM2-135M-Instruct
widget:
- messages:
- role: user
content: What is your favorite condiment?
license: other
datasets:
- timdettmers/openassistant-guanaco
---
# Model Trained Using AutoTrain
This model was trained using AutoTrain. For more information, please visit [AutoTrain](https://hf.co/docs/autotrain).
# Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_path = "PATH_TO_THIS_REPO"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
model_path,
device_map="auto",
torch_dtype='auto'
).eval()
# Prompt content: "hi"
messages = [
{"role": "user", "content": "hi"}
]
input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors='pt')
output_ids = model.generate(input_ids.to(model.device))
response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True)
# Model response: "Hello! How can I assist you today?"
print(response)
```
|
Prisma-Multimodal/sparse-autoencoder-clip-b-32-sae-vanilla-x64-layer-0-hook_mlp_out-l1-1e-05
|
Prisma-Multimodal
| 2024-11-01T16:43:32Z | 18 | 0 |
torch
|
[
"torch",
"clip",
"vision",
"transformers",
"interpretability",
"sparse autoencoder",
"sae",
"mechanistic interpretability",
"feature-extraction",
"en",
"license:apache-2.0",
"region:us"
] |
feature-extraction
| 2024-11-01T16:43:23Z |
---
language: en
tags:
- clip
- vision
- transformers
- interpretability
- sparse autoencoder
- sae
- mechanistic interpretability
license: apache-2.0
library_name: torch
pipeline_tag: feature-extraction
metrics:
- type: explained_variance
value: 98.7
pretty_name: Explained Variance %
range:
min: 0
max: 100
- type: l0
value: 598.234
pretty_name: L0
---
# CLIP-B-32 Sparse Autoencoder x64 vanilla - L1:1e-05


### Training Details
- Base Model: CLIP-ViT-B-32 (LAION DataComp.XL-s13B-b90K)
- Layer: 0
- Component: hook_mlp_out
### Model Architecture
- Input Dimension: 768
- SAE Dimension: 49,152
- Expansion Factor: x64 (vanilla architecture)
- Activation Function: ReLU
- Initialization: encoder_transpose_decoder
- Context Size: 50 tokens
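The "vanilla" architecture above is a single-hidden-layer ReLU autoencoder. A minimal NumPy sketch of the forward pass under that assumption (shown with toy dimensions; the card's actual sizes are 768 → 49,152):

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_sae = 8, 32   # toy sizes; the card uses d_in=768, d_sae=49_152 (x64 expansion)

# encoder_transpose_decoder initialization: decoder starts as the encoder's transpose
W_enc = rng.normal(scale=0.1, size=(d_in, d_sae))
W_dec = W_enc.T.copy()
b_enc = np.zeros(d_sae)
b_dec = np.zeros(d_in)

def sae_forward(x):
    """Vanilla SAE: ReLU encoder, linear decoder (one common convention)."""
    acts = np.maximum(0.0, (x - b_dec) @ W_enc + b_enc)  # sparse feature activations
    recon = acts @ W_dec + b_dec                          # reconstruction of the input
    return acts, recon

x = rng.normal(size=(4, d_in))   # a batch of 4 activation vectors
acts, recon = sae_forward(x)
print(acts.shape, recon.shape)   # (4, 32) (4, 8)
```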
### Performance Metrics
- L1 Coefficient: 1e-05
- L0 Sparsity: 598.2344
- Explained Variance: 0.9874 (98.74%)
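The two metrics above can be computed directly from SAE activations and reconstructions. A sketch of one common convention (the training code's exact definitions may differ):

```python
import numpy as np

def l0_sparsity(acts: np.ndarray) -> float:
    """Mean number of nonzero features per example."""
    return float((acts != 0).sum(axis=-1).mean())

def explained_variance(x: np.ndarray, recon: np.ndarray) -> float:
    """1 - residual variance / input variance, pooled over the batch."""
    resid = ((x - recon) ** 2).sum()
    total = ((x - x.mean(axis=0)) ** 2).sum()
    return float(1.0 - resid / total)

x = np.array([[1.0, 2.0], [3.0, 4.0]])
acts = np.array([[0.0, 0.5, 0.0], [1.0, 0.0, 2.0]])
print(l0_sparsity(acts))          # 1.5 nonzero features per example on average
print(explained_variance(x, x))   # 1.0 for a perfect reconstruction
```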
### Training Configuration
- Learning Rate: 0.0004
- LR Scheduler: Cosine Annealing with Warmup (200 steps)
- Epochs: 10
- Gradient Clipping: 1.0
- Device: NVIDIA Quadro RTX 8000
**Experiment Tracking:**
- Weights & Biases Run ID: nqym4pez
- Full experiment details: https://wandb.ai/perceptual-alignment/clip/runs/nqym4pez/overview
- Git Commit: e22dd02726b74a054a779a4805b96059d83244aa
## Citation
```bibtex
@misc{2024josephsparseautoencoders,
title={Sparse Autoencoders for CLIP-ViT-B-32},
author={Joseph, Sonia},
year={2024},
publisher={Prisma-Multimodal},
url={https://huggingface.co/Prisma-Multimodal},
note={Layer 0, hook_mlp_out, Run ID: nqym4pez}
}
```
|
Prisma-Multimodal/sparse-autoencoder-clip-b-32-sae-vanilla-x64-layer-0-hook_resid_post-l1-1e-05
|
Prisma-Multimodal
| 2024-11-01T16:43:12Z | 11 | 0 |
torch
|
[
"torch",
"clip",
"vision",
"transformers",
"interpretability",
"sparse autoencoder",
"sae",
"mechanistic interpretability",
"feature-extraction",
"en",
"license:apache-2.0",
"region:us"
] |
feature-extraction
| 2024-11-01T16:43:00Z |
---
language: en
tags:
- clip
- vision
- transformers
- interpretability
- sparse autoencoder
- sae
- mechanistic interpretability
license: apache-2.0
library_name: torch
pipeline_tag: feature-extraction
metrics:
- type: explained_variance
value: 98.7
pretty_name: Explained Variance %
range:
min: 0
max: 100
- type: l0
value: 1090.145
pretty_name: L0
---
# CLIP-B-32 Sparse Autoencoder x64 vanilla - L1:1e-05


### Training Details
- Base Model: CLIP-ViT-B-32 (LAION DataComp.XL-s13B-b90K)
- Layer: 0
- Component: hook_resid_post
### Model Architecture
- Input Dimension: 768
- SAE Dimension: 49,152
- Expansion Factor: x64 (vanilla architecture)
- Activation Function: ReLU
- Initialization: encoder_transpose_decoder
- Context Size: 50 tokens
### Performance Metrics
- L1 Coefficient: 1e-05
- L0 Sparsity: 1090.1449
- Explained Variance: 0.9867 (98.67%)
### Training Configuration
- Learning Rate: 0.0004
- LR Scheduler: Cosine Annealing with Warmup (200 steps)
- Epochs: 10
- Gradient Clipping: 1.0
- Device: NVIDIA Quadro RTX 8000
**Experiment Tracking:**
- Weights & Biases Run ID: wwmkx7bq
- Full experiment details: https://wandb.ai/perceptual-alignment/clip/runs/wwmkx7bq/overview
- Git Commit: e22dd02726b74a054a779a4805b96059d83244aa
## Citation
```bibtex
@misc{2024josephsparseautoencoders,
title={Sparse Autoencoders for CLIP-ViT-B-32},
author={Joseph, Sonia},
year={2024},
publisher={Prisma-Multimodal},
url={https://huggingface.co/Prisma-Multimodal},
note={Layer 0, hook_resid_post, Run ID: wwmkx7bq}
}
```
|
Prisma-Multimodal/sparse-autoencoder-clip-b-32-sae-vanilla-x64-layer-0-hook_mlp_out-l1-5e-05
|
Prisma-Multimodal
| 2024-11-01T16:43:00Z | 15 | 0 |
torch
|
[
"torch",
"clip",
"vision",
"transformers",
"interpretability",
"sparse autoencoder",
"sae",
"mechanistic interpretability",
"feature-extraction",
"en",
"license:apache-2.0",
"region:us"
] |
feature-extraction
| 2024-11-01T16:42:50Z |
---
language: en
tags:
- clip
- vision
- transformers
- interpretability
- sparse autoencoder
- sae
- mechanistic interpretability
license: apache-2.0
library_name: torch
pipeline_tag: feature-extraction
metrics:
- type: explained_variance
value: 94.3
pretty_name: Explained Variance %
range:
min: 0
max: 100
- type: l0
value: 160.479
pretty_name: L0
---
# CLIP-B-32 Sparse Autoencoder x64 vanilla - L1:5e-05


### Training Details
- Base Model: CLIP-ViT-B-32 (LAION DataComp.XL-s13B-b90K)
- Layer: 0
- Component: hook_mlp_out
### Model Architecture
- Input Dimension: 768
- SAE Dimension: 49,152
- Expansion Factor: x64 (vanilla architecture)
- Activation Function: ReLU
- Initialization: encoder_transpose_decoder
- Context Size: 50 tokens
### Performance Metrics
- L1 Coefficient: 5e-05
- L0 Sparsity: 160.4793
- Explained Variance: 0.9428 (94.28%)
### Training Configuration
- Learning Rate: 0.0004
- LR Scheduler: Cosine Annealing with Warmup (200 steps)
- Epochs: 10
- Gradient Clipping: 1.0
- Device: NVIDIA Quadro RTX 8000
**Experiment Tracking:**
- Weights & Biases Run ID: irrbz56r
- Full experiment details: https://wandb.ai/perceptual-alignment/clip/runs/irrbz56r/overview
- Git Commit: e22dd02726b74a054a779a4805b96059d83244aa
## Citation
```bibtex
@misc{2024josephsparseautoencoders,
title={Sparse Autoencoders for CLIP-ViT-B-32},
author={Joseph, Sonia},
year={2024},
publisher={Prisma-Multimodal},
url={https://huggingface.co/Prisma-Multimodal},
note={Layer 0, hook_mlp_out, Run ID: irrbz56r}
}
```
|
RichardErkhov/MrRobotoAI_-_Frejya-0.8b-gguf
|
RichardErkhov
| 2024-11-01T16:42:59Z | 7 | 0 | null |
[
"gguf",
"arxiv:2311.03099",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-11-01T15:07:22Z |
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Frejya-0.8b - GGUF
- Model creator: https://huggingface.co/MrRobotoAI/
- Original model: https://huggingface.co/MrRobotoAI/Frejya-0.8b/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [Frejya-0.8b.Q2_K.gguf](https://huggingface.co/RichardErkhov/MrRobotoAI_-_Frejya-0.8b-gguf/blob/main/Frejya-0.8b.Q2_K.gguf) | Q2_K | 2.53GB |
| [Frejya-0.8b.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/MrRobotoAI_-_Frejya-0.8b-gguf/blob/main/Frejya-0.8b.Q3_K_S.gguf) | Q3_K_S | 2.95GB |
| [Frejya-0.8b.Q3_K.gguf](https://huggingface.co/RichardErkhov/MrRobotoAI_-_Frejya-0.8b-gguf/blob/main/Frejya-0.8b.Q3_K.gguf) | Q3_K | 3.28GB |
| [Frejya-0.8b.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/MrRobotoAI_-_Frejya-0.8b-gguf/blob/main/Frejya-0.8b.Q3_K_M.gguf) | Q3_K_M | 3.28GB |
| [Frejya-0.8b.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/MrRobotoAI_-_Frejya-0.8b-gguf/blob/main/Frejya-0.8b.Q3_K_L.gguf) | Q3_K_L | 3.56GB |
| [Frejya-0.8b.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/MrRobotoAI_-_Frejya-0.8b-gguf/blob/main/Frejya-0.8b.IQ4_XS.gguf) | IQ4_XS | 3.67GB |
| [Frejya-0.8b.Q4_0.gguf](https://huggingface.co/RichardErkhov/MrRobotoAI_-_Frejya-0.8b-gguf/blob/main/Frejya-0.8b.Q4_0.gguf) | Q4_0 | 3.83GB |
| [Frejya-0.8b.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/MrRobotoAI_-_Frejya-0.8b-gguf/blob/main/Frejya-0.8b.IQ4_NL.gguf) | IQ4_NL | 3.87GB |
| [Frejya-0.8b.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/MrRobotoAI_-_Frejya-0.8b-gguf/blob/main/Frejya-0.8b.Q4_K_S.gguf) | Q4_K_S | 3.86GB |
| [Frejya-0.8b.Q4_K.gguf](https://huggingface.co/RichardErkhov/MrRobotoAI_-_Frejya-0.8b-gguf/blob/main/Frejya-0.8b.Q4_K.gguf) | Q4_K | 4.07GB |
| [Frejya-0.8b.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/MrRobotoAI_-_Frejya-0.8b-gguf/blob/main/Frejya-0.8b.Q4_K_M.gguf) | Q4_K_M | 4.07GB |
| [Frejya-0.8b.Q4_1.gguf](https://huggingface.co/RichardErkhov/MrRobotoAI_-_Frejya-0.8b-gguf/blob/main/Frejya-0.8b.Q4_1.gguf) | Q4_1 | 4.24GB |
| [Frejya-0.8b.Q5_0.gguf](https://huggingface.co/RichardErkhov/MrRobotoAI_-_Frejya-0.8b-gguf/blob/main/Frejya-0.8b.Q5_0.gguf) | Q5_0 | 4.65GB |
| [Frejya-0.8b.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/MrRobotoAI_-_Frejya-0.8b-gguf/blob/main/Frejya-0.8b.Q5_K_S.gguf) | Q5_K_S | 4.65GB |
| [Frejya-0.8b.Q5_K.gguf](https://huggingface.co/RichardErkhov/MrRobotoAI_-_Frejya-0.8b-gguf/blob/main/Frejya-0.8b.Q5_K.gguf) | Q5_K | 4.78GB |
| [Frejya-0.8b.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/MrRobotoAI_-_Frejya-0.8b-gguf/blob/main/Frejya-0.8b.Q5_K_M.gguf) | Q5_K_M | 4.78GB |
| [Frejya-0.8b.Q5_1.gguf](https://huggingface.co/RichardErkhov/MrRobotoAI_-_Frejya-0.8b-gguf/blob/main/Frejya-0.8b.Q5_1.gguf) | Q5_1 | 5.07GB |
| [Frejya-0.8b.Q6_K.gguf](https://huggingface.co/RichardErkhov/MrRobotoAI_-_Frejya-0.8b-gguf/blob/main/Frejya-0.8b.Q6_K.gguf) | Q6_K | 5.53GB |
| [Frejya-0.8b.Q8_0.gguf](https://huggingface.co/RichardErkhov/MrRobotoAI_-_Frejya-0.8b-gguf/blob/main/Frejya-0.8b.Q8_0.gguf) | Q8_0 | 7.17GB |
Original model description:
---
base_model:
- MrRobotoAI/Loki-v5.2
- OmnicromsBrain/TestmodelC
- vaitech/open-hermes-sd-finetune-erot-story
- MrRobotoAI/Hathor-v4.7
- OmnicromsBrain/Eros_Scribe-7b
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the linear [DARE](https://arxiv.org/abs/2311.03099) merge method using [OmnicromsBrain/Eros_Scribe-7b](https://huggingface.co/OmnicromsBrain/Eros_Scribe-7b) as a base.
### Models Merged
The following models were included in the merge:
* [MrRobotoAI/Loki-v5.2](https://huggingface.co/MrRobotoAI/Loki-v5.2)
* [OmnicromsBrain/TestmodelC](https://huggingface.co/OmnicromsBrain/TestmodelC)
* [vaitech/open-hermes-sd-finetune-erot-story](https://huggingface.co/vaitech/open-hermes-sd-finetune-erot-story)
* [MrRobotoAI/Hathor-v4.7](https://huggingface.co/MrRobotoAI/Hathor-v4.7)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
  - model: MrRobotoAI/Loki-v5.2
    parameters:
      weight: 0.2
      density: 0.9
  - model: vaitech/open-hermes-sd-finetune-erot-story
    parameters:
      weight: 0.2
      density: 0.9
  - model: OmnicromsBrain/Eros_Scribe-7b
    parameters:
      weight: 0.2
      density: 0.9
  - model: MrRobotoAI/Hathor-v4.7
    parameters:
      weight: 0.2
      density: 0.9
  - model: OmnicromsBrain/TestmodelC
    parameters:
      weight: 0.2
      density: 0.9
merge_method: dare_linear
base_model: OmnicromsBrain/Eros_Scribe-7b
parameters:
  normalize: true
dtype: float16
```
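The `dare_linear` method configured above can be sketched roughly as follows (an illustrative NumPy sketch of the DARE idea, not mergekit's actual implementation): for each parameter tensor, take each model's delta from the base, randomly drop a `1 - density` fraction of the delta's entries, rescale the survivors by `1 / density`, then add the weight-normalized sum of rescaled deltas back onto the base.

```python
import numpy as np

def dare_linear(base, models, weights, density, normalize=True, seed=0):
    """Sketch of a DARE linear merge for one parameter tensor.

    base:    base-model tensor
    models:  list of fine-tuned tensors, same shape as base
    weights: per-model mixing weights
    density: fraction of delta entries to KEEP (drop rate = 1 - density)
    """
    rng = np.random.default_rng(seed)
    w = np.asarray(weights, dtype=float)
    if normalize:  # corresponds to `normalize: true` in the config
        w = w / w.sum()
    merged_delta = np.zeros_like(base)
    for m, wi in zip(models, w):
        delta = m - base                               # task vector
        keep = rng.random(delta.shape) < density       # random drop mask
        merged_delta += wi * (delta * keep) / density  # rescale survivors
    return base + merged_delta
```

With `density: 0.9` as in the config, roughly 10% of each delta's entries are zeroed out before the weighted sum.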
|
| Prisma-Multimodal/sparse-autoencoder-clip-b-32-sae-vanilla-x64-layer-0-hook_mlp_out-l1-8e-05 | Prisma-Multimodal | 2024-11-01T16:42:41Z | 70 | 0 | torch | ["torch", "clip", "vision", "transformers", "interpretability", "sparse autoencoder", "sae", "mechanistic interpretability", "feature-extraction", "en", "license:apache-2.0", "region:us"] | feature-extraction | 2024-11-01T16:42:30Z |
---
language: en
tags:
- clip
- vision
- transformers
- interpretability
- sparse autoencoder
- sae
- mechanistic interpretability
license: apache-2.0
library_name: torch
pipeline_tag: feature-extraction
metrics:
- type: explained_variance
value: 91.0
pretty_name: Explained Variance %
range:
min: 0
max: 100
- type: l0
value: 84.883
pretty_name: L0
---
# CLIP-B-32 Sparse Autoencoder x64 vanilla - L1:8e-05


### Training Details
- Base Model: CLIP-ViT-B-32 (LAION DataComp.XL-s13B-b90K)
- Layer: 0
- Component: hook_mlp_out
### Model Architecture
- Input Dimension: 768
- SAE Dimension: 49,152
- Expansion Factor: x64 (vanilla architecture)
- Activation Function: ReLU
- Initialization: encoder_transpose_decoder
- Context Size: 50 tokens
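The vanilla architecture above (a single hidden layer with ReLU features and an encoder initialized as the decoder's transpose) can be sketched in plain NumPy; the function and parameter names here are illustrative, not the training code's.

```python
import numpy as np

def init_sae(d_in, d_sae, seed=0):
    """encoder_transpose_decoder init: encoder starts as the decoder's transpose."""
    rng = np.random.default_rng(seed)
    W_dec = rng.standard_normal((d_sae, d_in)) / np.sqrt(d_in)
    return {"W_enc": W_dec.T.copy(), "W_dec": W_dec,
            "b_enc": np.zeros(d_sae), "b_dec": np.zeros(d_in)}

def sae_forward(p, x):
    """Vanilla SAE: ReLU feature activations, linear reconstruction."""
    f = np.maximum(0.0, (x - p["b_dec"]) @ p["W_enc"] + p["b_enc"])  # sparse features
    x_hat = f @ p["W_dec"] + p["b_dec"]                              # reconstruction
    return f, x_hat

# This card's SAE is d_in=768, d_sae=49_152 (x64); tiny dims here for illustration.
params = init_sae(d_in=8, d_sae=32)
f, x_hat = sae_forward(params, np.ones((4, 8)))
```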
### Performance Metrics
- L1 Coefficient: 8e-05
- L0 Sparsity: 84.8831
- Explained Variance: 0.9099 (90.99%)
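The two metrics reported here are typically computed as follows (a sketch of the usual definitions, not necessarily the exact evaluation code): L0 is the mean number of nonzero feature activations per input, and explained variance is one minus the ratio of residual variance to input variance.

```python
import numpy as np

def l0_sparsity(f):
    """Mean number of active (nonzero) SAE features per input."""
    return (f != 0).sum(axis=-1).mean()

def explained_variance(x, x_hat):
    """1 - Var(residual) / Var(input); close to 1 for good reconstructions."""
    return 1.0 - np.var(x - x_hat) / np.var(x)
```

Under these definitions, this SAE activates about 85 of its 49,152 features per input on average while reconstructing about 91% of the activation variance.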
### Training Configuration
- Learning Rate: 0.0004
- LR Scheduler: Cosine Annealing with Warmup (200 steps)
- Epochs: 10
- Gradient Clipping: 1.0
- Device: NVIDIA Quadro RTX 8000
**Experiment Tracking:**
- Weights & Biases Run ID: qkn3qemo
- Full experiment details: https://wandb.ai/perceptual-alignment/clip/runs/qkn3qemo/overview
- Git Commit: e22dd02726b74a054a779a4805b96059d83244aa
## Citation
```bibtex
@misc{2024josephsparseautoencoders,
title={Sparse Autoencoders for CLIP-ViT-B-32},
author={Joseph, Sonia},
year={2024},
publisher={Prisma-Multimodal},
url={https://huggingface.co/Prisma-Multimodal},
note={Layer 0, hook_mlp_out, Run ID: qkn3qemo}
}
```
|
| Prisma-Multimodal/sparse-autoencoder-clip-b-32-sae-vanilla-x64-layer-0-hook_mlp_out-l1-0.0001 | Prisma-Multimodal | 2024-11-01T16:42:29Z | 48 | 0 | torch | ["torch", "clip", "vision", "transformers", "interpretability", "sparse autoencoder", "sae", "mechanistic interpretability", "feature-extraction", "en", "license:apache-2.0", "region:us"] | feature-extraction | 2024-11-01T16:42:20Z |
---
language: en
tags:
- clip
- vision
- transformers
- interpretability
- sparse autoencoder
- sae
- mechanistic interpretability
license: apache-2.0
library_name: torch
pipeline_tag: feature-extraction
metrics:
- type: explained_variance
value: 89.2
pretty_name: Explained Variance %
range:
min: 0
max: 100
- type: l0
value: 57.312
pretty_name: L0
---
# CLIP-B-32 Sparse Autoencoder x64 vanilla - L1:0.0001


### Training Details
- Base Model: CLIP-ViT-B-32 (LAION DataComp.XL-s13B-b90K)
- Layer: 0
- Component: hook_mlp_out
### Model Architecture
- Input Dimension: 768
- SAE Dimension: 49,152
- Expansion Factor: x64 (vanilla architecture)
- Activation Function: ReLU
- Initialization: encoder_transpose_decoder
- Context Size: 50 tokens
### Performance Metrics
- L1 Coefficient: 0.0001
- L0 Sparsity: 57.3120
- Explained Variance: 0.8921 (89.21%)
### Training Configuration
- Learning Rate: 0.0004
- LR Scheduler: Cosine Annealing with Warmup (200 steps)
- Epochs: 10
- Gradient Clipping: 1.0
- Device: NVIDIA Quadro RTX 8000
**Experiment Tracking:**
- Weights & Biases Run ID: ldjq1773
- Full experiment details: https://wandb.ai/perceptual-alignment/clip/runs/ldjq1773/overview
- Git Commit: e22dd02726b74a054a779a4805b96059d83244aa
## Citation
```bibtex
@misc{2024josephsparseautoencoders,
title={Sparse Autoencoders for CLIP-ViT-B-32},
author={Joseph, Sonia},
year={2024},
publisher={Prisma-Multimodal},
url={https://huggingface.co/Prisma-Multimodal},
note={Layer 0, hook_mlp_out, Run ID: ldjq1773}
}
```
|
| Prisma-Multimodal/sparse-autoencoder-clip-b-32-sae-vanilla-x64-layer-1-hook_mlp_out-l1-1e-05 | Prisma-Multimodal | 2024-11-01T16:42:10Z | 14 | 0 | torch | ["torch", "clip", "vision", "transformers", "interpretability", "sparse autoencoder", "sae", "mechanistic interpretability", "feature-extraction", "en", "license:apache-2.0", "region:us"] | feature-extraction | 2024-11-01T16:42:00Z |
---
language: en
tags:
- clip
- vision
- transformers
- interpretability
- sparse autoencoder
- sae
- mechanistic interpretability
license: apache-2.0
library_name: torch
pipeline_tag: feature-extraction
metrics:
- type: explained_variance
value: 98.4
pretty_name: Explained Variance %
range:
min: 0
max: 100
- type: l0
value: 1478.889
pretty_name: L0
---
# CLIP-B-32 Sparse Autoencoder x64 vanilla - L1:1e-05


### Training Details
- Base Model: CLIP-ViT-B-32 (LAION DataComp.XL-s13B-b90K)
- Layer: 1
- Component: hook_mlp_out
### Model Architecture
- Input Dimension: 768
- SAE Dimension: 49,152
- Expansion Factor: x64 (vanilla architecture)
- Activation Function: ReLU
- Initialization: encoder_transpose_decoder
- Context Size: 50 tokens
### Performance Metrics
- L1 Coefficient: 1e-05
- L0 Sparsity: 1478.8888
- Explained Variance: 0.9842 (98.42%)
### Training Configuration
- Learning Rate: 0.0004
- LR Scheduler: Cosine Annealing with Warmup (200 steps)
- Epochs: 10
- Gradient Clipping: 1.0
- Device: NVIDIA Quadro RTX 8000
**Experiment Tracking:**
- Weights & Biases Run ID: 81te6oo1
- Full experiment details: https://wandb.ai/perceptual-alignment/clip/runs/81te6oo1/overview
- Git Commit: e22dd02726b74a054a779a4805b96059d83244aa
## Citation
```bibtex
@misc{2024josephsparseautoencoders,
title={Sparse Autoencoders for CLIP-ViT-B-32},
author={Joseph, Sonia},
year={2024},
publisher={Prisma-Multimodal},
url={https://huggingface.co/Prisma-Multimodal},
note={Layer 1, hook_mlp_out, Run ID: 81te6oo1}
}
```
|
| Prisma-Multimodal/sparse-autoencoder-clip-b-32-sae-vanilla-x64-layer-1-hook_resid_post-l1-1e-05 | Prisma-Multimodal | 2024-11-01T16:41:59Z | 10 | 0 | torch | ["torch", "clip", "vision", "transformers", "interpretability", "sparse autoencoder", "sae", "mechanistic interpretability", "feature-extraction", "en", "license:apache-2.0", "region:us"] | feature-extraction | 2024-11-01T16:41:50Z |
---
language: en
tags:
- clip
- vision
- transformers
- interpretability
- sparse autoencoder
- sae
- mechanistic interpretability
license: apache-2.0
library_name: torch
pipeline_tag: feature-extraction
metrics:
- type: explained_variance
value: 98.4
pretty_name: Explained Variance %
range:
min: 0
max: 100
- type: l0
value: 1509.243
pretty_name: L0
---
# CLIP-B-32 Sparse Autoencoder x64 vanilla - L1:1e-05


### Training Details
- Base Model: CLIP-ViT-B-32 (LAION DataComp.XL-s13B-b90K)
- Layer: 1
- Component: hook_resid_post
### Model Architecture
- Input Dimension: 768
- SAE Dimension: 49,152
- Expansion Factor: x64 (vanilla architecture)
- Activation Function: ReLU
- Initialization: encoder_transpose_decoder
- Context Size: 50 tokens
### Performance Metrics
- L1 Coefficient: 1e-05
- L0 Sparsity: 1509.2427
- Explained Variance: 0.9837 (98.37%)
### Training Configuration
- Learning Rate: 0.0004
- LR Scheduler: Cosine Annealing with Warmup (200 steps)
- Epochs: 10
- Gradient Clipping: 1.0
- Device: NVIDIA Quadro RTX 8000
**Experiment Tracking:**
- Weights & Biases Run ID: hcrv0eqc
- Full experiment details: https://wandb.ai/perceptual-alignment/clip/runs/hcrv0eqc/overview
- Git Commit: e22dd02726b74a054a779a4805b96059d83244aa
## Citation
```bibtex
@misc{2024josephsparseautoencoders,
title={Sparse Autoencoders for CLIP-ViT-B-32},
author={Joseph, Sonia},
year={2024},
publisher={Prisma-Multimodal},
url={https://huggingface.co/Prisma-Multimodal},
note={Layer 1, hook_resid_post, Run ID: hcrv0eqc}
}
```
|