---
pipeline_tag: text-generation
base_model: ibm-granite/granite-8b-code-instruct-4k
inference: false
license: apache-2.0
datasets:
- bigcode/commitpackft
- TIGER-Lab/MathInstruct
- meta-math/MetaMathQA
- glaiveai/glaive-code-assistant-v3
- glaive-function-calling-v2
- bugdaryan/sql-create-context-instruction
- garage-bAInd/Open-Platypus
- nvidia/HelpSteer
metrics:
- code_eval
library_name: transformers
tags:
- code
- granite
- openvino
- openvino-export
model-index:
- name: granite-8b-code-instruct-4k
  results:
  - task:
      type: text-generation
    dataset:
      name: HumanEvalSynthesis(Python)
      type: bigcode/humanevalpack
    metrics:
    - type: pass@1
      value: 57.9
      name: pass@1
    - type: pass@1
      value: 52.4
      name: pass@1
    - type: pass@1
      value: 58.5
      name: pass@1
    - type: pass@1
      value: 43.3
      name: pass@1
    - type: pass@1
      value: 48.2
      name: pass@1
    - type: pass@1
      value: 37.2
      name: pass@1
    - type: pass@1
      value: 53.0
      name: pass@1
    - type: pass@1
      value: 42.7
      name: pass@1
    - type: pass@1
      value: 52.4
      name: pass@1
    - type: pass@1
      value: 36.6
      name: pass@1
    - type: pass@1
      value: 43.9
      name: pass@1
    - type: pass@1
      value: 16.5
      name: pass@1
    - type: pass@1
      value: 39.6
      name: pass@1
    - type: pass@1
      value: 40.9
      name: pass@1
    - type: pass@1
      value: 48.2
      name: pass@1
    - type: pass@1
      value: 41.5
      name: pass@1
    - type: pass@1
      value: 39.0
      name: pass@1
    - type: pass@1
      value: 32.9
      name: pass@1
---

This model was converted to the OpenVINO format from [`ibm-granite/granite-8b-code-instruct-4k`](https://huggingface.co/ibm-granite/granite-8b-code-instruct-4k) using [optimum-intel](https://github.com/huggingface/optimum-intel) via the [export](https://huggingface.co/spaces/echarlaix/openvino-export) space.

First make sure you have optimum-intel installed:

```bash
pip install optimum[openvino]
```

To load the model:

```python
from optimum.intel import OVModelForCausalLM

model_id = "NitroLLM/granite-8b-code-instruct-4k-openvino"
model = OVModelForCausalLM.from_pretrained(model_id)
```
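
Once loaded, `OVModelForCausalLM` exposes the usual `generate` API, so inference works as it does with a regular `transformers` model. Below is a minimal sketch of running the instruct model with its chat template; the prompt and generation settings are illustrative, not recommendations:

```python
from optimum.intel import OVModelForCausalLM
from transformers import AutoTokenizer

model_id = "NitroLLM/granite-8b-code-instruct-4k-openvino"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = OVModelForCausalLM.from_pretrained(model_id)

# Granite instruct models ship a chat template; use it to format the prompt.
chat = [{"role": "user", "content": "Write a Python function that computes a factorial."}]
input_ids = tokenizer.apply_chat_template(
    chat, add_generation_prompt=True, return_tensors="pt"
)

# Illustrative settings; adjust max_new_tokens for your use case.
output = model.generate(input_ids, max_new_tokens=200)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```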