:: rustral ::

rustral: LLM coding assistant for Rust


rustral aims to provide best-in-class coding assistance by:

  1. starting from the best-in-class base models
    • currently: Mistral-7B-v0.1
    • next: Mixtral-8x7B-v0.1
  2. limiting its scope exclusively to Rust code generation
  3. prescribing a specific way of interacting with it
  4. training on a large and diverse body of Rust code

best-in-class (soon!)
opinionated
permissively licensed

how to run


  1. Download and compile llama.cpp
  2. Download the latest *.gguf.bin file from huggingface.co/0xideas/rustral into the models folder of llama.cpp
  3. Prompt the model from the llama.cpp root folder with the following command (a filled-in example follows below):
    ./main \
    -m PATH_TO_MODEL \
    -p PROMPT \
    -s 1 -n 128 -t 2
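
For example, assuming the downloaded file is named rustral.gguf.bin (the actual filename on Hugging Face may differ) and the prompt is stored in a file prompt.txt, the invocation could look like this:

    ./main \
    -m models/rustral.gguf.bin \
    -p "$(cat prompt.txt)" \
    -s 1 -n 128 -t 2

Here -m points to the model file, -p passes the prompt string, -s fixes the random seed, -n caps the number of generated tokens, and -t sets the number of CPU threads.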



prompt format


The currently used prompt format is this:

"""
Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.

### Instruction:
{instruction}

### Input:
{input}

### Response:
"""

The {instruction} element is the "docstring", i.e. the documentation comment at the top of the target function or struct.

The {input} element consists of up to two code blocks that immediately precede the target code. A block is delimited from the preceding and following blocks by two newline characters (one blank line).
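
To make this concrete, here is a hypothetical example; the doc comment, the preceding code blocks, and all function names are invented for illustration. The instruction is the doc comment of the function to be generated, and the input is the two blocks that precede it:

"""
Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.

### Instruction:
/// Returns the sum of all even numbers in `values`.

### Input:
use std::collections::HashMap;

fn count_words(text: &str) -> HashMap<&str, usize> {
    let mut counts = HashMap::new();
    for word in text.split_whitespace() {
        *counts.entry(word).or_insert(0) += 1;
    }
    counts
}

### Response:
"""

A well-behaved completion would then be something like:

fn sum_of_evens(values: &[i64]) -> i64 {
    values.iter().filter(|&&v| v % 2 == 0).sum()
}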



support us


We currently need:

  • GPU hours
  • Money for (automated) labelling and for buying GPU hours

And we can always use:

  • Clean, well-structured data

So if you have any of these to share, please get in touch!