This is the Fabelous autocoder. It is based on the Ollama Autocoder extension.

# Ollama Autocoder

A simple-to-use Ollama autocompletion engine with exposed options and streaming functionality.

![example](example.gif)

## Requirements

- Ollama must be serving on the API endpoint set in the extension settings (a quick connectivity check is sketched below).
  - To install Ollama, visit ollama.ai.
- The model set in the extension settings must be installed in Ollama.
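
As a rough illustration, here is a minimal TypeScript sketch that checks both requirements against Ollama's REST API. The endpoint and model name are placeholders for whatever you have configured in the settings, and this is not the extension's own code:

```typescript
// Minimal sketch (assumes Node 18+ for the global fetch API).
// ENDPOINT and MODEL are placeholders for your extension settings.
const ENDPOINT = "http://localhost:11434"; // Ollama's default address
const MODEL = "llama2";                    // whatever model you configured

async function checkOllama(): Promise<void> {
  // 1. Is Ollama serving on the configured endpoint?
  const res = await fetch(`${ENDPOINT}/api/tags`);
  if (!res.ok) throw new Error(`Ollama not reachable: HTTP ${res.status}`);

  // 2. Is the configured model installed?
  const { models } = (await res.json()) as { models: { name: string }[] };
  const installed = models.some((m) => m.name.startsWith(MODEL));
  console.log(installed ? `Model "${MODEL}" is installed.` : `Run: ollama pull ${MODEL}`);
}

checkOllama().catch(console.error);
```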

## How to Use

1. In a text document, press space (or any character in the completion keys setting). The option Autocomplete with Ollama, or a preview of the first line of the completion, will appear. Press Enter to start generation.
   - Alternatively, you can run the Autocomplete with Ollama command from the command palette (or set a keybind).
2. After startup, the tokens will be streamed to your cursor (a minimal sketch of such a streaming request follows this list).
3. To stop the generation early, press the "Cancel" button on the "Ollama Autocoder" notification or type something.
4. Once generation stops, the notification will disappear.
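
To illustrate steps 2 and 3, here is a hedged TypeScript sketch of how a streaming request to Ollama's `/api/generate` endpoint could look, not the extension's actual implementation. It reads the newline-delimited JSON chunks Ollama streams back and can be cancelled mid-stream; the endpoint and model name are placeholders:

```typescript
// Sketch only: streaming a completion from Ollama, with early cancellation.
const controller = new AbortController(); // call controller.abort() to "Cancel"

async function streamCompletion(prompt: string): Promise<void> {
  const res = await fetch("http://localhost:11434/api/generate", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ model: "llama2", prompt, stream: true }),
    signal: controller.signal, // aborting here stops generation early
  });

  // Ollama streams newline-delimited JSON objects, each carrying a `response` token.
  const reader = res.body!.getReader();
  const decoder = new TextDecoder();
  let buffered = "";
  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    buffered += decoder.decode(value, { stream: true });
    let nl: number;
    while ((nl = buffered.indexOf("\n")) !== -1) {
      const line = buffered.slice(0, nl).trim();
      buffered = buffered.slice(nl + 1);
      if (!line) continue;
      const chunk = JSON.parse(line) as { response: string; done: boolean };
      process.stdout.write(chunk.response); // the extension would insert this at the cursor
      if (chunk.done) return;
    }
  }
}
```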

## Notes

- For the fastest results, an NVIDIA GPU or Apple Silicon is recommended. A CPU still works for small models.
- The prompt only sees text behind the cursor. The model is unaware of text in front of its position (see the sketch after this list).
- For CPU-only, low-end, or battery-powered devices, it is highly recommended to disable the response preview option, since it triggers the model automatically. Disabling it effectively makes continue inline always on. You can also increase the preview delay time instead.
- If you don't want inline generation to continue beyond the response preview, set the continue inline option in settings to false. This doesn't apply to the command palette.
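
As a rough illustration of the "behind the cursor" note above (not necessarily how this extension implements it), a VS Code extension can build such a prompt from only the text preceding the cursor:

```typescript
import * as vscode from "vscode";

// Sketch: build a prompt from only the text *behind* the cursor.
// The model never sees anything after `position`.
function promptBeforeCursor(
  document: vscode.TextDocument,
  position: vscode.Position
): string {
  const start = new vscode.Position(0, 0); // beginning of the document
  return document.getText(new vscode.Range(start, position));
}
```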