This is the Fabelous Autocoder. It is based on the Ollama Autocoder extension.

README.md

Ollama Autocoder

A simple-to-use Ollama autocompletion engine with exposed options and streaming functionality

Requirements

  • Ollama must be serving on the API endpoint set in the extension settings.
    • To install Ollama, visit ollama.ai
  • The model specified in the extension settings must be installed in Ollama.
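Both requirements can be checked from a terminal. This sketch assumes Ollama's default endpoint (http://localhost:11434) and uses llama2 purely as an example model name; substitute whatever you configure in the extension settings:

```shell
# Install a model for Ollama to serve (example name; use the model from your settings)
ollama pull llama2 || echo "Ollama CLI not found -- install it from ollama.ai"
# Check that the server is reachable on the default endpoint
curl -s http://localhost:11434/ || echo "Ollama is not serving yet"
```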

How to Use

  1. In a text document, press space. The option Autocomplete with Ollama will appear. Press enter to start generation.
    • Alternatively, you can run the Autocomplete with Ollama command from the command palette (or set a keybind).
  2. After startup, the tokens will be streamed to your cursor.
  3. To stop the generation early, press the "Cancel" button on the "Ollama Autocoder" notification.
  4. Once generation stops, the notification will disappear.
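The tokens in step 2 arrive as newline-delimited JSON chunks from Ollama's streaming /api/generate endpoint. A minimal sketch of how such a stream can be accumulated (field names follow Ollama's documented streaming format; this is illustrative, not the extension's actual source):

```python
import json

def collect_stream(lines):
    """Accumulate the 'response' field of each streamed JSON line
    until a chunk reports 'done': true."""
    pieces = []
    for line in lines:
        chunk = json.loads(line)
        pieces.append(chunk.get("response", ""))
        if chunk.get("done"):
            break
    return "".join(pieces)

# Simulated stream; in practice these lines come from the HTTP response body.
fake_stream = [
    '{"response": "def add(a, b):", "done": false}',
    '{"response": "\\n    return a + b", "done": true}',
]
completion = collect_stream(fake_stream)
```

Cancelling mid-generation amounts to simply not reading any further chunks, which is why the extension can abort cleanly when you press "Cancel".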

Notes

  • For the fastest results, an Nvidia GPU or Apple Silicon is recommended; CPU inference still works for small models.
  • The prompt only sees text behind the cursor. The model is unaware of any text after the cursor position.
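The behind-the-cursor rule can be sketched as a one-line slice (a hypothetical helper for illustration, not the extension's source):

```python
def build_prompt(document_text, cursor_offset):
    """Return only the text before the cursor -- the model never
    sees anything that comes after it."""
    return document_text[:cursor_offset]

doc = "import os\n# cursor is HERE\nprint('the model cannot see this line')"
prompt = build_prompt(doc, doc.index("HERE"))
```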