# Ollama Autocoder
A simple-to-use Ollama autocompletion engine with exposed options and streaming functionality.
![example](example.gif)
## Requirements
- Ollama must be serving on the API endpoint configured in settings
- To install Ollama, visit [ollama.ai](https://ollama.ai)
- Ollama must have the model specified in settings installed
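A quick way to check both requirements from a terminal (the endpoint and model name below are examples; use whatever you set in the extension's settings):

```shell
# Verify Ollama is serving on the configured endpoint
# (http://localhost:11434 is Ollama's default)
curl http://localhost:11434/api/tags

# Pull the model named in settings if it is not installed yet
ollama pull llama2
```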
## How to Use
1. In a text document, press space. The option `Autocomplete with Ollama`, or a preview of the first line of the completion, will appear. Press `enter` to start generation.
   - Alternatively, you can run the `Autocomplete with Ollama` command from the command palette (or set a keybind).
2. After startup, the tokens will be streamed to your cursor.
3. To stop the generation early, press the "Cancel" button on the "Ollama Autocoder" notification or type something.
4. Once generation stops, the notification will disappear.
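The token streaming in step 2 follows Ollama's documented `/api/generate` streaming format: the server sends one JSON object per line, each carrying a token in its `response` field, with `done` marking the end. This is a minimal client-side sketch of that parsing (the sample lines are illustrative, not real server output, and this is not the extension's actual implementation):

```python
import json

# Sample of NDJSON lines as streamed by Ollama's /api/generate
# endpoint with streaming enabled (tokens here are made up).
sample_stream = [
    '{"response": "def ", "done": false}',
    '{"response": "add", "done": false}',
    '{"response": "(a, b):", "done": false}',
    '{"response": "", "done": true}',
]

def collect_tokens(lines):
    """Concatenate streamed tokens until the final object reports done."""
    text = ""
    for line in lines:
        obj = json.loads(line)
        text += obj["response"]  # each object carries one token
        if obj["done"]:          # final object signals end of generation
            break
    return text

print(collect_tokens(sample_stream))  # → def add(a, b):
```

Because tokens arrive one object at a time, the extension can insert each one at the cursor as it lands rather than waiting for the full completion.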
## Notes
- For the fastest results, an Nvidia GPU or Apple Silicon is recommended; a CPU still works with small models.
- The prompt only includes text behind the cursor; the model is unaware of any text after its position.
- For CPU-only, low-end, or battery-powered devices, it is highly recommended to disable the `response preview` option, since it triggers the model automatically.
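The behind-the-cursor behavior above amounts to sending only the document text up to the cursor offset as the prompt. A minimal sketch (function and variable names here are hypothetical, not the extension's):

```python
def build_prompt(document: str, cursor_offset: int) -> str:
    """Return only the text before the cursor; anything after it
    is never included in the prompt, so the model cannot see it."""
    return document[:cursor_offset]

doc = "def add(a, b):\n    return \nprint(add(1, 2))"
cursor = doc.index("return ") + len("return ")
prompt = build_prompt(doc, cursor)
# prompt ends right after "return "; the print line below the
# cursor is invisible to the model
```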