added a good README with explanation and video demo

This commit is contained in:
Falko Victor Habel 2024-09-10 14:37:07 +02:00
parent e160fee3f9
commit a595468644
4 changed files with 86 additions and 18 deletions

View File

@@ -1,25 +1,75 @@
# Fabelous Autocoder
A simple to use Ollama autocompletion engine with options exposed and streaming functionality
Fabelous Autocoder is a Visual Studio Code extension that provides an easy-to-use interface for Ollama autocompletion. This extension allows developers to use Ollama's powerful language models to generate code completions as they type. It is highly customizable, allowing users to configure various settings to fit their needs.
![Fabelous Autocoder in Action](./videos/demo.gif)
## Requirements
- Ollama must be serving on the API endpoint set in the settings
- To install Ollama, visit [ollama.ai](https://ollama.ai)
- The model configured in the settings must be installed in Ollama *(a quick check is sketched below)*
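Below is a minimal sketch of such a check, assuming the default local Ollama address (`http://localhost:11434`) and a placeholder model name; adjust both to match your settings.

```python
# Check that Ollama is reachable and that a given model is installed.
# OLLAMA_BASE and MODEL are assumptions -- replace them with the endpoint
# and model name from your extension settings.
import requests

OLLAMA_BASE = "http://localhost:11434"
MODEL = "llama3"  # placeholder model name

response = requests.get(f"{OLLAMA_BASE}/api/tags", timeout=5)
response.raise_for_status()  # raises if Ollama is not serving

installed = [m["name"] for m in response.json().get("models", [])]
print("Ollama is serving. Installed models:", installed)
print(f"Model '{MODEL}' available:", any(name.startswith(MODEL) for name in installed))
```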
## How to Use
## Features
1. In a text document, press space *(or any character in the `completion keys` setting)*. The option `Fabelous autocompletion` or a preview of the first line of autocompletion will appear. Press `enter` to start generation.
- Alternatively, you can run the `Fabelous autocompletion` command from the command palette (or set a keybinding).
2. After startup, the tokens will be streamed to your cursor.
3. To stop the generation early, press the "Cancel" button on the "Ollama Autocoder" notification or type something.
4. Once generation stops, the notification will disappear.
- Autocompletion using Ollama language models
- Customizable completion keys
- Inline preview of generated completions
- Configurable maximum tokens predicted
- Configurable prompt window size
- Configurable response preview delay
- Configurable temperature for the model
## Notes
## Installation
You can also download the extension from the Releases tab of the following Git repository:
- For fastest results, an Nvidia GPU or Apple Silicon is recommended. CPU still works on small models.
- The prompt only sees behind the cursor. The model is unaware of text in front of its position.
- For CPU-only, low-end, or battery-powered devices, it is highly recommended to disable the `response preview` option, as it automatically triggers the model. *This will cause `continue inline` to be always on.* You can also increase the `preview delay` time.
- If you don't want inline generation to continue beyond the response preview, set the `continue inline` option in settings to false. *This doesn't apply to the command palette.*
[Fabelous-Autocoder Git Repository](https://gitea.fabelous.app/fabel/Fabelous-Autocoder.git)
To do so, follow these steps:
1. Visit the repository link.
2. Click on the "Releases" tab.
3. Look for the latest release and click on it.
4. Download the extension file compatible with your operating system.
5. Install the extension manually in Visual Studio Code *(one command-line way to do this is shown below)*.
After installation, you'll be able to use the Fabelous Autocoder extension in your Visual Studio Code environment.
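If you prefer the terminal, a downloaded `.vsix` package can typically also be installed with VS Code's `code` command-line interface. The filename below is only an example; substitute the file you actually downloaded.

```bash
# Install a downloaded VSIX package into VS Code (example filename)
code --install-extension fabelous-autocoder-0.1.764.vsix
```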
## Configuration
Fabelous Autocoder is highly customizable, allowing users to configure various settings to fit their needs. To access the configuration settings, follow these steps:
1. Open Visual Studio Code
2. Click on the Settings icon on the sidebar (or press `Ctrl+,`)
3. Search for "Fabelous Autocoder" in the search bar
4. Configure the desired settings
Here are some of the available configuration options *(an example `settings.json` excerpt follows the list)*:
- `fabelous-autocoder.endpoint`: The endpoint of the Ollama REST API
- `fabelous-autocoder.authentication`: The authentication token for Ollama
- `fabelous-autocoder.model`: The model to use for generating completions
- `fabelous-autocoder.max tokens predicted`: The maximum number of tokens generated by the model
- `fabelous-autocoder.prompt window size`: The size of the prompt in characters
- `fabelous-autocoder.completion keys`: The characters that trigger the autocompletion item provider
- `fabelous-autocoder.response preview`: Whether to show a preview of the generated completion inline
- `fabelous-autocoder.preview max tokens`: The maximum number of tokens generated for the response preview
- `fabelous-autocoder.preview delay`: The time to wait before starting inline preview generation
- `fabelous-autocoder.continue inline`: Whether to continue autocompletion after the inline preview
- `fabelous-autocoder.temperature`: The temperature of the model
- `fabelous-autocoder.keep alive`: The time in minutes before Ollama unloads the model
Note that changing the `completion keys` setting requires a reload of Visual Studio Code.
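For reference, the excerpt below shows how a few of these options might look in a user `settings.json`. The keys are taken from the list above; the values are purely illustrative examples, not recommended defaults.

```jsonc
{
  // Example values only -- adjust to your own Ollama setup.
  "fabelous-autocoder.endpoint": "http://localhost:11434/api/generate",
  "fabelous-autocoder.model": "llama3",
  "fabelous-autocoder.max tokens predicted": 256,
  "fabelous-autocoder.prompt window size": 2000,
  "fabelous-autocoder.response preview": true,
  "fabelous-autocoder.preview delay": 1,
  "fabelous-autocoder.continue inline": true,
  "fabelous-autocoder.temperature": 0.5
}
```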
## Usage
To use Fabelous Autocoder, simply start typing in the editor. When the configured completion keys are pressed, the extension will generate a completion using the configured Ollama model. The completion will be displayed inline with a preview of the generated code. If the `continue inline` setting is enabled, the extension will continue generating completions after the inline preview.
To generate a multi-line completion, press `Enter` after the inline preview. This will open a new editor with the generated completion.
To customize the behavior of the extension, see the Configuration section above.
## License
Fabelous Autocoder is licensed under the CC BY-ND 4.0 license. See the [LICENSE](https://gitea.fabelous.app/fabel/Fabelous-Autocoder/src/branch/main/LICENSE) file for more information.
## Acknowledgments
Fabelous Autocoder was created by [Falko Habel](https://gitea.fabelous.app/fabel). It was inspired by the [Ollama](https://ollama.ai) project.

View File

@@ -1,8 +1,7 @@
{
"name": "fabelous-autocoder",
"displayName": "Fabelous Autocoder",
"description": "A simple to use Ollama autocompletion engine with options exposed and streaming functionality",
"version": "0.1.764",
"description": "A simple to use Ollama autocompletion Plugin",
"icon": "icon.png",
"publisher": "fabel",
"license": "CC BY-ND 4.0",

19
src/test.py Normal file
View File

@@ -0,0 +1,19 @@
def bubblesort(arr):
    # Note: despite the name, this compares arr[i] against every later element
    # and swaps, which is an exchange/selection-style sort rather than a classic bubble sort.
    for i in range(len(arr)):
        for j in range(i + 1, len(arr)):
            if arr[j] < arr[i]:
                arr[i], arr[j] = arr[j], arr[i]  # swap elements
    return arr


print(bubblesort([1, 2, 3, 4, 5, 6, 10, 6]))


def quicksort(arr):
    # Recursive quicksort using list comprehensions around a middle pivot.
    if len(arr) <= 1:
        return arr
    else:
        pivot = arr[len(arr) // 2]
        left = [x for x in arr if x < pivot]
        middle = [x for x in arr if x == pivot]
        right = [x for x in arr if x > pivot]
        return quicksort(left) + middle + quicksort(right)

BIN
videos/demo.gif Normal file

Binary file not shown.

Size: 213 KiB