
[Showcase](/c/showcase/41)

# Your second brain at the keyboard

190 views · 17 likes · 6 posts


Yazalde Filimone (@yazaldefilimone) · 2025-04-03 · ♥ 6

Hi everyone 👋

A few days ago Guillermo Rauch shared some ideas on X — one of them was a keyboard autocomplete powered by LLMs that works across different windows and apps.

That really got me thinking. I wanted to build something like that — but without needing a virtual keyboard.

So I spent a few days exploring, and I built a small prototype in Rust, using Ollama to run the LLM locally.

Here’s what it does:

- It works anywhere (browser, terminal, whatsapp…)
- It knows where your cursor is
- It can suggest text based on what you’re writing
- You can accept the suggestion by typing
- You can reject it with `Backspace`
- It remembers what you wrote before, even in other apps
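The accept-by-typing / reject-with-Backspace flow above could be modeled roughly like this — a minimal Rust sketch with made-up types, not the actual prototype's code:

```rust
/// A suggestion the LLM proposed that hasn't been committed yet.
struct Suggestion {
    pending: String,
}

enum Key {
    Char(char),
    Backspace,
}

impl Suggestion {
    fn new(text: &str) -> Self {
        Self { pending: text.to_string() }
    }

    /// Returns the remaining suggestion after a keystroke,
    /// or None once the suggestion is rejected.
    fn on_key(mut self, key: Key) -> Option<Self> {
        match key {
            // Typing the next suggested char "accepts" it and advances.
            Key::Char(c) if self.pending.starts_with(c) => {
                self.pending.remove(0);
                Some(self)
            }
            // Backspace rejects the whole suggestion.
            Key::Backspace => None,
            // Typing anything else invalidates the suggestion too.
            Key::Char(_) => None,
        }
    }
}
```

A real implementation would sit behind an OS-level keyboard hook, but the state machine itself can stay this simple.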

It’s still an early prototype, but already working nicely.

Now I’m wondering, should I make it open source?

I’d love to hear your thoughts and ideas! 



Demo: https://x.com/yazaldefilimone/status/1907832759944441919


Pauline P. Narvas (@pawlean) · 2025-04-03 · ♥ 4

[quote="Yazalde Filimone, post:1, topic:8023, username:yazaldefilimone"]
Now I’m wondering, should I make it open source?
[/quote]

Big fan of open source, so my default is always to just open source it. 😆

ICYMI @kapehe also just shipped our OSS program → 

https://vercel.com/docs/open-source-program


Yazalde Filimone (@yazaldefilimone) · 2025-04-14 · ♥ 1

hi, over the last 7 days I was thinking... how do you give it context beyond just what the user types, right? it can't really be your second brain if it doesn't recall what you've seen before...

so, in this second prototype I introduced VISION – it literally "sees" what you see on your screen.

now you can chat with your own memory. hope you like it...
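Conceptually, "chat with your own memory" just needs a store of screen snapshots plus retrieval. Here's a toy Rust sketch of that idea using naive word overlap — my assumption of the shape, not the real implementation, which would use embeddings (e.g. the `nomic-embed-text` model) instead:

```rust
/// Stores text captured from screen snapshots and retrieves
/// the snapshot most relevant to a chat query.
struct Memory {
    snapshots: Vec<String>,
}

impl Memory {
    fn new() -> Self {
        Self { snapshots: Vec::new() }
    }

    /// Record one snapshot's text (lowercased for matching).
    fn remember(&mut self, text: &str) {
        self.snapshots.push(text.to_lowercase());
    }

    /// Return the stored snapshot sharing the most query words.
    fn recall(&self, query: &str) -> Option<&String> {
        let words: Vec<String> = query
            .to_lowercase()
            .split_whitespace()
            .map(str::to_string)
            .collect();
        self.snapshots.iter().max_by_key(|snap| {
            words.iter().filter(|w| snap.contains(w.as_str())).count()
        })
    }
}
```

Swapping word overlap for cosine similarity over embeddings is the obvious upgrade path, and it keeps everything local.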


@pawlean just open-sourced it:
https://github.com/yazaldefilimone/ghost.ai


BestCodes (@bestcodes) · 2025-04-14 · ♥ 3

This is so cool! I love the idea so much!

[quote="Yazalde Filimone, post:1, topic:8023, username:yazaldefilimone"]
using ollama to run the llm locally
[/quote]

Will it support Gemini models too? I would want that to be an option (Gemini 2.0 is fast and multimodal).
If you are planning to popularize this, variety will help a lot. When you make something that's “bring your own model”, people like to see that it can integrate easily with whatever they're already using.
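The "bring your own model" seam is easy to leave open in Rust with a trait the core loop calls, one impl per backend. A hedged sketch — the names are illustrative, not from the actual ghost.ai codebase:

```rust
/// What the autocomplete loop needs from any model backend.
trait Completer {
    fn complete(&self, context: &str) -> String;
}

struct OllamaCompleter {
    model: String,
}

struct GeminiCompleter {
    model: String,
    api_key: String,
}

impl Completer for OllamaCompleter {
    fn complete(&self, context: &str) -> String {
        // Stub: a real impl would POST to the local Ollama HTTP API.
        format!("[{} via ollama] {}", self.model, context)
    }
}

impl Completer for GeminiCompleter {
    fn complete(&self, context: &str) -> String {
        // Stub: a real impl would call the Gemini REST API with self.api_key.
        let _ = &self.api_key;
        format!("[{} via gemini] {}", self.model, context)
    }
}

/// The loop only ever sees the trait object, never a concrete backend.
fn suggest(completer: &dyn Completer, context: &str) -> String {
    completer.complete(context)
}
```

With that seam in place, adding Gemini (or anything else) is just one more impl block.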

I would suggest the [AI SDK](https://sdk.vercel.ai) for a unified interface, but I'm pretty sure it doesn't support Rust.
@pawlean Vercel should make a Rust crate for the AI SDK :smiley:


Yazalde Filimone (@yazaldefilimone) · 2025-04-15 · ♥ 1

thanks @bestcodes, the next version will be amazing

```toml

name = "Yazalde Filimone"
language = "pt"

[llm]
enabled = true
provider = "openai"   # "ollama", "anthropic", "google" etc.
api_key = "sk-xxx"
model = "gpt-4o-mini"

[llm.embed]
enabled = true
model = "nomic-embed-text"

[llm.autocomplete]
enabled = true
model = "gpt-4o-mini-2024-07-18"

[vision]
enabled = false

# [hear]
# enabled = false
# model = "whisper"

[autocomplete]
trigger_key = "Tab"
cancel_key = "Backspace"
cancel_behavior = "full"
apps_ignored = ["code", "zed"]

```
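For what it's worth, the `[autocomplete]` table above maps naturally onto Rust types. A sketch of how I'd imagine it — field names taken from the TOML, the `Partial` variant is my assumption, and none of this is the actual ghost.ai source:

```rust
/// What Backspace does to a pending suggestion.
#[derive(Debug, PartialEq)]
enum CancelBehavior {
    Full,    // drop the entire suggestion at once
    Partial, // assumed variant: drop one suggested char at a time
}

struct AutocompleteConfig {
    trigger_key: String,
    cancel_key: String,
    cancel_behavior: CancelBehavior,
    apps_ignored: Vec<String>,
}

impl AutocompleteConfig {
    /// Skip suggestions inside apps that have their own completion (editors).
    fn is_ignored(&self, app: &str) -> bool {
        self.apps_ignored.iter().any(|a| a == app)
    }
}
```

In practice you'd derive `serde::Deserialize` on these and load the TOML with the `toml` crate.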


BestCodes (@bestcodes) · 2025-04-15 · ♥ 2

@yazaldefilimone Wow! This is going to be so cool!
I'm going to try this on my Ubuntu laptop later to see if I can contribute to the Linux experience. I think the windowing system on Linux (X11 or Wayland) will change the implementation for Linux devices a bit, but it shouldn't be too hard.
If you are fine with it, I might make PRs to improve model performance. I do a lot of private AI projects and my specialty is making them fast :zany_face: 

Keep up the great work :rocket: