Your second brain at the keyboard

Hi everyone :waving_hand:

A few days ago, Guillermo Rauch shared some ideas on X. One of them was a keyboard autocomplete powered by LLMs that works across different windows and apps.

That really got me thinking. I wanted to build something like that, but without needing a virtual keyboard.

So I spent a few days exploring and built a small prototype in Rust, using Ollama to run the LLM locally.
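For context, talking to a local Ollama server from Rust is just an HTTP call to its `/api/generate` endpoint. A minimal sketch of that call, using `reqwest` and `serde_json` (the prompt wording and model name are placeholders, not what the prototype actually uses):

```rust
use serde_json::json;

// Ask a local Ollama server for a short continuation of `context`.
// Assumes `ollama serve` is running on the default port 11434.
// Cargo deps: reqwest (features = ["blocking", "json"]), serde_json.
fn suggest(context: &str) -> Result<String, reqwest::Error> {
    let body = json!({
        "model": "llama3.2",          // placeholder: any model pulled locally
        "prompt": format!("Continue this text:\n{context}"),
        "stream": false,              // one JSON object instead of a stream
    });
    let resp: serde_json::Value = reqwest::blocking::Client::new()
        .post("http://localhost:11434/api/generate")
        .json(&body)
        .send()?
        .json()?;
    Ok(resp["response"].as_str().unwrap_or_default().to_string())
}
```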

Here’s what it does (rough sketch of the key handling after the list):

  • It works anywhere (browser, terminal, WhatsApp…)
  • It knows where your cursor is
  • It can suggest text based on what you’re writing
  • You can accept the suggestion by typing
  • You can reject it with Backspace
  • It remembers what you wrote before, even in other apps
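To give a feel for the accept/reject flow: the suggestion lives in the app’s own state, and a global key listener decides its fate. A rough sketch, assuming the `rdev` crate for cross-app key events (the state handling here is illustrative, not the prototype’s actual code):

```rust
use rdev::{listen, Event, EventType, Key};

fn main() {
    // The LLM's current proposal, if any.
    let mut pending: Option<String> = None;

    // Global hook: receives key events from every app, not just our own window.
    // (macOS requires accessibility permissions for this.)
    listen(move |event: Event| match event.event_type {
        // Backspace rejects whatever was proposed.
        EventType::KeyPress(Key::Backspace) => pending = None,
        EventType::KeyPress(_) => {
            if let Some(s) = &pending {
                // Typing the suggestion's next character keeps it alive;
                // typing anything else discards it.
                if event.name.as_deref() != s.get(0..1) {
                    pending = None;
                }
            }
            // ...otherwise append the key to a rolling context buffer
            // and ask the model for a fresh suggestion when typing pauses.
        }
        _ => {}
    })
    .expect("could not start the global key listener");
}
```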

It’s still an early prototype, but it already works nicely.

Now I’m wondering: should I make it open source?

I’d love to hear your thoughts and ideas!

Demo: https://x.com/yazaldefilimone/status/1907832759944441919

6 Likes

Big fan of open source, so my default is always to just open source it. :laughing:

ICYMI @kapehesevilleja-verc also just shipped our OSS program →

4 Likes

hi, for the last 7 days I’ve been thinking… how do you give it context beyond just what the user types, right? it can’t really be your second brain if it doesn’t remember what you’ve seen before…

so, in this second prototype I introduced VISION: it literally “sees” what you see on your screen.

now you can chat with your own memory. hope you like it…
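For the curious: the vision part can be as simple as periodically grabbing a screenshot and handing it to a local multimodal model. Ollama’s `/api/generate` accepts base64-encoded images for models like llava, so a rough sketch might look like this (the actual screen capture is platform-specific and stubbed out here; the model and prompt are placeholders):

```rust
use base64::{engine::general_purpose::STANDARD, Engine};
use serde_json::json;

// Platform-specific capture (e.g. a crate like `screenshots` or `xcap`)
// would go here; the stub stands in for "PNG bytes of the current screen".
fn capture_screen_png() -> Vec<u8> {
    unimplemented!("grab the screen with your platform's API")
}

// Describe the current screen with a local multimodal model via Ollama.
fn describe_screen() -> Result<String, reqwest::Error> {
    let body = json!({
        "model": "llava",               // placeholder multimodal model
        "prompt": "Briefly describe what is on this screen.",
        "images": [STANDARD.encode(capture_screen_png())],
        "stream": false,
    });
    let resp: serde_json::Value = reqwest::blocking::Client::new()
        .post("http://localhost:11434/api/generate")
        .json(&body)
        .send()?
        .json()?;
    Ok(resp["response"].as_str().unwrap_or_default().to_string())
}
```

Paired with an embedding model (the config later in the thread uses nomic-embed-text), those descriptions become the searchable memory you can chat with.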

@pawlean just open:

1 Like

This is so cool! I love the idea so much!

Will it support Gemini models too? I would want that to be an option (Gemini 2.0 is fast and multimodal).
If you’re hoping to popularize this, model variety will help a lot. When you make something that’s “bring your own model”, people like to see that it can integrate easily with whatever they’re already using.

I would suggest the AI SDK for a unified interface, but I’m pretty sure it doesn’t support Rust.
@pawlean Vercel should make a Rust crate for the AI SDK :smiley:
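Until an AI SDK crate exists, one hypothetical way to stay “bring your own model” in Rust is a small provider trait that OpenAI, Ollama, Anthropic, and Gemini backends each implement; a sketch (the names are illustrative, not from the project):

```rust
/// Anything that can turn a prompt into a completion.
/// Each provider (OpenAI, Ollama, Anthropic, Google...) gets its own impl.
pub trait CompletionProvider {
    fn complete(&self, prompt: &str) -> Result<String, Box<dyn std::error::Error>>;
}

pub struct Ollama {
    pub model: String,
}

impl CompletionProvider for Ollama {
    fn complete(&self, _prompt: &str) -> Result<String, Box<dyn std::error::Error>> {
        // ...POST to http://localhost:11434/api/generate as sketched earlier...
        todo!()
    }
}

// The rest of the app only sees the trait, so switching to Gemini becomes
// a config change (provider = "google") rather than a rewrite.
fn autocomplete(llm: &dyn CompletionProvider, context: &str) -> String {
    llm.complete(context).unwrap_or_default()
}
```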

3 Likes

thanks @bestcodes, the next version will be amazing. here’s what the config looks like so far:


name = "Yazalde Filimone"
language = "pt"

[llm]
enabled = true
provider = "openai"   # "ollama", "anthropic", "google" etc.
api_key = "sk-xxx"
model = "gpt-4o-mini"

[llm.embed]
enabled = true
model = "nomic-embed-text"

[llm.autocomplete]
enabled = true
model = "gpt-4o-mini-2024-07-18"

[vision]
enabled = false

# [hear]
# enabled = false
# model = "whisper"

[autocomplete]
trigger_key = "Tab"
cancel_key = "Backspace"
cancel_behavior = "full"
apps_ignored = ["code", "zed"]

1 Like
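Side note: a config like this maps almost one-to-one onto Rust structs with `serde` + `toml`. A minimal sketch of the loading side (struct names are my guesses, not the project’s; tables not listed here, like `[vision]`, are simply ignored by serde’s defaults):

```rust
use serde::Deserialize;

// Field names mirror the TOML keys above.
#[derive(Deserialize)]
struct Config {
    name: String,
    language: String,
    llm: Llm,
    autocomplete: Autocomplete,
}

#[derive(Deserialize)]
struct Llm {
    enabled: bool,
    provider: String,
    api_key: String,
    model: String,
}

#[derive(Deserialize)]
struct Autocomplete {
    trigger_key: String,
    cancel_key: String,
    cancel_behavior: String,
    apps_ignored: Vec<String>,
}

fn load_config(path: &str) -> Config {
    let text = std::fs::read_to_string(path).expect("config file not found");
    toml::from_str(&text).expect("invalid config")
}
```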

@yazaldefilimone Wow! This is going to be so cool!
I’m going to try this on my Ubuntu laptop later to see if I can contribute to the Linux experience. I think the windowing system (X11 or Wayland) will change the implementation a bit on Linux, but it shouldn’t be too hard.
If you are fine with it, I might make PRs to improve model performance. I do a lot of private AI projects and my specialty is making them fast :zany_face:

Keep up the great work :rocket:

2 Likes