My goal is to give an AI agent access to Vercel’s bash tool, with abilities and skills that call APIs (e.g., the Slack API). That is, the agent executes commands in the Vercel Sandbox which make requests using API keys. My problem is that I don’t want these keys to ever be leaked, or even to be leakable.
I tried these options:
- Modify `/etc/hosts` to forward API routes to custom routes (on my server) and then authenticate the requests on my server.
  - Unsuccessful because we can’t edit `/etc/hosts` in the sandbox.
- Run a proxy server in Python within the sandbox (rough sketch below the list). This works, but with the following issues:
  - The LLM has to “know” about the proxy. This is problematic because skills don’t assume proxies unless I rewrite them to. Even a directive saying “use the proxy wherever you would call the xxx API” is not ideal.
  - The proxy server has to be started in the background before every execution. I don’t know what the effects of that are with respect to billing or latency; it might not be an issue.
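For concreteness, here is a minimal sketch of the kind of in-sandbox proxy I mean. Everything in it is a placeholder of mine, not anything Vercel provides: the gateway URL, the port, and the assumption that my own server (not the sandbox) holds the real API key and attaches it before calling Slack.

```python
# Rough sketch of option 2: a tiny proxy inside the sandbox that replays every
# request against an authenticating gateway on my server. The gateway URL and
# port are hypothetical; the real key never enters the sandbox in this scheme.
import http.server
import urllib.error
import urllib.request

GATEWAY = "https://my-auth-gateway.example.com"  # hypothetical authenticating server
HOP_BY_HOP = {"connection", "transfer-encoding", "keep-alive", "host"}


class ForwardingHandler(http.server.BaseHTTPRequestHandler):
    def _forward(self):
        # Read the incoming body, if any, and replay the request against the gateway.
        length = int(self.headers.get("Content-Length", 0))
        body = self.rfile.read(length) if length else None
        headers = {k: v for k, v in self.headers.items() if k.lower() not in HOP_BY_HOP}
        req = urllib.request.Request(GATEWAY + self.path, data=body,
                                     headers=headers, method=self.command)
        try:
            with urllib.request.urlopen(req) as resp:
                status, resp_headers, payload = resp.status, list(resp.headers.items()), resp.read()
        except urllib.error.HTTPError as err:
            # Pass upstream error responses through unchanged.
            status, resp_headers, payload = err.code, list(err.headers.items()), err.read()

        self.send_response(status)
        for k, v in resp_headers:
            if k.lower() not in HOP_BY_HOP:
                self.send_header(k, v)
        self.end_headers()
        self.wfile.write(payload)

    do_GET = do_POST = do_PUT = do_DELETE = do_PATCH = _forward


if __name__ == "__main__":
    # The agent's shell commands would hit http://127.0.0.1:8080/... instead of the real API.
    http.server.HTTPServer(("127.0.0.1", 8080), ForwardingHandler).serve_forever()
```

This is exactly where the two issues above bite: the skills have to be rewritten to target `127.0.0.1:8080`, and this process has to be running before every execution.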
The ideal situation, as I see it, is that the LLM calling tools doesn’t know it’s in a sandbox or behind a proxy. It just calls any APIs mentioned in its skills or directions exactly as the standard docs describe. It would also be nice to have security measures in place so we can pre-approve routes and deny traffic outside the approved list.
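To illustrate the “transparent + allowlist” half of that, here is a hedged sketch of a tiny CONNECT proxy that only tunnels to pre-approved hosts. It assumes the sandbox lets you run a long-lived background process and export `HTTPS_PROXY` (I haven’t verified either on Vercel), and on its own it only restricts where traffic can go; keeping the key out of the sandbox would still need the gateway idea above.

```python
# Sketch of an allowlisting CONNECT proxy. If HTTPS_PROXY points at it, curl,
# Python requests, and most CLIs route through it automatically, so the LLM
# can call the documented API URLs without knowing the proxy exists.
# The allowed-host set and port are assumptions, not anything Vercel provides.
import select
import socket
import threading

ALLOWED_HOSTS = {"slack.com"}  # pre-approved egress hosts (example)


def handle(client: socket.socket) -> None:
    request = client.recv(65536).decode("latin-1")
    first_line = request.split("\r\n", 1)[0]  # e.g. "CONNECT slack.com:443 HTTP/1.1"
    try:
        method, target, _ = first_line.split(" ", 2)
    except ValueError:
        client.close()
        return
    host, _, port = target.partition(":")

    # Deny anything that is not a CONNECT to an allowlisted host.
    if method != "CONNECT" or host not in ALLOWED_HOSTS:
        client.sendall(b"HTTP/1.1 403 Forbidden\r\n\r\n")
        client.close()
        return

    upstream = socket.create_connection((host, int(port or 443)))
    client.sendall(b"HTTP/1.1 200 Connection Established\r\n\r\n")

    # Blindly shuttle encrypted bytes in both directions until either side closes.
    sockets = [client, upstream]
    while True:
        readable, _, _ = select.select(sockets, [], [], 60)
        if not readable:
            break
        for src in readable:
            data = src.recv(65536)
            if not data:
                client.close()
                upstream.close()
                return
            (upstream if src is client else client).sendall(data)


def main() -> None:
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    server.bind(("127.0.0.1", 8888))
    server.listen()
    while True:
        conn, _ = server.accept()
        threading.Thread(target=handle, args=(conn,), daemon=True).start()


if __name__ == "__main__":
    main()
```

With `export HTTPS_PROXY=http://127.0.0.1:8888` set in the sandbox environment, the skills could keep calling `https://slack.com/api/...` verbatim while anything off the allowlist gets a 403.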
Curious what people think! I’m hoping there’s an obvious solution in front of me.
