Well, since v0 built the AI part without significant problems, I didn’t have to read the docs very much. I did stumble over some SDK version mismatches, where functions are named differently across versions (v0 used the wrong ones).
Some resources on how to deal with function time limits during AI generation would also be helpful. The initial generation at deploy time worked, since builds can take over 60s, but the ISR refresh failed because of the 60s function limit. I got around it by parallelizing the queries, but I wonder what happens if image generation has a bad day.
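For context, the parallelization I mean looks roughly like this. It's a minimal sketch: `generateSection` is a hypothetical stand-in for whatever AI SDK call actually runs, not a real API.

```typescript
// Hypothetical stand-in for a real AI call (e.g. generateText with a prompt).
async function generateSection(name: string): Promise<string> {
  return `content for ${name}`;
}

// Run independent generations concurrently: total wall time is roughly the
// slowest single call instead of the sum of all calls, which is what keeps
// an ISR refresh under a fixed function timeout.
async function generatePage(
  sections: string[],
): Promise<Record<string, string>> {
  const results = await Promise.all(sections.map((s) => generateSection(s)));
  return Object.fromEntries(sections.map((s, i) => [s, results[i]]));
}
```

The catch is exactly the "bad day" scenario: `Promise.all` only bounds total time by the slowest call, so one slow image generation still blows the budget.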
This can potentially lead to a lot of regeneration requests burning through tokens quickly. (It reminds me of the case a few years ago where a Vercel user burned through money with some badly configured Stripe webhooks and rauchg had to intervene.)
ISR caching of “heavy” AI generations combined with timeouts has the potential for a denial-of-wallet attack. Maybe your dev team can think about guardrails to avoid this failure mode.
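One cheap guardrail I could imagine is single-flight deduplication: collapse concurrent regeneration requests for the same page into one in-flight AI call, so a burst of cache misses burns one generation's worth of tokens instead of many. A minimal sketch (all names are illustrative, and an in-memory map only helps within one running instance):

```typescript
// Map from cache key to the promise of the generation currently running.
const inFlight = new Map<string, Promise<string>>();

// If a regeneration for `key` is already running, reuse its promise instead
// of starting (and paying for) another AI call.
async function regenerateOnce(
  key: string,
  generate: () => Promise<string>,
): Promise<string> {
  const existing = inFlight.get(key);
  if (existing) return existing;
  const p = generate().finally(() => inFlight.delete(key));
  inFlight.set(key, p);
  return p;
}
```

A real deployment would also want a hard budget (max regenerations per time window) behind this, since single-flight alone doesn't stop a slow steady drip of misses.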
Testing this in v0 might also be a problem, because it would probably run the generation on every iteration. Any ideas on how to test AI implementations in v0 with a mocked AI?
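What I'd hope for is something like hiding the model behind a small interface and swapping in a canned mock while iterating, so previews don't trigger paid generations. A hand-rolled sketch of that idea (the interface and env var are my invention, not an AI SDK or v0 API):

```typescript
// Minimal model interface so the app never calls a provider directly.
interface TextModel {
  generate(prompt: string): Promise<string>;
}

// Deterministic canned output: free and instant, good enough for UI iteration.
const mockModel: TextModel = {
  async generate(prompt: string) {
    return `[mock output for: ${prompt.slice(0, 30)}]`;
  },
};

// Flip via an env var so production deploys keep the real model.
function pickModel(realModel: TextModel): TextModel {
  return process.env.MOCK_AI === "1" ? mockModel : realModel;
}
```

The open question is whether v0 itself could honor something like this during its preview iterations, instead of each dev wiring it up by hand.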