robots.txt mistake

Our web servers are getting hammered by bots,
mostly ones scraping the entire web for "AI training".

We blocked some bots outright
and set "crawl delays" for others so they don't come back as often.
It seems we made a mistake while doing that.
Apparently all settings that are valid for ALL bots need to be repeated when setting something for one specific bot,
because a bot that has its own group in robots.txt follows only that group and ignores the "*" rules entirely.
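
In practice that means the shared rules have to be written out twice, roughly like this
(a simplified sketch, not our actual file; the path and delay values are made up):

    User-agent: *
    Crawl-delay: 1
    Disallow: /expensive-endpoint/

    # Googlebot matches its own group and ignores the "*" group above,
    # so anything that should still apply to it has to be repeated:
    User-agent: Googlebot
    Crawl-delay: 10
    Disallow: /expensive-endpoint/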

That meant that instead of just adjusting the crawl delay for Google,
we ended up disallowing it from crawling the entire site.
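
For what it's worth, this kind of mistake is easy to catch before a new robots.txt goes live:
feed the file to a parser and ask it what a given bot is allowed to fetch.
A rough sketch using Python's urllib.robotparser from the standard library
(the file content, the example.com URL, the path and the bot names are all made up):

    from urllib.robotparser import RobotFileParser

    # Hypothetical "broken" layout: rules for everyone, plus a Googlebot
    # group that only sets a crawl delay.
    ROBOTS_TXT = """\
    User-agent: *
    Crawl-delay: 1
    Disallow: /expensive-endpoint/

    User-agent: Googlebot
    Crawl-delay: 10
    """

    parser = RobotFileParser()
    parser.parse(ROBOTS_TXT.splitlines())

    # Ask the parser what each bot may do with the new file.
    for agent in ("SomeRandomBot", "Googlebot"):
        allowed = parser.can_fetch(agent, "https://example.com/expensive-endpoint/data")
        delay = parser.crawl_delay(agent)
        print(f"{agent}: may fetch -> {allowed}, crawl delay -> {delay}")

    # SomeRandomBot falls back to the "*" group and is disallowed,
    # while Googlebot follows only its own group, never sees the Disallow,
    # and is allowed to fetch the path.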