> The thing I appreciate most about the company is that it "walks the walk" in terms of distributing the benefits of AI. Cutting-edge models aren't reserved for some enterprise-grade tier with an annual agreement.
That is literally how OpenAI gets data for fine-tuning its models: by testing them on real users and letting them supply data and use cases. (Tool calling, computer use, thinking — all of these were championed by people outside, and OpenAI had the data.)
Man, Google's offerings are so inconsistent:
batch processing has been available on Vertex for a while now.
I don't really get why they have two different offerings in Vertex and Gemini; both are equally inaccessible.
It’s because Vertex is the “enterprise” offering that is HIPAA compliant, etc. That is why Vertex only has explicit prompt caching and not implicit, etc. Vertex usage is never used for training or model feedback, but Gemini API usage is. Basically the Gemini API is Google’s way of being able to move faster like OpenAI and the other foundation model providers while still having an enterprise offering. Go check Anthropic’s documentation: they even say if you have enterprise or regulatory needs, go use Bedrock or Vertex.
Vertex's offering of Gemini very much does do implicit caching, and that has always been the case [1]. The recent addition of implicit cache hit discounts also works on Vertex, as long as you don't use the `global` endpoint and instead hit one of the regional endpoints.
[1]: http://web.archive.org/web/20240517173258/https://cloud.goog..., "By default Google caches a customer's inputs and outputs for Gemini models to accelerate responses to subsequent prompts from the customer. Cached contents are stored for up to 24 hours."
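To make the global-vs-regional distinction concrete, here is a minimal sketch of how the two endpoint hosts differ. The host naming convention (region-prefixed vs. bare `aiplatform.googleapis.com` for `global`) follows Google's Vertex AI docs; the project ID and model name below are placeholders, not from the thread.

```typescript
// Hypothetical helper: build a Vertex AI generateContent URL for a given
// location. Per the comment above, implicit cache hit discounts apply when
// you hit a regional endpoint rather than the `global` one.
function vertexEndpoint(project: string, location: string, model: string): string {
  // Regional endpoints prefix the host with the region; `global` does not.
  const host = location === "global"
    ? "aiplatform.googleapis.com"
    : `${location}-aiplatform.googleapis.com`;
  return `https://${host}/v1/projects/${project}/locations/${location}` +
    `/publishers/google/models/${model}:generateContent`;
}

// Regional endpoint (eligible for implicit cache hit discounts):
console.log(vertexEndpoint("my-project", "us-central1", "gemini-2.0-flash"));
// Global endpoint (no implicit cache discount, per the comment above):
console.log(vertexEndpoint("my-project", "global", "gemini-2.0-flash"));
```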
You’d still be reasoning using symbols, language is inherently an extension of symbols and memes. Think of a person representing a complex concept in their mind with a symbol and using it for further reasoning
Noticing CF pushing for devs to use DO for everything over Workers these days. Even WebSocket connections on Workers get timed out after ~30s, and the recommended way is to use DO for them.
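The pattern the comment describes can be sketched roughly like this: terminate the WebSocket inside a Durable Object, where the server side of the connection isn't tied to a short-lived request. The class and handler names are made up for illustration, and the Workers runtime globals (`WebSocketPair`, the `webSocket` response field) are declared ambiently here since they only exist inside the Cloudflare runtime.

```typescript
// Ambient declarations for Cloudflare Workers runtime APIs (not available
// in plain Node); this keeps the sketch self-contained and type-checkable.
declare class WebSocketPair { 0: WebSocket; 1: WebSocket; }
type CfWebSocket = WebSocket & { accept(): void };
type CfResponseInit = ResponseInit & { webSocket?: WebSocket };

// Hypothetical Durable Object that owns a WebSocket. Because the socket
// lives in the DO rather than a Worker fetch handler, it can stay open
// past the ~30s idle limit mentioned above.
export class WsRoom {
  async fetch(request: Request): Promise<Response> {
    if (request.headers.get("Upgrade") !== "websocket") {
      return new Response("expected websocket", { status: 426 });
    }
    const pair = new WebSocketPair();
    const server = pair[1] as CfWebSocket;
    server.accept(); // DO takes ownership of the server half of the pair
    server.addEventListener("message", (ev: MessageEvent) => {
      server.send(`echo: ${ev.data}`); // trivial echo, just for illustration
    });
    const init: CfResponseInit = { status: 101, webSocket: pair[0] };
    return new Response(null, init);
  }
}
```

In a real deployment you would also bind the class in `wrangler.toml` and route upgrade requests to a DO instance; this fragment only shows the DO side of the handshake.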