Difference between Groq and AWS Bedrock?

For a customer need, I was researching AWS Bedrock, but I'm not sure how it differs from Groq. Both provide open-source models, right?

I'm aware that AWS Bedrock offers certain extra features, such as the ability to generate images, but how does that relate to text generation?

And if that's the difference, why the outrageous cost?

6 Likes

If you are currently using the AWS environment and don’t want to handle other service providers, Bedrock makes sense. In essence, they resemble an LLM broker.

4 Likes

Bedrock has a LOT of different features. It basically breaks down into a few components:

  • hosting open source and some closed source models (e.g. Amazon Titan)
  • finetuning, hosting private models, direct integration with AWS S3 and IAM for data hosting and security
  • built-in evaluation and benchmark datasets
  • monitoring, integration with AWS CloudWatch and CloudTrail, logging prompts and answers
  • Guardrails for managing, well, guardrails for generation, integrated with monitoring
  • Knowledge Bases, basically a serverless, fully managed RAG (by default based on AWS OpenSearch Serverless), with lots of integrations (files on S3 and connectors to e.g. Confluence)
  • Agents for building whole agent apps, integrated with e.g. AWS Lambda (running short serverless functions); also integrated with prompt flows, so that you can chain them
  • integrated with Amazon Q, with which you can build and host chatbots based on your models
  • all AWS certifications and compliance features, which may be necessary for some clients
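To make the model-hosting piece of the list above concrete, here is a minimal sketch of calling a hosted model through the `bedrock-runtime` API with `boto3`. It assumes you have AWS credentials configured and model access granted in the Bedrock console; the model ID and prompt are placeholders, not a recommendation:

```python
import json


def build_claude_body(prompt, max_tokens=256):
    """Build a request body in the Anthropic Messages format Bedrock expects."""
    return json.dumps({
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": max_tokens,
        "messages": [{"role": "user", "content": prompt}],
    })


def invoke(prompt, model_id="anthropic.claude-3-haiku-20240307-v1:0"):
    # Requires AWS credentials and model access enabled for this model ID.
    import boto3
    client = boto3.client("bedrock-runtime")
    resp = client.invoke_model(modelId=model_id, body=build_claude_body(prompt))
    # The response body is a stream of JSON; the generated text sits in "content".
    return json.loads(resp["body"].read())["content"][0]["text"]
```

Note that the request body format is model-family specific (Anthropic, Titan, Llama all differ), which is one of the quirks of Bedrock's lowest-level API.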

“Outrageous cost” depends on what you need. If you host a private copy of a model, or anything customized in general, you need the provisioned throughput mode, which also guarantees you a certain SLA. Also note that the price depends on the region.

In short, if you want AWS and all the enterprise features (in particular security and auditing), go for it. Otherwise there may be easier-to-use solutions.

3 Likes

Groq’s shtick is that they run on SRAM-based custom accelerators, which allows for very low latencies and high tokens/s. Otherwise, they’re much the same as all the other commodity inference providers like Fireworks, DeepInfra, et al.: they serve open-source models at a price point.
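From the client side these commodity providers all look nearly identical, because most of them (Groq included) expose an OpenAI-compatible chat completions endpoint. A rough sketch using only the standard library; the endpoint URL and model name reflect Groq's public docs at the time of writing and should be treated as assumptions that may change:

```python
import json
import urllib.request

# Groq's OpenAI-compatible endpoint (assumption; check current docs).
GROQ_URL = "https://api.groq.com/openai/v1/chat/completions"


def build_request(prompt, model="llama-3.1-8b-instant"):
    """Standard OpenAI-style chat payload; the same shape works on most providers."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }


def chat(prompt, api_key):
    # Needs a Groq API key; swapping GROQ_URL and the model name targets another provider.
    req = urllib.request.Request(
        GROQ_URL,
        data=json.dumps(build_request(prompt)).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["choices"][0]["message"]["content"]
```

That interchangeability is exactly why these providers compete almost purely on price and tokens/s.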

AWS Bedrock’s thing is similar, but as mentioned they also serve closed/semi-closed models from Anthropic and Cohere, and they’re really easy to integrate if you’re already in the AWS ecosystem, speaking from firsthand experience. Price-wise it's not particularly competitive though, because AWS. (AWS can be cheap, but you really have to know what you’re doing.) AWS serves models on GPUs or possibly their custom Inferentia chips, which have higher latency and lower throughput, but much more scale.

Basically AWS is pricing to AWS prices, while Groq and everyone else is in a race to the bottom burning VC money so third party inference costs have cratered. Groq just has some custom hardware that’s their unique shtick (and frankly Cerebras’s approach is much better).

2 Likes

If you are currently using the AWS environment and don’t want to handle other service providers, Bedrock makes sense. In essence, they resemble an LLM broker.

If you don’t mind me asking, is it possible to actually bring and run a totally new custom model in Bedrock? I mean a model that’s not architecturally like the foundation models (FMs), their fine-tuned versions, or Amazon Titan. If yes, where do I store the new model file?