In-database large language models (LLMs) greatly simplify the development of generative AI (GenAI) applications. You can benefit from generative AI quickly: there is no external LLM to select, no integration complexity or cost to weigh, and no need to worry about whether an external LLM is available in a particular data center. A brief sketch of what this looks like in practice follows the list below.
Key benefits:
- Build generative AI apps for a wide range of use cases across clouds
- Help reduce costs and risks
- No AI expertise required
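As a sense of how an in-database LLM is invoked, the sketch below loads a model and generates text entirely with SQL. The routine names (sys.ML_MODEL_LOAD, sys.ML_GENERATE), the model name, and the option keys are assumptions modeled on HeatWave's in-database ML routines; consult the HeatWave documentation for exact signatures.

    -- Minimal sketch: load an in-database LLM and generate text with SQL alone.
    -- Routine names, the model name, and option keys are assumptions.
    CALL sys.ML_MODEL_LOAD('mistral-7b-instruct-v1', NULL);
    SELECT sys.ML_GENERATE(
        'Summarize the benefits of in-database LLMs in two sentences.',
        JSON_OBJECT('task', 'generation', 'model_id', 'mistral-7b-instruct-v1')
    ) AS answer;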
HeatWave Vector Store lets you combine the power of LLMs with your proprietary data to get answers that are more accurate and contextually relevant than those from models trained only on public data. The vector store ingests documents in a variety of formats, including PDF, and stores them as embeddings generated by an embedding model. For a given user query, the vector store identifies the most similar documents by running a similarity search against the stored embeddings and the embedded query. Those documents are then used to augment the prompt given to the LLM so that it returns an answer with more relevant context for your business.
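This retrieval-augmented flow might look like the following sketch. The sys.ML_RAG routine and its option keys reflect HeatWave's RAG interface as we understand it, while the vector store table demo_db.faq_embeddings is a hypothetical name used for illustration.

    -- Sketch of retrieval-augmented generation: retrieve the most similar
    -- document chunks from the vector store, then let the LLM answer with them.
    -- Option keys and the table name are assumptions for illustration.
    SET @options = JSON_OBJECT(
        'vector_store', JSON_ARRAY('demo_db.faq_embeddings'),  -- hypothetical vector store table
        'n_citations', 3);                                     -- retrieve the 3 most similar chunks
    CALL sys.ML_RAG('What is our refund policy for annual plans?', @answer, @options);
    SELECT JSON_PRETTY(@answer);  -- generated answer plus the retrieved citations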
Vector processing is accelerated by HeatWave's in-memory, scale-out architecture. HeatWave supports a new native VECTOR data type, letting you use standard SQL to create, process, and manage vector data.
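For instance, creating and querying vector data with standard SQL might look like the sketch below. The VECTOR type and STRING_TO_VECTOR function follow MySQL's vector support; the DISTANCE function and its 'COSINE' metric argument are assumptions about HeatWave's vector functions, so verify the exact names in the documentation.

    -- Create a table with a native VECTOR column and run a similarity search.
    CREATE TABLE doc_embeddings (
        id        INT PRIMARY KEY,
        doc_text  TEXT,
        embedding VECTOR(3)  -- real embeddings typically have hundreds of dimensions
    );

    INSERT INTO doc_embeddings VALUES
        (1, 'refund policy',  STRING_TO_VECTOR('[0.12, 0.98, 0.05]')),
        (2, 'shipping terms', STRING_TO_VECTOR('[0.91, 0.10, 0.33]'));

    -- Rank stored documents by cosine distance to a query embedding.
    SELECT id, doc_text,
           DISTANCE(embedding, STRING_TO_VECTOR('[0.10, 0.95, 0.07]'), 'COSINE') AS dist
    FROM doc_embeddings
    ORDER BY dist
    LIMIT 1;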
A new HeatWave Chat interface lets you hold contextual, natural-language conversations augmented by the proprietary documents in your vector store.
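A conversation through HeatWave Chat might look like the sketch below. The sys.HEATWAVE_CHAT routine and the @chat_options session variable are assumptions based on HeatWave's chat interface, and the schema and table names are hypothetical.

    -- Sketch of a contextual chat: state such as which vector store tables to
    -- search is carried in a session variable; follow-ups reuse prior context.
    -- Routine, variable, and table names are assumptions for illustration.
    SET @chat_options = JSON_OBJECT('tables', JSON_ARRAY(
        JSON_OBJECT('schema_name', 'demo_db', 'table_name', 'faq_embeddings')));
    CALL sys.HEATWAVE_CHAT('Which plans include priority support?');
    CALL sys.HEATWAVE_CHAT('And how do I upgrade to one of those plans?');  -- follow-up question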