LLM endpoints policies
Question

LLM endpoints policies

by
Topo Gigio
Created on 2025-10-22 09:09:54 (edited on 2025-11-04 08:47:54) in AI and Machine Learning OVHcloud

Hi,

I'm planning to develop a chat assistant for my customer.
I'd like to use OVH's AI Endpoints service to provide answers to users based on the information contained in the RAG.
The RAG's content concerns information on ingredients for formulating food supplements.
The user could ask about the recommended dosages for a particular ingredient or the best ingredients to use for certain health conditions.
The purpose of this assistant is not to provide personalized medical advice, but rather to provide information that helps the user formulate commercial products (food supplements).

Based on this information, can you tell me whether this project violates the OVH AI Endpoints service usage policies?


Thanks