palantir_models.models GenericCompletionLanguageModel Documentation

Good day everyone!
I have a very basic question:

Where can I find detailed documentation for the palantir_models.models module and its classes/functions (e.g., GenericCompletionLanguageModel)?

For instance, if I am using import spacy in my Jupyter notebook, I can navigate to https://spacy.io/ and get all the information I need. Thank you all for any feedback.

Hi @AtWorkDS :waving_hand:
Docs for palantir_models can be found in our documentation here:

Additionally, there is the Model Catalog application, which has detailed information on things like context window, knowledge cutoff date, and other details about the LLMs that Palantir provides.

@Jim Thank you for the follow-up. I am aware of the links you shared and the Model Catalog, but neither provides information about which model parameters (e.g., temperature, functions, top_p, max_tokens) can be used when leveraging GenericCompletionLanguageModel. Please let me know if I am missing something, thank you.

Hello,

I have also been looking for the GenericCompletionLanguageModel documentation. I have asked several people about this, and no one has given me a clear answer. I'm assuming it doesn't exist as of today. However, perhaps I am mistaken. Here is what I have gathered so far, in case it is of use to you. I ran this code in a Jupyter notebook.


from palantir_models.models import GenericCompletionLanguageModel
from language_model_service_api.languagemodelservice_api_completion_v3 import GenericCompletionRequest

# Look up the model by name, build a request from the prompt, and send it
model = GenericCompletionLanguageModel.get("Llama_3_3_70b_Instruct")
prompt = "Why is the sky blue?"
request = GenericCompletionRequest(prompt)
llama_response = model.create_completion(request)

# Print the request to see which fields it actually carries
print(f"This is request: {request}")

The output from this code is "This is request: GenericCompletionRequest(prompt='Why is the sky blue?', temperature=None, max_tokens=None, stop_sequences=None)".

I think this means that the only parameters we can pass to GenericCompletionRequest, besides the prompt itself, are temperature, max_tokens, and stop_sequences.

Here is an example adjusting the parameters:

request = GenericCompletionRequest(prompt, temperature=0, stop_sequences=['.'], max_tokens=150)
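If it helps, here is a minimal sketch of sending that adjusted request with the model object from the earlier snippet and inspecting what comes back. I print the response object and its public attributes rather than guessing at field names, since I have not found those documented either.

# Reuse model and prompt from the earlier snippet
request = GenericCompletionRequest(prompt, temperature=0, stop_sequences=['.'], max_tokens=150)
response = model.create_completion(request)

# Print the whole response, then list its public attributes to see which fields it exposes
print(response)
print([name for name in dir(response) if not name.startswith("_")])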

Hopefully, in the coming months, Palantir will add the ability to pass more parameters to GenericCompletionRequest, such as a JSON mode. What other things are you hoping to learn about GenericCompletionLanguageModel?

@jenny thank you for the follow-up. I ended up using dir/help in Jupyter in Code Workspaces to learn about these classes; it is hard because some are hybrid products between Palantir and third-party models. Like you, I wish more documentation was available.
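In case it helps anyone else, this is roughly what that introspection looks like in a notebook. It only uses the classes already imported earlier in the thread plus the standard inspect module, so nothing here relies on undocumented APIs.

import inspect
from palantir_models.models import GenericCompletionLanguageModel
from language_model_service_api.languagemodelservice_api_completion_v3 import GenericCompletionRequest

# List the public attributes/methods exposed by each class
print([name for name in dir(GenericCompletionLanguageModel) if not name.startswith("_")])
print([name for name in dir(GenericCompletionRequest) if not name.startswith("_")])

# The constructor signature of the request class shows which parameters it accepts
print(inspect.signature(GenericCompletionRequest.__init__))

# Full docstrings, if any are bundled with the package
help(GenericCompletionRequest)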

@AtWorkDS Using dir/help is a great idea. Thanks for the heads up on that.