My logic function sometimes outputs the rid of the object set I've given it as a data input; roughly every other run it works. In the prompt I explicitly tell it not to output an rid and give it a couple of examples of what it should output, yet it still sometimes returns an rid. Any hints?
The attached screenshots show demo and publicly available data only: one is a run where it works, the other where it outputs the rid.
Thanks–
It seems like both screenshots are identical. What is your desired output?
Oh thanks Taylor, here is an example of the desired output.
It’s going to be challenging to give helpful suggestions without seeing the prompt, but here are some things you might try:
- It seems like this is some sort of categorization problem, along the lines of "did the engineer make the correct assignment, and if not, what should it be?". In that case, perhaps there's one object type that could represent the different categories and another that could represent evaluations. You could then constrain the output to be an ontology edit that creates an Evaluation referencing an enum-like Assignment Category, which rules out free-text answers like rids (see the first sketch after this list).
- Consider breaking that LLM block into two: one returns the object set you're interested in, and the second processes it. My gut feeling is that this is less likely to confuse the model. In general, I find that approach easier than iterating on one big prompt, because it makes the prompt engineering a simpler problem (see the second sketch below).
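On the first point, here's a minimal sketch of what I mean by constraining the output to an enum-like category. All of the names here (AssignmentCategory, Evaluation, parseEvaluation) are hypothetical illustrations, not real AIP or Foundry APIs:

```typescript
// Hypothetical sketch: constrain the model's answer to a fixed category enum
// rather than free text, so a rid can never be a valid output.

type AssignmentCategory = "CORRECT" | "WRONG_CHAPTER" | "NEEDS_REVIEW";

const VALID_CATEGORIES: ReadonlySet<string> = new Set([
  "CORRECT",
  "WRONG_CHAPTER",
  "NEEDS_REVIEW",
]);

interface Evaluation {
  objectTitle: string;          // human-readable key, never the rid
  category: AssignmentCategory; // the constrained classification
}

// Parse and validate the raw LLM text; throw (and retry upstream) if the
// model returned anything outside the allowed enum, e.g. a rid.
function parseEvaluation(objectTitle: string, raw: string): Evaluation {
  const candidate = raw.trim().toUpperCase();
  if (!VALID_CATEGORIES.has(candidate)) {
    throw new Error(`Model output "${raw}" is not a valid category`);
  }
  return { objectTitle, category: candidate as AssignmentCategory };
}
```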
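And on the second point, a rough sketch of the two-block split, assuming a generic callLLM stand-in for whatever model call the platform exposes (again, nothing here is a real API):

```typescript
// Hypothetical two-block pipeline: the first call only selects which items
// to look at, the second only classifies them.

type LLMCall = (prompt: string) => Promise<string>;

async function selectThenClassify(
  callLLM: LLMCall,
  items: string[],
): Promise<Map<string, string>> {
  // Block 1: narrow the object set. Keep this prompt focused on selection only.
  const selectionPrompt =
    `From the following list, return only the item titles that need review, ` +
    `one per line:\n${items.join("\n")}`;
  const selected = (await callLLM(selectionPrompt))
    .split("\n")
    .map((s) => s.trim())
    .filter((s) => s.length > 0);

  // Block 2: classify each selected item. This prompt never sees the full
  // object set, so there is less surface area for the model to echo back.
  const results = new Map<string, string>();
  for (const item of selected) {
    const category = await callLLM(
      `Classify "${item}" into exactly one category: CORRECT, ` +
        `WRONG_CHAPTER, or NEEDS_REVIEW. Reply with the category only.`,
    );
    results.set(item, category.trim());
  }
  return results;
}
```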
Thanks, your first bullet is right: that's exactly the setup I have.
I'm always interested in the same object set (the full list of ~90 items).
I will try breaking it into several blocks though, thanks for the idea.
Hi Taylor, I managed to make it more robust and here’s how:
- As you suggested, I broke the prompt down into several blocks.
- I removed the object set as data input: the ATA chapters (an aviation industry standard) are actually known to GPT, so I didn't need to provide them (rough sketch of the leaner prompt below).
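For anyone finding this later, here's roughly what the leaner final block looks like; callLLM is again a hypothetical stand-in, not a real API:

```typescript
// Hypothetical sketch of the leaner prompt: since the model already knows
// the ATA chapter numbering, the prompt no longer carries the ~90-item
// object set as data input.

async function classifyAtaChapter(
  callLLM: (prompt: string) => Promise<string>,
  description: string,
): Promise<string> {
  const prompt =
    `You are classifying aircraft maintenance findings. ` +
    `Return only the two-digit ATA chapter number (e.g. "32" for ` +
    `Landing Gear) that best matches this description. ` +
    `Do not return any identifiers or rids.\n\nDescription: ${description}`;
  return (await callLLM(prompt)).trim();
}
```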
Thanks for your input.
Stef–