Description
When using ContextRelevancy():
Correct Behavior
# INPUTS
question = "List 3 movies about sci-fi in the genre of fiction."
context = ['ex machina', 'i am mother', 'mother/android']
answer = "These three films explore the complex relationship between humans and artificial intelligence. In 'Ex Machina,' a programmer interacts with a humanoid AI, questioning consciousness and morality. 'I Am Mother' features a girl raised by a robot in a post-extinction world, who challenges her understanding of trust and the outside world when a human arrives. 'Mother/Android' follows a pregnant woman and her boyfriend navigating a post-apocalyptic landscape controlled by hostile androids, highlighting themes of survival and human resilience."
# OUTPUTS
score = 0.9459459459459459
metadata = {'relevant_sentences': [{'sentence': 'ex machina', 'reasons': []}, {'sentence': 'i am mother', 'reasons': []}, {'sentence': 'mother/android', 'reasons': []}]}
Incorrect Behavior
# INPUTS
question = "3, sci-fi, fiction, movies"
# same context and answer
# ERROR
ValueError("score (9.81081081081081) must be between 0 and 1")
That ValueError should be handled within ContextRelevancy() itself rather than propagating to the caller.
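Until the metric handles this internally, a caller-side workaround is to wrap the scoring call, clamp the raw value into [0, 1], and treat the out-of-range ValueError as a failed row instead of letting it abort the run. The sketch below is an assumption about how one might do this; `score_fn` stands in for whatever call produces the raw score in your setup and is not a specific library API.

```python
from typing import Callable, Optional

def safe_score(score_fn: Callable[[], float]) -> Optional[float]:
    """Run a metric call, clamp its score to [0, 1], and swallow the
    out-of-range ValueError instead of letting it abort the evaluation."""
    try:
        raw = score_fn()
    except ValueError as exc:
        # e.g. ValueError("score (9.81081081081081) must be between 0 and 1")
        print(f"context relevancy failed: {exc}")
        return None
    # Defensive clamp in case the raw ratio exceeds 1.
    return min(max(raw, 0.0), 1.0)

# Usage (hypothetical call shown for illustration only):
# score = safe_score(lambda: ContextRelevancy().score(row))
```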