
Context Relevancy issue with score not between 0 and 1. #80

@jp-agenta

Description


When using ContextRelevancy():

Correct Behavior

# INPUTS
question = "List 3 movies about sci-fi in the genre of fiction."
context = ['ex machina', 'i am mother', 'mother/android']
answer = "These three films explore the complex relationship between humans and artificial intelligence. In 'Ex Machina,' a programmer interacts with a humanoid AI, questioning consciousness and morality. 'I Am Mother' features a girl raised by a robot in a post-extinction world, who challenges her understanding of trust and the outside world when a human arrives. 'Mother/Android' follows a pregnant woman and her boyfriend navigating a post-apocalyptic landscape controlled by hostile androids, highlighting themes of survival and human resilience."

# OUTPUTS
score = 0.9459459459459459
metadata = {'relevant_sentences': [{'sentence': 'ex machina', 'reasons': []}, {'sentence': 'i am mother', 'reasons': []}, {'sentence': 'mother/android', 'reasons': []}]}

Incorrect Behavior

# INPUTS
question = "3, sci-fi, fiction, movies"
# same context and answer

# ERROR
ValueError("score (9.81081081081081) must be between 0 and 1")

That error should be handled inside ContextRelevancy() itself rather than propagated to the caller, since the metric is defined to return a score between 0 and 1.
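
For illustration: the repeating decimals suggest both runs share the denominator 37 (0.9459... = 35/37, 9.8108... = 363/37), so the numerator appears to exceed the denominator on the terse question. Below is a minimal sketch of a defensive guard, assuming the score is a ratio of counted items; the helper name safe_relevancy_score is hypothetical and this is not ContextRelevancy()'s actual implementation.

# Illustrative sketch only; not the library's actual implementation.
# Assumes the score is a ratio of counted items, which can overshoot 1.0
# when the upstream LLM counts more "relevant" items than actually exist
# in the context.

def safe_relevancy_score(relevant_count: int, total_count: int) -> float:
    """Return a relevancy ratio guaranteed to lie in [0, 1]."""
    if total_count <= 0:
        return 0.0
    # Clamp the numerator so a malformed LLM response cannot push the
    # ratio above 1.0.
    relevant_count = min(max(relevant_count, 0), total_count)
    return relevant_count / total_count

# With the values implied by the two runs above (shared denominator 37):
print(safe_relevancy_score(35, 37))   # 0.9459459459459459  (correct case)
print(safe_relevancy_score(363, 37))  # 1.0 instead of a ValueError

Alternatively, ContextRelevancy() could log a warning and cap the score at 1.0 so an evaluation run does not abort on a single malformed response.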
