Feature - Added new unit_test_writer #104
base: development
Conversation
Here are a couple of my thoughts:
I'm assuming you'd want users to supply their own arguments from another script, in which case the `argparse` import and command-line wrapper below can be skipped:

```python
import argparse
import json


def main(csv_path: str, files: list, function_name: str, column_type_override: dict) -> None:
    """Initialise configuration and process CSV files for unit testing.

    This function sets up the configuration with paths, filenames, and function
    names, and then calls `process_dataframe` to handle the CSV files and
    generate the test code.

    Parameters
    ----------
    csv_path : str
        The path to the directory containing the CSV files.
    files : list
        A list of filenames to process.
    function_name : str
        The name of the function to generate tests for.
    column_type_override : dict
        A dictionary to override column types.

    Returns
    -------
    None
    """
    config = Config(
        csv_path=csv_path,
        files=files,
        function_name=function_name,
        column_type_override=column_type_override,
    )
    process_dataframe(config)


def run_from_command_line():
    parser = argparse.ArgumentParser(description="Process CSV files for unit testing.")
    parser.add_argument("--csv_path", type=str, required=True, help="Path to the CSV files directory.")
    parser.add_argument("--files", nargs="+", required=True, help="List of CSV filenames.")
    parser.add_argument("--function_name", type=str, required=True, help="Name of the function to generate tests for.")
    parser.add_argument("--column_type_override", type=str, required=True, help="Column type overrides in JSON format.")
    args = parser.parse_args()

    # Convert column_type_override from a JSON string to a dictionary.
    column_type_override = json.loads(args.column_type_override)
    main(args.csv_path, args.files, args.function_name, column_type_override)


# Example usage:
# if __name__ == "__main__":
#     run_from_command_line()
```

**Usage Instructions**

Option 1: Running from the command line. Users can run the script with their own parameters:

```shell
python -m rdsa_utils.helpers.unit_test_writer --csv_path "path/to/csv" --files "input1.csv" "expected_output.csv" "fail_output.csv" --function_name "new_function" --column_type_override '{"string": ["period", "reference"], "float": ["602"]}'
```

Option 2: Creating a custom script. Users can write their own Python script that calls `main` directly:

```python
from rdsa_utils.helpers.unit_test_writer import main

main(
    csv_path="path/to/csv",
    files=["input1.csv", "expected_output.csv", "fail_output.csv"],
    function_name="new_function",
    column_type_override={"string": ["period", "reference"], "float": ["602"]},
)
```

This approach provides flexibility and allows users to customise the parameters as needed.
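As a quick check of the override format above, `json.loads` turns the command-line JSON string into the dict that `main` expects:

```python
import json

# The same JSON string passed via --column_type_override in the example above.
override_json = '{"string": ["period", "reference"], "float": ["602"]}'
column_type_override = json.loads(override_json)
```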
AnneONS left a comment
Some initial comments; I'm now going to continue my review working inside VS Code :-)
AnneONS left a comment
A few more comments, mostly about how we identify the column types from the CSV. Otherwise I'm happy this is ready to go once we've addressed Dom's comments. I spoke to him about how to run it.
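On identifying types from the CSV: a minimal sketch of applying a `column_type_override` dict after pandas has inferred dtypes. This is an illustration only, not the PR's actual implementation, and the mapping from override keys to pandas dtypes is an assumption:

```python
from io import StringIO

import pandas as pd

# Toy CSV using the column names from the example config earlier in the thread.
csv_text = "period,reference,602\n202301,A1,1\n202302,B2,2\n"
df = pd.read_csv(StringIO(csv_text))

# Override pandas' inferred dtypes; the key-to-dtype mapping here is illustrative.
column_type_override = {"string": ["period", "reference"], "float": ["602"]}
pandas_dtypes = {"string": "string", "float": "float64"}
for type_name, columns in column_type_override.items():
    for column in columns:
        df[column] = df[column].astype(pandas_dtypes[type_name])
```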
Description
This PR introduces a standalone function that generates basic unit test code from a simple config dict, using CSV files as inputs. Running it produces a new .py file containing the test script.
Improvements available on request:
Peer review
Any new code includes all the following:
Review comments
Suggestions should be tailored to the code that you are reviewing. Provide context.
Be critical and clear, but not mean. Ask questions and set actions.
These might include:
that it is likely to interact with?)
works correctly? Are there additional edge cases / negative tests to be considered?
Further reading: code review best practices