- Generate `Flashlight.json` (if not already done):

  ```bash
  python main.py
  ```
- Create the CSV data file:

  ```bash
  python3 create_csv_data.py
  ```

  This creates `flashlight_data.csv` with the JSON content in the `LARGEFIELD` column.
- Import the collection: `Flexo_Performance_Test.postman_collection.json`
- Create/select an environment with `flexo_endpoint=https://flexo.openmbee.org` (or your own endpoint)
- Run the collection with the CSV:
  - Open the Collection Runner
  - Go to the Data tab
  - Click Select File
  - Choose `flashlight_data.csv`
  - Set iterations:
    - "1. Create Project": 20 iterations
    - "2. Commit Project": 20 iterations
    - "3. Get Projects": 1 iteration
    - "4. Get Elements": 20 iterations
  - Click Run
How it works:
- The CSV file contains the large JSON in the `LARGEFIELD` column
- A pre-request script reads it: `pm.iterationData.get('LARGEFIELD')`
- It sets an environment variable: `pm.environment.set('envLARGE_FIELD', ...)`
- The request body uses `{{envLARGE_FIELD}}`
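If you need to inspect or regenerate the data file, the CSV can be produced with the standard library alone. Below is a minimal sketch of what `create_csv_data.py` presumably does; the 20-row count mirrors the iteration counts above and is an assumption, not taken from the script itself:

```python
import csv
import json

# Read the generated model.
with open("Flashlight.json", "r", encoding="utf-8") as f:
    model = json.load(f)

# Postman consumes one CSV row per iteration, so write one row per planned
# iteration with the whole JSON document serialized into the LARGEFIELD column.
with open("flashlight_data.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.DictWriter(f, fieldnames=["LARGEFIELD"])
    writer.writeheader()
    for _ in range(20):  # assumed to match the Collection Runner iteration count
        writer.writerow({"LARGEFIELD": json.dumps(model)})
```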
Alternatively, run the collection from the command line with Newman:

```bash
./run_with_newman.sh
```

This automatically loads `Flashlight.json` and runs all tests.
The `run_performance_tests.py` script provides comprehensive performance testing with sequential and concurrent modes, plus performance charts.
- Install dependencies (in the starforge conda environment):

  ```bash
  conda activate starforge
  pip install aiohttp matplotlib python-dotenv
  # OR
  pip install aiohttp matplotlib python-dotenv
  ```

- Create a `.env` file with your bearer token:

  ```bash
  echo "FLEXO_TOKEN=your_bearer_token_here" > .env
  ```
- Run tests (simplified; the token is loaded from `.env`):

  ```bash
  # Sequential mode only
  python run_performance_tests.py --mode sequential

  # Concurrent mode only
  python run_performance_tests.py --mode concurrent

  # Both modes (default)
  python run_performance_tests.py
  ```
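The token handling above relies on `python-dotenv`. A minimal sketch of how a script can pick up `FLEXO_TOKEN` from `.env` and turn it into a request header; the Bearer-header construction is an assumption about how the script authenticates:

```python
import os
from dotenv import load_dotenv

# Load FLEXO_TOKEN (and any other variables) from the local .env file.
load_dotenv()

token = os.getenv("FLEXO_TOKEN")
if not token:
    raise SystemExit("FLEXO_TOKEN is not set; add it to .env or pass --token")

# Assumed: the token is sent as a standard Bearer header on every request.
headers = {"Authorization": f"Bearer {token}"}
```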
- `--mode`: Test mode - `sequential`, `concurrent`, or `both` (default: `both`)
- `--iterations`: Number of iterations to run (default: 10)
- `--concurrency`: Concurrency level for concurrent mode (default: 5)
- `--endpoint`: Flexo API endpoint (default: `https://flexo.openmbee.org`)
- `--token`: Bearer token for authentication (optional - defaults to `FLEXO_TOKEN` from `.env`)
- `--model`: Path to test model JSON file (default: `Flashlight.json`)
- `--output`: Output directory for charts and results (default: `performance_results`)
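One way to read `--concurrency` is as a cap on in-flight requests; a common `aiohttp` pattern for this is an `asyncio.Semaphore`. The sketch below illustrates that pattern only and is not the script's actual implementation; the GET call is a placeholder rather than a real Flexo API request:

```python
import asyncio
import time
import aiohttp

async def run_concurrent(endpoint: str, headers: dict, iterations: int, concurrency: int):
    # The semaphore caps the number of requests in flight at `concurrency`.
    semaphore = asyncio.Semaphore(concurrency)

    async def one_request(session: aiohttp.ClientSession) -> float:
        async with semaphore:
            start = time.perf_counter()
            # Placeholder call; the real script exercises the project/commit/element requests.
            async with session.get(endpoint, headers=headers) as resp:
                await resp.read()
            return time.perf_counter() - start

    async with aiohttp.ClientSession() as session:
        tasks = [one_request(session) for _ in range(iterations)]
        return await asyncio.gather(*tasks)

# Example: 10 iterations with at most 5 concurrent requests (the script's defaults).
# timings = asyncio.run(run_concurrent("https://flexo.openmbee.org", {}, 10, 5))
```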
Simple sequential test (token from `.env`):

```bash
python run_performance_tests.py --mode sequential
```

Sequential test with 20 iterations:

```bash
python run_performance_tests.py --mode sequential --iterations 20
```

Concurrent test with 50 iterations and a concurrency of 10:
```bash
python run_performance_tests.py --mode concurrent --iterations 50 --concurrency 10
```

Both modes with a custom output directory:
```bash
python run_performance_tests.py --mode both --iterations 15 --output my_results
```

Override the token from the command line (if needed):

```bash
python run_performance_tests.py --mode sequential --token YOUR_TOKEN
```

The script generates:
- Performance charts: Response times by operation, response times over time, and triple count over time
- JSON results: Detailed statistics and metrics in JSON format
- Console summary: Performance statistics printed to console
All outputs are saved to the specified output directory (default: `performance_results/`).
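As an illustration of the chart output, per-operation response times can be rendered with `matplotlib` along these lines. This is only a sketch; the actual figure layout, operation names, and file names produced by the script may differ:

```python
import os
import matplotlib

matplotlib.use("Agg")  # render to files without a display
import matplotlib.pyplot as plt

def save_response_time_chart(timings: dict[str, list[float]], output_dir: str = "performance_results"):
    """Plot one line per operation (e.g. create_project, commit_model, get_elements)."""
    os.makedirs(output_dir, exist_ok=True)
    fig, ax = plt.subplots()
    for operation, values in timings.items():
        ax.plot(range(1, len(values) + 1), values, marker="o", label=operation)
    ax.set_xlabel("Iteration")
    ax.set_ylabel("Response time (s)")
    ax.set_title("Response times by operation")
    ax.legend()
    fig.savefig(os.path.join(output_dir, "response_times.png"))  # hypothetical file name
    plt.close(fig)
```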
Each test iteration follows this workflow:
- Create Project: Create a new project
- Commit Model: Commit the test model to the project
- Get Elements: Retrieve elements from the project
- Get Projects: Retrieve all projects (once, at the end of the run)
- Track Triples: Monitor triple count over time
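A rough sketch of how one iteration of that workflow can be timed; `create_project`, `commit_model`, and `get_elements` below are hypothetical stand-ins for the real Flexo HTTP calls, not API names from the script:

```python
import time

# Hypothetical stand-ins for the authenticated requests the script actually sends.
def create_project() -> str:
    return "project-1"

def commit_model(project_id: str, model: dict) -> None:
    pass

def get_elements(project_id: str) -> list:
    return []

def timed(results: dict, operation: str, func, *args):
    """Run one operation, record its wall-clock duration, and return its result."""
    start = time.perf_counter()
    value = func(*args)
    results.setdefault(operation, []).append(time.perf_counter() - start)
    return value

def run_iteration(results: dict, model: dict) -> None:
    project_id = timed(results, "create_project", create_project)
    timed(results, "commit_model", commit_model, project_id, model)
    timed(results, "get_elements", get_elements, project_id)
```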
The script tracks:
- Response times for each operation (min, max, average)
- Triple count growth over time
- Performance metrics over time
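Reducing those recorded durations to min/max/average statistics is straightforward; a short sketch, reusing the `results` shape from the snippet above (operation name mapped to a list of durations in seconds):

```python
def summarize(results: dict[str, list[float]]) -> dict[str, dict[str, float]]:
    """Reduce recorded durations to min/max/average per operation."""
    summary = {}
    for operation, durations in results.items():
        if not durations:
            continue
        summary[operation] = {
            "min": min(durations),
            "max": max(durations),
            "avg": sum(durations) / len(durations),
        }
    return summary

# Example output shape:
# {"create_project": {"min": 0.12, "max": 0.31, "avg": 0.18}, ...}
```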