
Conversation

@RMLYC (Collaborator) commented Dec 17, 2025

  1. Add gemm baseline profile

@gemini-code-assist (Contributor) commented:

Summary of Changes

Hello @RMLYC, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request introduces a foundational baseline profiling capability for General Matrix Multiply (GEMM) operations within the benchmarking suite. The primary goal is to enable consistent performance measurement of GEMM using the PyTorch backend and to enhance the command-line interface for the GEMM benchmark, providing greater control over profiling parameters such as tuning and matrix transposition.

Highlights

  • GEMM Baseline Profiling: A new baseline_profile method has been added to the matmul_benchmark class in gemm.py. This method is designed to profile the reference GEMM program using the PyTorch backend, providing a standardized way to measure performance.
  • Enhanced GEMM Command Line Arguments: The profile_run.py script has been updated to support additional command-line arguments for the GEMM benchmark. It now includes --tune and conditional flags for --trans_A and --trans_B, allowing for more flexible and detailed profiling configurations (see the sketch after this list for one common way such flags are declared).
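
For illustration, flags of this shape are typically declared with argparse's store_true action. The sketch below is an assumption about what the benchmark's parser could look like; only the flag names come from the PR, everything else is hypothetical.

import argparse

# Hypothetical sketch of the GEMM benchmark CLI. The flag names mirror the
# PR description; the parser setup itself is assumed for illustration only.
parser = argparse.ArgumentParser(description="GEMM baseline profiling")
parser.add_argument("--dtype", type=str, default="float16", help="element dtype")
parser.add_argument("--tune", action="store_true", help="enable autotuning")
parser.add_argument("--trans_A", action="store_true", help="transpose operand A")
parser.add_argument("--trans_B", action="store_true", help="transpose operand B")
args = parser.parse_args()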

@gemini-code-assist bot (Contributor) left a comment

Code Review

This pull request adds baseline profiling for GEMM operations. The changes introduce a baseline_profile method in gemm.py and update the profile_run.py script to handle more GEMM parameters. My review includes feedback on improving the profiling script's output clarity and fixing a bug in how command-line flags are constructed from CSV inputs. I've also suggested making the autotuning flag configurable for better flexibility.

Comment on lines +21 to +28
    str(args_dict['dtype']), '--tune'
]

if args_dict.get('trans_A', False):
    cmd_args.append('--trans_A')

if args_dict.get('trans_B', False):
    cmd_args.append('--trans_B')

Severity: high

There are two issues with the current implementation for building command arguments:

  1. The --tune flag is added unconditionally. This prevents running benchmarks without autotuning. It's better to make this configurable via the input CSV.
  2. The checks for trans_A and trans_B are incorrect. csv.DictReader reads every value as a string, and any non-empty string is truthy in Python, so a check like if args_dict.get('trans_A', False) passes even when the CSV value is the string 'False', leading to incorrect benchmark execution. The check should explicitly compare against 'true' (demonstrated below).

The suggested change below fixes both issues by making all three flags (tune, trans_A, trans_B) conditional and using a safe string comparison. This will allow you to control these flags from your gemm.csv file.
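
To see the pitfall concretely before the fix, here is a tiny self-contained demonstration; the in-memory CSV content is made up for illustration, but the column names mirror the ones used above.

import csv
import io

# Simulated CSV input; csv.DictReader yields every value as a string.
rows = csv.DictReader(io.StringIO("trans_A,trans_B\nFalse,true\n"))
args_dict = next(rows)

# Buggy check: the non-empty string 'False' is truthy in Python.
print(bool(args_dict.get('trans_A', False)))                 # True  (wrong)

# Safe check: compare against the lowercase string 'true'.
print(args_dict.get('trans_A', 'false').lower() == 'true')   # False (correct)
print(args_dict.get('trans_B', 'false').lower() == 'true')   # True  (correct)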

Suggested change

-    str(args_dict['dtype']), '--tune'
-]
-
-if args_dict.get('trans_A', False):
-    cmd_args.append('--trans_A')
-
-if args_dict.get('trans_B', False):
-    cmd_args.append('--trans_B')
+    str(args_dict['dtype'])
+]
+
+if args_dict.get('tune', 'false').lower() == 'true':
+    cmd_args.append('--tune')
+
+if args_dict.get('trans_A', 'false').lower() == 'true':
+    cmd_args.append('--trans_A')
+
+if args_dict.get('trans_B', 'false').lower() == 'true':
+    cmd_args.append('--trans_B')
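
With that change, each flag can be driven per-row from the CSV. The row below is purely illustrative: only the dtype, tune, trans_A, and trans_B columns are visible in the diff, so the M/N/K headers are a guess at the remaining schema, not the repository's actual gemm.csv layout.

M,N,K,dtype,tune,trans_A,trans_B
4096,4096,4096,float16,true,false,true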

Comment on lines +38 to +41
def baseline_profile(self, *inputs, warmup=100, rep=100, device="cuda:0"):

    print("===== Profiling MatMul torch backend =====")
    return super().baseline_profile(self.ref_program, *inputs, backend="torch", warmup=warmup, rep=rep, device=device)

Severity: medium

The print statement here is redundant, as the parent baseline_profile method already prints a descriptive header (===== Profiling {backend} =====). Removing this line and the extra blank line will make the output cleaner and the code more concise.

Suggested change

 def baseline_profile(self, *inputs, warmup=100, rep=100, device="cuda:0"):
-
-    print("===== Profiling MatMul torch backend =====")
     return super().baseline_profile(self.ref_program, *inputs, backend="torch", warmup=warmup, rep=rep, device=device)
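
The parent method's behavior can only be inferred from the call site and the "===== Profiling {backend} =====" header quoted in the review. The sketch below is an assumption about its shape, not the repository's actual implementation; it shows why a subclass-level header print would duplicate output.

import torch

class BenchmarkBase:
    # Hypothetical sketch of the parent benchmark class, inferred from the
    # call above and the header format quoted in the review; not real code
    # from the repository.
    def baseline_profile(self, ref_program, *inputs, backend="torch",
                         warmup=100, rep=100, device="cuda:0"):
        print(f"===== Profiling {backend} =====")  # header already printed here
        for _ in range(warmup):
            ref_program(*inputs)
        torch.cuda.synchronize(device)
        start = torch.cuda.Event(enable_timing=True)
        end = torch.cuda.Event(enable_timing=True)
        start.record()
        for _ in range(rep):
            ref_program(*inputs)
        end.record()
        torch.cuda.synchronize(device)
        return start.elapsed_time(end) / rep  # mean latency in milliseconds

Because this header is printed for any backend, the subclass's hard-coded "MatMul torch backend" print adds nothing.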

