@@ -16,7 +16,7 @@ can use these flags to enable logging. Various types of flags exposed are:
 * :code:`debug_tuner`: print debug spew for the tuner multithreading behavior.


-In order to enable these flags, you need to call :code:`tc.GlobalDebugInit`
+In order to enable these flags, you need to call :code:`tc.SetDebugFlags`
 and set the proper flags to :code:`True`. All of these flags are :code:`boolean`
 flags that take values :code:`True` or :code:`False`.

@@ -28,14 +28,14 @@ Example usage
     import tensor_comprehensions as tc
     import torch

-    tc.GlobalDebugInit(debug_tc_mapper=True, debug_lang=False)
+    tc.SetDebugFlags(debug_tc_mapper=True, debug_lang=False)

     matmul = tc.define(tc.database['matmul']['lang'], name='matmul')
     mat1, mat2 = torch.randn(3, 4).cuda(), torch.randn(4, 5).cuda()
     out = matmul(mat1, mat2)

 In the above example, when the TC executes, we will see the TC mapper information.
-You can choose to set any number of flags, but :code:`tc.GlobalDebugInit` should
+You can choose to set any number of flags, but :code:`tc.SetDebugFlags` should
 only be called once.

 Printing TC generated CUDA code
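Since :code:`tc.SetDebugFlags` is called only once, any flags you want active must be combined in that single call. A minimal sketch, assuming only the flag names already listed on this page (:code:`debug_tc_mapper`, :code:`debug_tuner`, :code:`dump_cuda`)::

    import tensor_comprehensions as tc
    import torch

    # one combined call; every flag is a boolean keyword argument
    tc.SetDebugFlags(debug_tc_mapper=True, debug_tuner=True, dump_cuda=True)

    matmul = tc.define(tc.database['matmul']['lang'], name='matmul')
    mat1, mat2 = torch.randn(3, 4).cuda(), torch.randn(4, 5).cuda()
    out = matmul(mat1, mat2)  # mapper info, tuner spew, and CUDA code print here
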
@@ -50,7 +50,7 @@ and the generated CUDA code will be printed on command line.
     import tensor_comprehensions as tc
     import torch

-    tc.GlobalDebugInit(dump_cuda=True)
+    tc.SetDebugFlags(dump_cuda=True)

     matmul = tc.define(tc.database['matmul']['lang'], name='matmul')
     mat1, mat2 = torch.randn(3, 4).cuda(), torch.randn(4, 5).cuda()
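As in the first example, the CUDA code is only dumped when the TC actually runs, so the snippet presumably continues with a call such as::

    out = matmul(mat1, mat2)  # generated CUDA code is printed when the kernel executes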