Example 4 Examine a Machine Learning exercise
Live demo of this article on YouTube
Let's see how peforth assists in studying a TensorFlow neural network lesson on YouTube. You don't need to watch it right now; I'm just letting you know where it is.
Tensorflow 10 example3 def add_layer() function (neural network tutorials) https://www.youtube.com/watch?v=Vu_lIJ_Yexk
And its source code on GitHub, https://github.com/MorvanZhou/tutorials/blob/master/tensorflowTUT/tensorflow10_def_add_layer.py -- it's so short that we can show it all here:
from __future__ import print_function
import tensorflow as tf

def add_layer(inputs, in_size, out_size, activation_function=None):
    Weights = tf.Variable(tf.random_normal([in_size, out_size]))
    biases = tf.Variable(tf.zeros([1, out_size]) + 0.1)
    Wx_plus_b = tf.matmul(inputs, Weights) + biases
    if activation_function is None:
        outputs = Wx_plus_b
    else:
        outputs = activation_function(Wx_plus_b)
    return outputs
Run on its own, it does nothing: it only imports the tensorflow module and uses it to define a function. However, looking at the code, Weights seems to be a matrix initialized with random numbers. I am eager to check whether I am right, but there is nothing I can do about it.
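Without peforth, checking that guess means writing throw-away scaffolding around the lesson's code. A minimal sketch using the same TensorFlow 1.x API the lesson uses (the shape [3, 2] here is just an example):

import tensorflow as tf

w = tf.Variable(tf.random_normal([3, 2]))      # same idea as Weights
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print(sess.run(w))                         # a 3x2 matrix of random numbers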
Now with peforth, we can play a lot! Make a few modifications, like this:
<py> #11
# from __future__ import print_function #22
import tensorflow as tf

def add_layer(inputs, in_size, out_size, activation_function=None):
    Weights = tf.Variable(tf.random_normal([in_size, out_size]))
    biases = tf.Variable(tf.zeros([1, out_size]) + 0.1)
    Wx_plus_b = tf.matmul(inputs, Weights) + biases
    if activation_function is None:
        outputs = Wx_plus_b
    else:
        outputs = activation_function(Wx_plus_b)
    outport(locals()) #33
    return outputs
outport(locals()) #44
</py> \ #55
The #11, #22, ... #55 marks indicate the modified lines.
If you have read the previous peforth wiki pages, then you already know that <py>..</py> wraps Python code inside the peforth FORTH environment. The only new thing is the function outport(), which is called twice above, both times with locals() as the argument. What outport() does is convert the given Python dict into FORTH values. Let's play with it before any further explanation.
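Conceptually, outport() just walks the dict and defines a FORTH value for each entry. A rough, simplified sketch of the idea in plain Python (not peforth's actual implementation; the forth_values dict below merely stands in for peforth's word list):

forth_values = {}                      # stand-in for peforth's word list

def outport_sketch(d):
    """Turn every name/object pair in d (e.g. locals()) into a FORTH value."""
    for name, obj in d.items():
        forth_values[name] = obj       # peforth really defines a word per name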
Install peforth,
pip install peforth
Run peforth
python -m peforth
p e f o r t h v1.03
source code http://github.com/hcchengithub/peforth
Type 'peforth.ok()' enters forth interpreter, 'exit' to come back.
Peforth supports multiple-line input. Copy the above code with Ctrl-C, press Ctrl-D in the peforth interpreter to open a multiple-line input, press Ctrl-V to paste the code, then press Ctrl-D again to end it, as shown below (if Ctrl-V does not work, press Alt-Space > Edit > Paste instead, or change your DOS box settings):
p e f o r t h v1.03
source code http://github.com/hcchengithub/peforth
Type 'peforth.ok()' enters forth interpreter, 'exit' to come back.
OK ^D
<py> #11
# from __future__ import print_function #22
import tensorflow as tf

def add_layer(inputs, in_size, out_size, activation_function=None):
    Weights = tf.Variable(tf.random_normal([in_size, out_size]))
    biases = tf.Variable(tf.zeros([1, out_size]) + 0.1)
    Wx_plus_b = tf.matmul(inputs, Weights) + biases
    if activation_function is None:
        outputs = Wx_plus_b
    else:
        outputs = activation_function(Wx_plus_b)
    outport(locals()) #33
    return outputs
outport(locals()) #44
</py> \ #55
^D
OK
Now, let's see what happened:
OK words
0 code end-code \ // <selftest> </selftest> bye /// immediate stop
compyle trim indent -indent <py> </py> </pyV> words . cr help
interpret-only compile-only literal reveal privacy (create) : ; (
... snip ...
test-result [all-pass] *** OK dir keys --- add_layer tf
OK
See the last two FORTH words "add_layer" and "tf"? They are FORTH "value"s created by this line:
outport(locals()) #44
At that point, the tensorflow module "tf" and the function "add_layer" were only local variables inside the <py>..</py> Python section. This line brought them out to FORTH for our examination. Let's look at them from a few different angles:
Their default appearance, like that of normal Python objects, shows what they are:
OK tf . cr
<module 'tensorflow' from 'C:\\Users\\hcche\\AppData\\Local\\Programs\\Python\\Python36\\lib\\site-packages\\tensorflow\\__init__.py'>
OK add_layer . cr
<function compyle_anonymous.<locals>.add_layer at 0x00000208831EFBF8>
We can even see the 'help' of the function we defined! This is an intrinsic Python feature:
OK add_layer py: help(pop())
Help on function add_layer in module peforth.projectk:
add_layer(inputs, in_size, out_size, activation_function=None)
TensorFlow's help, on the other hand, is a real epic!
OK tf py: help(pop())
Help on package tensorflow:
NAME
tensorflow
DESCRIPTION
# Copyright 2015 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
PACKAGE CONTENTS
contrib (package)
core (package)
examples (package)
... snip ....
I always like to check the 'type' and 'dir' of things in Python to get a clearer picture of them:
OK tf type . cr
<class 'module'>
OK add_layer type . cr
<class 'function'>
OK tf dir . cr
['AggregationMethod', 'Assert', 'AttrValue', 'COMPILER_VERSION',
.... snip ....
'while_loop', 'write_file', 'zeros', 'zeros_initializer', 'zeros_like', 'zeta']
OK add_layer dir . cr
['__annotations__', '__call__', '__class__', '__closure__',
... snip ...
'__repr__', '__setattr__', '__sizeof__', '__str__', '__subclasshook__']
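These FORTH words map directly onto Python builtins; in plain Python the same checks would look like this:

import tensorflow as tf

print(type(tf))        # <class 'module'>
print(dir(tf)[:4])     # the first few attribute names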
It's fun to go on playing with each of the above attributes, but we want to look deeper into the add_layer() function. If you have read Example 2 inline python, then this line is no problem for you:
add_layer :: ([[1.,2.,3.]],3,2) \ execute add_layer()
\ inputs is [[1.,2.,3.]] which is a list of shape (1,3)
\ in_size is 3
\ out_size is 2
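For reference, the FORTH line above is roughly the plain-Python call below (as I understand it, the :: word applies the argument list to the object on the data stack and discards the return value; the assignment here is only to show the call):

inputs = [[1., 2., 3.]]              # a list of shape (1, 3)
outputs = add_layer(inputs, 3, 2)    # in_size=3, out_size=2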
Remember, we added this line inside the add_layer() function:
outport(locals()) #33
It turns every Python local variable visible at that moment into a FORTH value. Let's see:
OK words
0 code end-code \ // <selftest> </selftest> bye /// immediate
stop compyle trim indent -indent <py> </py> </pyV> words . cr help
... snip... dir keys --- add_layer tf outputs Wx_plus_b biases Weights
activation_function out_size in_size inputs tf
OK
Note! Many new words, from "outputs" through "inputs tf", appeared in the above words listing. The last "tf" actually triggered a "reDef tf" warning, which you may have noticed; it indicates that the TensorFlow module is visible inside the add_layer() function too. As the teacher, Morvan, told us in his earlier lessons, we need a TensorFlow Session to evaluate an object, so we create one like this:
tf :> Session() value sess // ( -- obj ) A TensorFlow session object
And we need to initialize TensorFlow's variables. I am new to this too, so I don't know why either; just do what the teacher said:
sess :> run(v('tf').global_variables_initializer()) tib. \ ==> None (<class 'NoneType'>)
Here, the tib. command is like . cr but it prints the entire command line and shows the type of the returned value, which is handy for studying.
v('tf') is the way to access the FORTH value 'tf' from within Python code. An equivalent way to do the same thing is this:
tf sess :> run(pop().global_variables_initializer()) tib. \ ==> None (<class 'NoneType'>)
OK
But we can't use 'tf' directly like this:
OK sess :> run(tf.global_variables_initializer()) tib.
Failed in </py> (compiling=False): name 'tf' is not defined
Body:
push(pop().run(tf.global_variables_initializer()))
OK
That is because we are no longer inside add_layer() nor the outer <py>...</py> block. However, through our FORTH values we can examine all the local variables that once existed in the add_layer() function:
out_size tib. \ ==> 2 (<class 'int'>)
biases tib. \ ==> <tf.Variable 'Variable_1:0' shape=(1, 2) dtype=float32_ref> (<class 'tensorflow.python.ops.variables.Variable'>)
biases sess :> run(pop()) tib. \ ==> [[ 0.1 0.1]] (<class 'numpy.ndarray'>)
Weights sess :> run(pop()) tib. \ ==>
[[ 0.66803414 0.62326759]
[-0.21582599 0.76987833]
[-1.09043634 0.86057591]] (<class 'numpy.ndarray'>)
Wx_plus_b sess :> run(pop()) tib. \ ==> [[-2.93492675 4.84475183]] (<class 'numpy.ndarray'>)
outputs sess :> run(pop()) tib. \ ==> [[-2.93492675 4.84475183]] (<class 'numpy.ndarray'>)
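As a sanity check, we can reproduce Wx_plus_b by hand with NumPy from the values printed above; it is the same matmul(inputs, Weights) + biases computed in add_layer() (a sketch; your random Weights will differ):

import numpy as np

inputs  = np.array([[1., 2., 3.]])
Weights = np.array([[ 0.66803414, 0.62326759],
                    [-0.21582599, 0.76987833],
                    [-1.09043634, 0.86057591]])
biases  = np.array([[0.1, 0.1]])

print(inputs @ Weights + biases)   # ~ [[-2.93492675  4.84475183]]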
Thanks to the FORTH language's simple syntax and the freedom it brings, I enjoy doing these exercises with peforth when studying TensorFlow, or even Python itself.
May the FORTH be with you!