
network-observability

Evaluate the observability of artificial neural network weights and biases from arbitrary measured neurons. The repository also includes code to visualize feedforward network structures in FCNN style.

This repository relies heavily on my other project for empirical observability, pybounds: https://github.com/vanbreugel-lab/pybounds

Examples

Start with network_observability_example.ipynb for a basic example using a PyTorch model.

This example creates a simple feedforward PyTorch model with linear output functions and randomized weights (not trained for any specific task). It then uses random sets of inputs to construct an observability matrix given measurements from specified neurons (output or hidden-layer neurons). The Fisher information matrix and its inverse are computed and used to assess the observability of each network weight and bias. The network is visualized so that the measured neurons are highlighted (green) and the connections are colored by their observability level (red = more observable, blue = less observable).

network_observability.png
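The notebook itself builds on pybounds, but the core idea can be sketched in plain PyTorch. The snippet below is a minimal, illustrative sketch (the layer sizes, the `measured_outputs` helper, and the scoring at the end are assumptions, not the repo's API): it stacks Jacobians of the measured neurons with respect to all weights and biases into an observability matrix, forms the Fisher information matrix, and inspects the diagonal of its (pseudo-)inverse.

```python
# Minimal sketch of the workflow described above, assuming a plain PyTorch
# implementation. Names, layer sizes, and the final score are illustrative.
import torch

torch.manual_seed(0)

# Simple feedforward model with a linear output layer and randomized
# (untrained) weights; the two output units play the role of measured neurons.
model = torch.nn.Sequential(
    torch.nn.Linear(3, 5),
    torch.nn.Tanh(),
    torch.nn.Linear(5, 2),
)

params = {name: p.detach().clone() for name, p in model.named_parameters()}

def measured_outputs(param_dict, x):
    # Functional forward pass so we can differentiate w.r.t. the parameters.
    return torch.func.functional_call(model, param_dict, (x,))

# Random input samples stand in for the "random sets of inputs" in the example.
inputs = torch.randn(20, 3)
n_samples, n_measured = inputs.shape[0], 2

# Jacobian of every measured neuron w.r.t. every weight/bias; each leaf has
# shape (n_samples, n_measured, *param_shape). Flatten and concatenate into an
# observability matrix O (rows: measurements, columns: parameters).
jac = torch.func.jacrev(measured_outputs)(params, inputs)
O = torch.cat(
    [j.reshape(n_samples * n_measured, -1) for j in jac.values()], dim=1
)

# Fisher information and its pseudo-inverse. Large diagonal entries of the
# inverse indicate poorly observable parameters; the score below is one
# possible way to rank weights/biases by observability.
fisher = O.T @ O
fisher_inv = torch.linalg.pinv(fisher)
observability_score = 1.0 / torch.sqrt(torch.diag(fisher_inv) + 1e-12)
print(observability_score)
```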

Also see network_visualization_example.ipynb for more in-depth visualization examples (illustrated in network_visualization.png).
