
Commit 34413eb

Merge pull request #148 from Atry/2.0-readme

Update README for DeepLearning.scala 2.0.0

2 parents 15f0197 + daf0e05

README.md

Lines changed: 57 additions & 22 deletions
[![Latest version](https://index.scala-lang.org/thoughtworksinc/deeplearning.scala/plugins-builtins/latest.svg)](https://index.scala-lang.org/thoughtworksinc/deeplearning.scala/plugins-builtins)
[![Scaladoc](https://javadoc.io/badge/com.thoughtworks.deeplearning/deeplearning_2.11.svg?label=scaladoc)](https://javadoc.io/page/com.thoughtworks.deeplearning/deeplearning_2.11/latest/com/thoughtworks/deeplearning/package.html)

**DeepLearning.scala** is a simple library for creating complex neural networks from object-oriented and functional programming constructs.

* DeepLearning.scala runs on the JVM and can be used either in standalone JVM applications or in Jupyter Notebooks.
* DeepLearning.scala is expressive. Various types of neural network layers can be created by composing `map`, `reduce` or other higher-order functions.
* DeepLearning.scala supports plugins. There are various plugins providing algorithms, models, hyperparameters or other features.
* All of the above features are statically type checked.

## Features

### Differentiable programming

Like other deep learning toolkits, DeepLearning.scala allows you to build neural networks from mathematical formulas. It supports [floats](https://javadoc.io/page/com.thoughtworks.deeplearning/deeplearning_2.11/latest/com/thoughtworks/deeplearning/plugins/FloatLayers.html), [doubles](https://javadoc.io/page/com.thoughtworks.deeplearning/deeplearning_2.11/latest/com/thoughtworks/deeplearning/plugins/DoubleLayers.html), [GPU-accelerated N-dimensional arrays](https://javadoc.io/page/com.thoughtworks.deeplearning/deeplearning_2.11/latest/com/thoughtworks/deeplearning/plugins/INDArrayLayers.html), and calculates derivatives of the weights in the formulas.
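
As a concrete illustration, a minimal formula-style model might look like the following sketch. It assumes the `Builtins` plugin set from the `plugins-builtins` module together with nd4j/nd4s for `INDArray`; the weight shape and names are illustrative assumptions, not a prescribed API.

``` scala
import com.thoughtworks.deeplearning.plugins.Builtins
import com.thoughtworks.feature.Factory
import org.nd4j.linalg.api.ndarray.INDArray
import org.nd4j.linalg.factory.Nd4j
import org.nd4s.Implicits._

// Mix in the built-in plugins, which provide differentiable Float, Double
// and INDArray operations.
val hyperparameters = Factory[Builtins].newInstance()
import hyperparameters.implicits._

// A trainable weight, initialized with random values (3 inputs, 1 output).
val weight = hyperparameters.INDArrayWeight(Nd4j.randn(3, 1))

// The network is an ordinary mathematical formula over the input and the
// weight; DeepLearning.scala derives the gradient of `weight` automatically.
def linearModel(input: INDArray): hyperparameters.INDArrayLayer = {
  input dot weight
}
```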

### Dynamic neural networks

Unlike some other deep learning toolkits, the structure of a neural network in DeepLearning.scala is determined dynamically at run time. Our neural networks are programs: all Scala features, including functions, expressions and control flow, are available inside neural networks.

For example:

``` scala
def ordinaryScalaFunction(a: INDArray): Boolean = {
  a.signnum.sumT > math.random
}

def myDynamicNeuralNetwork(input: INDArray) = INDArrayLayer(monadic[Do] {
  val outputOfLayer1 = layer1(input).forward.each
  if (ordinaryScalaFunction(outputOfLayer1.data)) {
    dynamicallySelectedLayer2(outputOfLayer1).forward.each
  } else {
    dynamicallySelectedLayer3(outputOfLayer1).forward.each
  }
})
```

The above neural network will route into different subnetworks according to the result of an ordinary Scala function.

With the ability to create dynamic neural networks, regular programmers are able to build complex neural networks from simple code. You write code almost as usual, the only difference being that code based on DeepLearning.scala is differentiable, which enables such code to evolve by modifying its parameters continuously.

### Functional programming

DeepLearning.scala 2.0 is based on Monads, which are composable, so a complex layer can be built from primitive operators or higher-order functions like `map`/`reduce`. Along with the Monad, we provide an Applicative type class to perform multiple calculations in parallel.

For example, the previous example can be rewritten in higher-order function style as follows:

``` scala
def myDynamicNeuralNetwork(input: INDArray) = INDArrayLayer {
  layer1(input).forward.flatMap { outputOfLayer1 =>
    if (ordinaryScalaFunction(outputOfLayer1.data)) {
      dynamicallySelectedLayer2(outputOfLayer1).forward
    } else {
      dynamicallySelectedLayer3(outputOfLayer1).forward
    }
  }
}
```
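
The `flatMap` version is inherently sequential, since choosing the subnetwork requires the output of `layer1`. When sub-networks do not depend on each other, they can simply be combined with ordinary operators, leaving room for the Applicative mentioned above to evaluate them in parallel. A minimal sketch, assuming two hypothetical independent layers `layerA` and `layerB`:

``` scala
// layerA and layerB are hypothetical, mutually independent sub-networks.
// Combining them with `+` yields a larger differentiable layer, and because
// neither operand depends on the other's output, the two forward passes do
// not have to be sequenced through flatMap.
def myCombinedNetwork(input: INDArray) = {
  layerA(input) + layerB(input)
}
```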

The key construct in DeepLearning.scala 2.0 is the dependent type class [DeepLearning](https://javadoc.io/page/com.thoughtworks.deeplearning/deeplearning_2.11/latest/com/thoughtworks/deeplearning/DeepLearning.html), which witnesses a differentiable expression. In other words, given a `DeepLearning` type class instance for a type, you can activate the deep learning ability of that type.
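
As a rough sketch of what that means in code, any value of a type `A` with a `DeepLearning[A]` instance can be passed to generic code that only knows about the type class. The helper below is illustrative, not part of the library, and assumes the type class exposes a `forward` method mirroring the `.forward` syntax used above:

``` scala
import com.thoughtworks.deeplearning.DeepLearning

// A generic forward pass over any differentiable value: `A` may be a weight,
// a layer or a plain literal, as long as an implicit DeepLearning instance
// witnesses that it is differentiable.
def genericForward[A](differentiable: A)(implicit deepLearning: DeepLearning[A]) = {
  deepLearning.forward(differentiable)
}
```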

### Object-oriented programming

The code base of DeepLearning.scala 2.0 is organized according to Dependent Object Type calculus (DOT). All features are provided as mixin-able plugins. A plugin is able to change the APIs and behaviors of all DeepLearning.scala types. This approach not only resolves the [expression problem](https://en.wikipedia.org/wiki/Expression_problem), but also gives plugins the additional ability of **virtually depending** on other plugins.

For example, when creating the [Adagrad](https://gist.github.com/Atry/89ee1baa4c161b8ccc1b82cdd9c109fe#file-adagrad-sc) optimizer plugin, the plugin author does not have to explicitly call any function related to the learning rate. However, once a plugin user enables both the `Adagrad` plugin and the [FixedLearningRate](https://gist.github.com/Atry/1fb0608c655e3233e68b27ba99515f16#file-readme-ipynb) plugin, the learning-rate computation in `FixedLearningRate` will eventually be called when the `Adagrad` optimization is executed.
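
A hedged sketch of how a plugin user might combine such plugins: traits are mixed together through `Factory` from the `com.thoughtworks.feature` library, so `Adagrad` can pick up the learning-rate computation contributed by `FixedLearningRate` without either plugin referencing the other directly. `Adagrad`, `FixedLearningRate` and the `newInstance` parameter names below refer to the linked gists and are assumptions here:

``` scala
import com.thoughtworks.deeplearning.plugins.Builtins
import com.thoughtworks.feature.Factory

// `Adagrad` and `FixedLearningRate` stand for the plugin traits defined in
// the gists linked above; mixing them in "virtually" wires their
// hyperparameter computations together.
val hyperparameters = Factory[Builtins with FixedLearningRate with Adagrad]
  .newInstance(learningRate = 0.01, eps = 1e-8)
```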

## Roadmap

### v2.0

Version 2.0 is the current version with all of the above features.

### v3.0

* Support `map`/`reduce` and other higher-order functions on GPU-accelerated differentiable N-dimensional arrays.
* Support distributed models and distributed training on [Spark](https://spark.apache.org/).

Version 3.0 will be released in late 2017.

## Links

* [Homepage](http://deeplearning.thoughtworks.school/)
* [Getting started](https://thoughtworksinc.github.io/DeepLearning.scala/demo/GettingStarted.html)
* [Scaladoc](https://javadoc.io/page/com.thoughtworks.deeplearning/deeplearning_2.11/latest/com/thoughtworks/deeplearning/package.html)

## Acknowledgements

DeepLearning.scala is sponsored by [ThoughtWorks](https://www.thoughtworks.com/).

DeepLearning.scala is heavily inspired by my colleague [@MarisaKirisame](https://github.com/MarisaKirisame). Originally, we worked together on a prototype of a deep learning framework, and eventually split our work into this project and [DeepDarkFantasy](https://github.com/ThoughtWorksInc/DeepDarkFantasy).
Other contributors can be found [here](https://github.com/ThoughtWorksInc/DeepLearning.scala/graphs/contributors).

[@milessabin](https://github.com/milessabin)'s [shapeless](https://github.com/milessabin/shapeless) provides a solid foundation for the type-level programming used in DeepLearning.scala.
