enforce files end in a newline and only a newline
ddbourgin committed Apr 11, 2020
1 parent 06c71c6 commit d9fecd5
Showing 15 changed files with 15 additions and 29 deletions.
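
The diff below contains only the resulting fixes; the check that produced them is not part of the commit. As a rough sketch of how the stated invariant (every file ends in a newline, and only one) could be enforced, the hypothetical helper below rewrites offending files and exits non-zero so it can double as a CI gate. The script name and behavior are assumptions, not something shipped in this commit.

```python
# check_trailing_newlines.py -- hypothetical helper, not included in this commit.
# Rewrites each file passed on the command line so it ends with exactly one
# "\n": a missing final newline is added, extra trailing blank lines are removed.
import sys
from pathlib import Path


def fix_trailing_newline(path: Path) -> bool:
    """Normalize the end of `path`; return True if the file was rewritten."""
    text = path.read_text(encoding="utf-8")
    if not text:
        return False  # leave genuinely empty files alone
    fixed = text.rstrip("\n") + "\n"  # exactly one final newline
    if fixed == text:
        return False
    path.write_text(fixed, encoding="utf-8")
    return True


if __name__ == "__main__":
    changed = [p for p in map(Path, sys.argv[1:]) if fix_trailing_newline(p)]
    for p in changed:
        print(f"fixed end-of-file newline in {p}")
    sys.exit(1 if changed else 0)  # non-zero exit flags offending files in CI
```

Run over the tracked text files (for example `python check_trailing_newlines.py $(git ls-files '*.rst' '*.md' 'Makefile' '*.py')`), it would catch the kind of end-of-file issues fixed below.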
2 changes: 1 addition & 1 deletion docs/Makefile
@@ -16,4 +16,4 @@ help:
# Catch-all target: route all unknown targets to Sphinx using the new
# "make mode" option. $(O) is meant as a shortcut for $(SPHINXOPTS).
%: Makefile
-@$(SPHINXBUILD) -M $@ "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)
+@$(SPHINXBUILD) -M $@ "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)
1 change: 0 additions & 1 deletion docs/numpy_ml.bandits.rst
@@ -9,4 +9,3 @@ Multi-armed bandits
numpy_ml.bandits.policies

numpy_ml.bandits.trainer
-
5 changes: 2 additions & 3 deletions docs/numpy_ml.hmm.rst
@@ -2,7 +2,7 @@
Hidden Markov models
####################

-A `hidden Markov model`_ (HMM) is a generative model for sequences of observations.
+A `hidden Markov model`_ (HMM) is a generative model for sequences of observations.

.. _`hidden Markov model` : https://en.wikipedia.org/wiki/Hidden_Markov_model
@@ -58,7 +58,7 @@ the EM algorithm is known as the `forward-backward`_ / `Baum-Welch algorithm`_.
- :class:`~numpy_ml.hmm.MultinomialHMM`

**References**

.. [1] Ghahramani, Z. (2001). "An Intro to HMMs and Bayesian networks".
*International Journal of Pattern Recognition and AI, 15(1)*: 9-42.
@@ -67,4 +67,3 @@ the EM algorithm is known as the `forward-backward`_ / `Baum-Welch algorithm`_.
:hidden:

numpy_ml.hmm.MultinomialHMM
-
3 changes: 1 addition & 2 deletions docs/numpy_ml.neural_nets.activations.rst
@@ -1,4 +1,4 @@
-Activations
+Activations
===========

Popular (and some not-so-popular) activation functions for use within arbitrary
@@ -80,4 +80,3 @@ neural networks.
:members:
:undoc-members:
:inherited-members:
-
1 change: 0 additions & 1 deletion docs/numpy_ml.neural_nets.models.vae.rst
@@ -7,4 +7,3 @@ Variational auto-encoder (``neural_nets.models.vae``)
:undoc-members:
:inherited-members:
:show-inheritance:
-
3 changes: 1 addition & 2 deletions docs/numpy_ml.neural_nets.modules.rst
@@ -1,4 +1,4 @@
-Modules
+Modules
========

``BidirectionalLSTM``
@@ -30,4 +30,3 @@ Modules
.. autoclass:: numpy_ml.neural_nets.modules.WavenetResidualModule
:members:
:undoc-members:
-
3 changes: 1 addition & 2 deletions docs/numpy_ml.neural_nets.optimizers.rst
@@ -1,4 +1,4 @@
-Optimizers
+Optimizers
===========
Popular gradient-based strategies for optimizing parameters in neural networks.

@@ -43,4 +43,3 @@ found via different optimization strategies, see:
:members:
:undoc-members:
:show-inheritance:
-
6 changes: 2 additions & 4 deletions docs/numpy_ml.neural_nets.rst
@@ -1,7 +1,7 @@
-Neural networks
+Neural networks
###############
The neural network module includes common building blocks for implementing
-modern `deep learning`_ models.
+modern `deep learning`_ models.

.. _`deep learning`: https://en.wikipedia.org/wiki/Deep_learning

@@ -225,5 +225,3 @@ arithmetic, padding, and minibatching.
numpy_ml.neural_nets.models

numpy_ml.neural_nets.utils
-
-
1 change: 0 additions & 1 deletion docs/numpy_ml.ngram.mle.rst
@@ -5,4 +5,3 @@
:members:
:undoc-members:
:inherited-members:
-
3 changes: 1 addition & 2 deletions docs/numpy_ml.preprocessing.general.rst
@@ -1,4 +1,4 @@
-General
+General
#######

``FeatureHasher``
@@ -30,4 +30,3 @@ General

.. automodule:: numpy_ml.preprocessing.general
:members: minibatch
-
3 changes: 1 addition & 2 deletions docs/numpy_ml.utils.rst
@@ -1,4 +1,4 @@
-Utilities
+Utilities
#########

.. toctree::
@@ -15,4 +15,3 @@ Utilities
numpy_ml.utils.windows

numpy_ml.utils.testing
-
5 changes: 2 additions & 3 deletions numpy_ml/bandits/README.md
@@ -1,9 +1,9 @@
# Bandits
-The `bandit.py` module includes several simple multi-arm bandit
+The `bandit.py` module includes several simple multi-arm bandit
environments.

The `policies.py` module implements a number of standard multi-arm bandit
-policies.
+policies.

1. **Bandits**
- MAB: Bernoulli, Multinomial, and Gaussian payout distributions
@@ -21,4 +21,3 @@ policies.

<img src="img/EpsilonGreedy.png" align='center' height="400" />
</p>
-
3 changes: 1 addition & 2 deletions numpy_ml/neural_nets/modules/README.md
@@ -1,11 +1,10 @@
# Modules

The `modules.py` module implements common multi-layer blocks that appear across
-many modern deep networks. It includes:
+many modern deep networks. It includes:

- Bidirectional LSTMs ([Schuster & Paliwal, 1997](https://pdfs.semanticscholar.org/4b80/89bc9b49f84de43acc2eb8900035f7d492b2.pdf))
- ResNet-style "identity" (i.e., `same`-convolution) residual blocks ([He et al., 2015](https://arxiv.org/pdf/1512.03385.pdf))
- ResNet-style "convolutional" (i.e., parametric) residual blocks ([He et al., 2015](https://arxiv.org/pdf/1512.03385.pdf))
- WaveNet-style residual block with dilated causal convolutions ([van den Oord et al., 2016](https://arxiv.org/pdf/1609.03499.pdf))
- Transformer-style multi-headed dot-product attention ([Vaswani et al., 2017](https://arxiv.org/pdf/1706.03762.pdf))
-
2 changes: 1 addition & 1 deletion numpy_ml/ngram/README.md
@@ -12,4 +12,4 @@ The `ngram.py` module implements [n-gram models](https://en.wikipedia.org/wiki/N
</p>
<p align="center">
<img src="img/add_smooth.png" height="550" />
-</p>
+</p>
3 changes: 1 addition & 2 deletions numpy_ml/rl_models/README.md
@@ -3,7 +3,7 @@ The `agents.py` module implements a number of standard reinforcement learning (R
can be run on [OpenAI gym](https://gym.openai.com/) environments.

1. **Monte Carlo Methods**
-- First-visit Monte Carlo updates (on-policy)
+- First-visit Monte Carlo updates (on-policy)
- Incremental weighted importance sampling (off-policy)
- Cross-entropy method ([Mannor, Rubinstein, & Gat, 2003](https://www.aaai.org/Papers/ICML/2003/ICML03-068.pdf))

@@ -24,4 +24,3 @@ can be run on [OpenAI gym](https://gym.openai.com/) environments.

<img src="img/DynaAgent-Taxi-v2.png" align='center' height="400" />
</p>
-