
Commit

chore: release v3.0.0
eduardocarvp committed Dec 15, 2020
1 parent 281623c commit 1ae7974
Showing 24 changed files with 3,706 additions and 1,484 deletions.
19 changes: 19 additions & 0 deletions CHANGELOG.md
@@ -1,4 +1,23 @@

# [3.0.0](https://github.com/dreamquark-ai/tabnet/compare/v2.0.1...v3.0.0) (2020-12-15)


### Bug Fixes

* checknan allow string as targets ([855befc](https://github.com/dreamquark-ai/tabnet/commit/855befc5a2cd153509b8c93eccdea866bf094a29))
* deactivate pin memory when device is cpu ([bd0b96f](https://github.com/dreamquark-ai/tabnet/commit/bd0b96f4f61c44b58713f60a030094cc21edb6e3))
* fixed docstring issues ([d216fbf](https://github.com/dreamquark-ai/tabnet/commit/d216fbfa4dadd6c8d4110fa8da0f1c0bdfdccc2d))
* load from cpu when saved on gpu ([451bd86](https://github.com/dreamquark-ai/tabnet/commit/451bd8669038ddf7869843f45ca872ae92e2260d))


### Features

* add new default metrics ([0fe5b72](https://github.com/dreamquark-ai/tabnet/commit/0fe5b72b60e894fae821488c0d4c34752309fc26))
* enable self supervised pretraining ([d4af838](https://github.com/dreamquark-ai/tabnet/commit/d4af838d375128b3d62e17622ec8e0a558faf1b7))
* mask-dependent loss ([64052b0](https://github.com/dreamquark-ai/tabnet/commit/64052b0f816eb9d63008347783cd1fe655be3088))
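The self-supervised pretraining feature listed above is exposed through `TabNetPretrainer`, which learns to reconstruct randomly masked feature values and can then warm-start a supervised model. A minimal usage sketch (the arrays `X_train`, `X_valid`, `y_train`, `y_valid` are placeholders you would supply as NumPy arrays; requires `pytorch-tabnet` >= 3.0):

```python
from pytorch_tabnet.pretraining import TabNetPretrainer
from pytorch_tabnet.tab_model import TabNetClassifier

# Unsupervised stage: reconstruct masked inputs.
# pretraining_ratio is the fraction of features masked per sample.
unsupervised_model = TabNetPretrainer()
unsupervised_model.fit(
    X_train=X_train,
    eval_set=[X_valid],
    pretraining_ratio=0.8,
)

# Supervised stage: reuse the pretrained encoder weights.
clf = TabNetClassifier()
clf.fit(
    X_train=X_train, y_train=y_train,
    eval_set=[(X_valid, y_valid)],
    from_unsupervised=unsupervised_model,
)
```

This mirrors the two-stage flow described in the README's "Semi-supervised pre-training" section added by this release.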



## [2.0.1](https://github.com/dreamquark-ai/tabnet/compare/v2.0.0...v2.0.1) (2020-10-15)


3 changes: 3 additions & 0 deletions docs/_modules/index.html
@@ -89,6 +89,7 @@
<li class="toctree-l1"><a class="reference internal" href="../generated_docs/README.html#installation">Installation</a></li>
<li class="toctree-l1"><a class="reference internal" href="../generated_docs/README.html#what-problems-does-pytorch-tabnet-handles">What problems does pytorch-tabnet handle?</a></li>
<li class="toctree-l1"><a class="reference internal" href="../generated_docs/README.html#how-to-use-it">How to use it?</a></li>
<li class="toctree-l1"><a class="reference internal" href="../generated_docs/README.html#semi-supervised-pre-training">Semi-supervised pre-training</a></li>
<li class="toctree-l1"><a class="reference internal" href="../generated_docs/README.html#useful-links">Useful links</a></li>
<li class="toctree-l1"><a class="reference internal" href="../generated_docs/pytorch_tabnet.html">pytorch_tabnet package</a></li>
</ul>
@@ -158,6 +159,8 @@ <h1>All modules for which code is available</h1>
<li><a href="pytorch_tabnet/metrics.html">pytorch_tabnet.metrics</a></li>
<li><a href="pytorch_tabnet/multiclass_utils.html">pytorch_tabnet.multiclass_utils</a></li>
<li><a href="pytorch_tabnet/multitask.html">pytorch_tabnet.multitask</a></li>
<li><a href="pytorch_tabnet/pretraining.html">pytorch_tabnet.pretraining</a></li>
<li><a href="pytorch_tabnet/pretraining_utils.html">pytorch_tabnet.pretraining_utils</a></li>
<li><a href="pytorch_tabnet/sparsemax.html">pytorch_tabnet.sparsemax</a></li>
<li><a href="pytorch_tabnet/tab_model.html">pytorch_tabnet.tab_model</a></li>
<li><a href="pytorch_tabnet/tab_network.html">pytorch_tabnet.tab_network</a></li>
92 changes: 81 additions & 11 deletions docs/_modules/pytorch_tabnet/abstract_model.html

Large diffs are not rendered by default.

26 changes: 15 additions & 11 deletions docs/_modules/pytorch_tabnet/callbacks.html
@@ -89,6 +89,7 @@
<li class="toctree-l1"><a class="reference internal" href="../../generated_docs/README.html#installation">Installation</a></li>
<li class="toctree-l1"><a class="reference internal" href="../../generated_docs/README.html#what-problems-does-pytorch-tabnet-handles">What problems does pytorch-tabnet handle?</a></li>
<li class="toctree-l1"><a class="reference internal" href="../../generated_docs/README.html#how-to-use-it">How to use it?</a></li>
<li class="toctree-l1"><a class="reference internal" href="../../generated_docs/README.html#semi-supervised-pre-training">Semi-supervised pre-training</a></li>
<li class="toctree-l1"><a class="reference internal" href="../../generated_docs/README.html#useful-links">Useful links</a></li>
<li class="toctree-l1"><a class="reference internal" href="../../generated_docs/pytorch_tabnet.html">pytorch_tabnet package</a></li>
</ul>
@@ -318,9 +319,11 @@ <h1>Source code for pytorch_tabnet.callbacks</h1><div class="highlight"><pre>
<span class="p">)</span>
<span class="nb">print</span><span class="p">(</span><span class="n">msg</span><span class="p">)</span>
<span class="k">else</span><span class="p">:</span>
<span class="n">msg</span> <span class="o">=</span> <span class="p">(</span><span class="sa">f</span><span class="s2">&quot;Stop training because you reached max_epochs = </span><span class="si">{</span><span class="bp">self</span><span class="o">.</span><span class="n">trainer</span><span class="o">.</span><span class="n">max_epochs</span><span class="si">}</span><span class="s2">&quot;</span>
<span class="o">+</span> <span class="sa">f</span><span class="s2">&quot; with best_epoch = </span><span class="si">{</span><span class="bp">self</span><span class="o">.</span><span class="n">best_epoch</span><span class="si">}</span><span class="s2"> and &quot;</span>
<span class="o">+</span> <span class="sa">f</span><span class="s2">&quot;best_</span><span class="si">{</span><span class="bp">self</span><span class="o">.</span><span class="n">early_stopping_metric</span><span class="si">}</span><span class="s2"> = </span><span class="si">{</span><span class="nb">round</span><span class="p">(</span><span class="bp">self</span><span class="o">.</span><span class="n">best_loss</span><span class="p">,</span> <span class="mi">5</span><span class="p">)</span><span class="si">}</span><span class="s2">&quot;</span><span class="p">)</span>
<span class="n">msg</span> <span class="o">=</span> <span class="p">(</span>
<span class="sa">f</span><span class="s2">&quot;Stop training because you reached max_epochs = </span><span class="si">{</span><span class="bp">self</span><span class="o">.</span><span class="n">trainer</span><span class="o">.</span><span class="n">max_epochs</span><span class="si">}</span><span class="s2">&quot;</span>
<span class="o">+</span> <span class="sa">f</span><span class="s2">&quot; with best_epoch = </span><span class="si">{</span><span class="bp">self</span><span class="o">.</span><span class="n">best_epoch</span><span class="si">}</span><span class="s2"> and &quot;</span>
<span class="o">+</span> <span class="sa">f</span><span class="s2">&quot;best_</span><span class="si">{</span><span class="bp">self</span><span class="o">.</span><span class="n">early_stopping_metric</span><span class="si">}</span><span class="s2"> = </span><span class="si">{</span><span class="nb">round</span><span class="p">(</span><span class="bp">self</span><span class="o">.</span><span class="n">best_loss</span><span class="p">,</span> <span class="mi">5</span><span class="p">)</span><span class="si">}</span><span class="s2">&quot;</span>
<span class="p">)</span>
<span class="nb">print</span><span class="p">(</span><span class="n">msg</span><span class="p">)</span>
<span class="nb">print</span><span class="p">(</span><span class="s2">&quot;Best weights from best epoch are automatically used!&quot;</span><span class="p">)</span></div></div>

@@ -353,7 +356,7 @@ <h1>Source code for pytorch_tabnet.callbacks</h1><div class="highlight"><pre>
<span class="bp">self</span><span class="o">.</span><span class="n">history</span><span class="o">.</span><span class="n">update</span><span class="p">({</span><span class="s2">&quot;lr&quot;</span><span class="p">:</span> <span class="p">[]})</span>
<span class="bp">self</span><span class="o">.</span><span class="n">history</span><span class="o">.</span><span class="n">update</span><span class="p">({</span><span class="n">name</span><span class="p">:</span> <span class="p">[]</span> <span class="k">for</span> <span class="n">name</span> <span class="ow">in</span> <span class="bp">self</span><span class="o">.</span><span class="n">trainer</span><span class="o">.</span><span class="n">_metrics_names</span><span class="p">})</span>
<span class="bp">self</span><span class="o">.</span><span class="n">start_time</span> <span class="o">=</span> <span class="n">logs</span><span class="p">[</span><span class="s2">&quot;start_time&quot;</span><span class="p">]</span>
<span class="bp">self</span><span class="o">.</span><span class="n">epoch_loss</span> <span class="o">=</span> <span class="mf">0.</span></div>
<span class="bp">self</span><span class="o">.</span><span class="n">epoch_loss</span> <span class="o">=</span> <span class="mf">0.0</span></div>

<div class="viewcode-block" id="History.on_epoch_begin"><a class="viewcode-back" href="../../generated_docs/pytorch_tabnet.html#pytorch_tabnet.callbacks.History.on_epoch_begin">[docs]</a> <span class="k">def</span> <span class="nf">on_epoch_begin</span><span class="p">(</span><span class="bp">self</span><span class="p">,</span> <span class="n">epoch</span><span class="p">,</span> <span class="n">logs</span><span class="o">=</span><span class="kc">None</span><span class="p">):</span>
<span class="bp">self</span><span class="o">.</span><span class="n">epoch_metrics</span> <span class="o">=</span> <span class="p">{</span><span class="s2">&quot;loss&quot;</span><span class="p">:</span> <span class="mf">0.0</span><span class="p">}</span>
@@ -377,8 +380,9 @@ <h1>Source code for pytorch_tabnet.callbacks</h1><div class="highlight"><pre>

<div class="viewcode-block" id="History.on_batch_end"><a class="viewcode-back" href="../../generated_docs/pytorch_tabnet.html#pytorch_tabnet.callbacks.History.on_batch_end">[docs]</a> <span class="k">def</span> <span class="nf">on_batch_end</span><span class="p">(</span><span class="bp">self</span><span class="p">,</span> <span class="n">batch</span><span class="p">,</span> <span class="n">logs</span><span class="o">=</span><span class="kc">None</span><span class="p">):</span>
<span class="n">batch_size</span> <span class="o">=</span> <span class="n">logs</span><span class="p">[</span><span class="s2">&quot;batch_size&quot;</span><span class="p">]</span>
<span class="bp">self</span><span class="o">.</span><span class="n">epoch_loss</span> <span class="o">=</span> <span class="p">(</span><span class="bp">self</span><span class="o">.</span><span class="n">samples_seen</span> <span class="o">*</span> <span class="bp">self</span><span class="o">.</span><span class="n">epoch_loss</span> <span class="o">+</span> <span class="n">batch_size</span> <span class="o">*</span> <span class="n">logs</span><span class="p">[</span><span class="s2">&quot;loss&quot;</span><span class="p">]</span>
<span class="p">)</span> <span class="o">/</span> <span class="p">(</span><span class="bp">self</span><span class="o">.</span><span class="n">samples_seen</span> <span class="o">+</span> <span class="n">batch_size</span><span class="p">)</span>
<span class="bp">self</span><span class="o">.</span><span class="n">epoch_loss</span> <span class="o">=</span> <span class="p">(</span>
<span class="bp">self</span><span class="o">.</span><span class="n">samples_seen</span> <span class="o">*</span> <span class="bp">self</span><span class="o">.</span><span class="n">epoch_loss</span> <span class="o">+</span> <span class="n">batch_size</span> <span class="o">*</span> <span class="n">logs</span><span class="p">[</span><span class="s2">&quot;loss&quot;</span><span class="p">]</span>
<span class="p">)</span> <span class="o">/</span> <span class="p">(</span><span class="bp">self</span><span class="o">.</span><span class="n">samples_seen</span> <span class="o">+</span> <span class="n">batch_size</span><span class="p">)</span>
<span class="bp">self</span><span class="o">.</span><span class="n">samples_seen</span> <span class="o">+=</span> <span class="n">batch_size</span></div>
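The reformatted `on_batch_end` above maintains the epoch loss as a sample-weighted running mean, so batches of different sizes contribute proportionally. A plain-Python sketch of that update (names are illustrative, not the library's API):

```python
# Running mean of the loss, updated one batch at a time, weighting each
# batch by its size -- the same arithmetic as History.on_batch_end.
def update_epoch_loss(epoch_loss, samples_seen, batch_size, batch_loss):
    new_loss = (samples_seen * epoch_loss + batch_size * batch_loss) / (
        samples_seen + batch_size
    )
    return new_loss, samples_seen + batch_size

loss, seen = 0.0, 0
for bs, bl in [(32, 1.0), (32, 0.5), (16, 0.2)]:
    loss, seen = update_epoch_loss(loss, seen, bs, bl)
# loss equals the exact weighted mean (32*1.0 + 32*0.5 + 16*0.2) / 80 = 0.64
```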

<span class="k">def</span> <span class="fm">__getitem__</span><span class="p">(</span><span class="bp">self</span><span class="p">,</span> <span class="n">name</span><span class="p">):</span>
@@ -413,11 +417,11 @@ <h1>Source code for pytorch_tabnet.callbacks</h1><div class="highlight"><pre>
<span class="n">early_stopping_metric</span><span class="p">:</span> <span class="nb">str</span>
<span class="n">is_batch_level</span><span class="p">:</span> <span class="nb">bool</span> <span class="o">=</span> <span class="kc">False</span>

<span class="k">def</span> <span class="nf">__post_init__</span><span class="p">(</span><span class="bp">self</span><span class="p">,</span> <span class="p">):</span>
<span class="bp">self</span><span class="o">.</span><span class="n">is_metric_related</span> <span class="o">=</span> <span class="nb">hasattr</span><span class="p">(</span><span class="bp">self</span><span class="o">.</span><span class="n">scheduler_fn</span><span class="p">,</span>
<span class="s2">&quot;is_better&quot;</span><span class="p">)</span>
<span class="bp">self</span><span class="o">.</span><span class="n">scheduler</span> <span class="o">=</span> <span class="bp">self</span><span class="o">.</span><span class="n">scheduler_fn</span><span class="p">(</span><span class="bp">self</span><span class="o">.</span><span class="n">optimizer</span><span class="p">,</span>
<span class="o">**</span><span class="bp">self</span><span class="o">.</span><span class="n">scheduler_params</span><span class="p">)</span>
<span class="k">def</span> <span class="nf">__post_init__</span><span class="p">(</span>
<span class="bp">self</span><span class="p">,</span>
<span class="p">):</span>
<span class="bp">self</span><span class="o">.</span><span class="n">is_metric_related</span> <span class="o">=</span> <span class="nb">hasattr</span><span class="p">(</span><span class="bp">self</span><span class="o">.</span><span class="n">scheduler_fn</span><span class="p">,</span> <span class="s2">&quot;is_better&quot;</span><span class="p">)</span>
<span class="bp">self</span><span class="o">.</span><span class="n">scheduler</span> <span class="o">=</span> <span class="bp">self</span><span class="o">.</span><span class="n">scheduler_fn</span><span class="p">(</span><span class="bp">self</span><span class="o">.</span><span class="n">optimizer</span><span class="p">,</span> <span class="o">**</span><span class="bp">self</span><span class="o">.</span><span class="n">scheduler_params</span><span class="p">)</span>
<span class="nb">super</span><span class="p">()</span><span class="o">.</span><span class="fm">__init__</span><span class="p">()</span>
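The `hasattr(self.scheduler_fn, "is_better")` check above distinguishes metric-driven schedulers (such as `ReduceLROnPlateau`, which defines `is_better`) from plain step-based ones, so the callback knows whether `step()` needs the monitored metric. A minimal sketch of that dispatch pattern with stand-in classes (the dummy classes and `step_scheduler` helper are illustrative, not part of the library):

```python
# Metric-related scheduler: exposes an `is_better` attribute, like
# torch.optim.lr_scheduler.ReduceLROnPlateau, and steps on a metric.
class PlateauLike:
    def is_better(self, a, b):
        return a < b

    def step(self, metric):
        return f"plateau step with metric={metric}"

# Plain scheduler: steps unconditionally, no metric argument.
class StepLike:
    def step(self):
        return "plain step"

def step_scheduler(scheduler, metric):
    # Same duck-typing check as LRSchedulerCallback.__post_init__.
    if hasattr(scheduler, "is_better"):
        return scheduler.step(metric)
    return scheduler.step()
```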

<div class="viewcode-block" id="LRSchedulerCallback.on_batch_end"><a class="viewcode-back" href="../../generated_docs/pytorch_tabnet.html#pytorch_tabnet.callbacks.LRSchedulerCallback.on_batch_end">[docs]</a> <span class="k">def</span> <span class="nf">on_batch_end</span><span class="p">(</span><span class="bp">self</span><span class="p">,</span> <span class="n">batch</span><span class="p">,</span> <span class="n">logs</span><span class="o">=</span><span class="kc">None</span><span class="p">):</span>
