Update Readme.md (oneapi-src#384)
* Updating License file to remove the date from the title:

/*
 * Copyright (c) 2020 Intel Corporation
 *
 * This program and the accompanying materials are made available under the
 * terms of the The MIT License which is available at
 * https://opensource.org/licenses/MIT.
 *
 * SPDX-License-Identifier: MIT
 */

* Update README.md

* Fix FPGA entries

* Update README.md

Updates per request of sranikonda

* Update README.md

* removing duplicate samples after transferring to dwarves folders

* Update Makefile.win

changing compiler name from "dpcpp-cl" to "dpcpp"

* Update Makefile.win

* Update Makefile.win.fpga

* Update CMakeLists.txt

* Update CMakeLists.txt

* Update CMakeLists.txt

* Update README.md

* Update README.md

* Update from Legal Approval of 10/05/2020

* Create README.md

* Add files via upload

* Update README.md

minor modifications to content, purpose and key implementation details.

* Update sample.json

aligned description with readme

* Update README.md

reshuffled parts of the purpose and implementation details and abstracted a few key concepts into better summaries.

* Update sample.json

synced description with readme.

* Update README.md

* Fixing conflicts

* Fixing conflicts

* Create README.md

* Create sample.json

* Create sample.json

* removing frameworks folder

* fixing an issue

* Fixing an issue

* Update hyperlink

* update a hyperlink

* fixing hyperlink

* update readme.md files w/ grammar corrections

* update readme.md files w/ grammar & spelling corrections

* Audrey's edits to fpga_compile's README

* Disambiguate "compile time" in fpga_compile README

* update readme.md files w/ grammar & spelling corrections

* update readme.md files w/ grammar corrections

* update readme.md files w/ grammar corrections

* update readme.md files w/ grammar corrections

* update readme.md files w/ grammar corrections

* update readme.md files w/ grammar corrections

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* update readme.md files w/ corrected Windows run commands

* Update README.md

* Remove license files from all samples, except root

* Readme changes based on License file requirements

* removing unused folders

* Minor fixits to FPGA project template READMEs

Signed-off-by: Audrey Kertesz <audrey.kertesz@intel.com>

* Fix some grammar-check-induced ambiguity in FPGA Reference Design READMEs

Signed-off-by: Audrey Kertesz <audrey.kertesz@intel.com>

* Correct name of FPGA PAC D5005 in reference design README files.

Signed-off-by: Audrey Kertesz <audrey.kertesz@intel.com>

* Fix errors introduced by grammar check in FPGA Design Pattern READMEs

Signed-off-by: Audrey Kertesz <audrey.kertesz@intel.com>

* Fix errors introduced by Grammarly in FPGA Tools and GettingStarted code samples

Signed-off-by: Audrey Kertesz <audrey.kertesz@intel.com>

* Fix auto-corrections in READMEs for FPGA Features code samples

Signed-off-by: Audrey Kertesz <audrey.kertesz@intel.com>

* replacing license file

* replacing license file

* correcting formatting

* Update README.md

* Update README.md

* Update README.md

* Fix Intel FPGA PAC D5005 name in FPGA READMEs

Signed-off-by: Audrey Kertesz <audrey.kertesz@intel.com>

* Update readme titles

* Updating readme body

* Update ISO2DFD readme body

* Updating hyperlinks

Co-authored-by: akertesz <67655634+akertesz@users.noreply.github.com>
Co-authored-by: tomlenth <tom.f.lenth@intel.com>
Co-authored-by: Audrey Kertesz <audrey.kertesz@intel.com>
4 people committed Jan 12, 2021
1 parent 2293ee5 commit 0f993ff
Showing 277 changed files with 1,985 additions and 3,665 deletions.
1 change: 0 additions & 1 deletion .github/images/README.md

This file was deleted.

7 changes: 0 additions & 7 deletions AI-and-Analytics/End-to-end-Workloads/Census/License.txt

This file was deleted.

17 changes: 10 additions & 7 deletions AI-and-Analytics/End-to-end-Workloads/Census/README.md
@@ -1,4 +1,5 @@
# End-to-end machine learning workload: Census
# End-to-end machine learning workload: `Census` Sample

This sample code illustrates how to use Intel® Distribution of Modin for ETL operations and the ridge regression algorithm from the Intel® oneAPI Data Analytics Library (oneDAL) accelerated scikit-learn library to build and run an end-to-end machine learning workload. Both the Intel Distribution of Modin and the oneDAL-accelerated scikit-learn libraries are available together in the [Intel AI Analytics Toolkit](https://software.intel.com/content/www/us/en/develop/tools/oneapi/ai-analytics-toolkit.html). This sample code demonstrates how to seamlessly run the end-to-end census workload using the toolkit, without any external dependencies.

| Optimized for | Description
Expand All @@ -16,25 +17,27 @@ Intel® Distribution of Modin uses Ray to provide an effortless way to speed up
In this sample, you will use Intel® Distribution of Modin to ingest and process U.S. census data from 1970 to 2010 in order to build a ridge-regression-based model that finds the relation between education and total income earned in the US.
The data transformation stage normalizes the income to yearly inflation, balances the data so that each year has a similar number of data points, and extracts the features from the transformed dataset. The feature vectors are fed into the ridge regression model to predict the income of each sample.

Dataset is from IPUMS USA, University of Minnesota , [www.ipums.org](https://ipums.org/) (Steven Ruggles, Sarah Flood, Ronald Goeken, Josiah Grover, Erin Meyer, Jose Pacas and Matthew Sobek. IPUMS USA: Version 10.0 [dataset]. Minneapolis, MN: IPUMS, 2020. https://doi.org/10.18128/D010.V10.0)
Dataset is from IPUMS USA, University of Minnesota, [www.ipums.org](https://ipums.org/) (Steven Ruggles, Sarah Flood, Ronald Goeken, Josiah Grover, Erin Meyer, Jose Pacas and Matthew Sobek. IPUMS USA: Version 10.0 [dataset]. Minneapolis, MN: IPUMS, 2020. https://doi.org/10.18128/D010.V10.0)

## Key Implementation Details
This end-to-end workload sample code is implemented for CPU using the Python language. Installing the Intel AI Analytics Toolkit prepares a conda environment with Python 3.7, Intel® Distribution of Modin, Ray, Intel® oneAPI Data Analytics Library (oneDAL), Scikit-Learn, and NumPy, after which the sample code can be run directly using the steps in this README.
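
The two stages described above can be pictured with a short, hypothetical sketch: Modin handles ETL through its pandas-compatible API, and the oneDAL-accelerated ridge regression comes from daal4py's scikit-learn-compatible estimators. The file name and column names below are illustrative, not the sample's actual schema.

```
# Hypothetical sketch of the two workload stages: Modin ETL, then a
# daal4py-accelerated ridge regression. File and column names are illustrative.
import modin.pandas as pd
from daal4py.sklearn.linear_model import Ridge  # oneDAL-accelerated estimator
from sklearn.model_selection import train_test_split

df = pd.read_csv("census_data.csv")             # assumed local CSV
df = df.dropna()                                # minimal cleanup stage
X = df[["YEAR", "SEX", "EDUC"]].to_numpy()      # illustrative feature columns
y = df["INCTOT"].to_numpy()                     # illustrative income column

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = Ridge().fit(X_train, y_train)
print("R^2 on held-out data:", model.score(X_test, y_test))
```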

## License

This code sample is licensed under MIT license
Code samples are licensed under the MIT license. See
[License.txt](https://github.com/oneapi-src/oneAPI-samples/blob/master/License.txt) for details.

Third party program Licenses can be found here: [third-party-programs.txt](https://github.com/oneapi-src/oneAPI-samples/blob/master/third-party-programs.txt)

## Building Intel® Distribution of Modin and Intel® oneAPI Data Analytics Library (oneDAL) for CPU to run the end-to-end workload

Intel® Distribution of Modin and Intel® oneAPI Data Analytics Library (oneDAL) are ready for use once you finish the Intel AI Analytics Toolkit installation with the Conda Package Manager.

You can refer to the oneAPI [main page](https://software.intel.com/en-us/oneapi), and the Toolkit [Getting Started Guide for Linux](https://software.intel.com/content/www/us/en/develop/documentation/get-started-with-ai-linux/top.html) for installation steps and scripts.


### Activate conda environment With Root Access

Please follow the Getting Started Guide steps (above) to set up your oneAPI environment with the `setvars.sh` script and Intel® Distribution of Modin environment installation (https://software.intel.com/content/www/us/en/develop/articles/installing-ai-kit-with-conda.html). Then navigate in Linux shell to your oneapi installation path, typically `/opt/intel/oneapi/` when installed as root or sudo, and `~/intel/oneapi/` when not installed as a super user. If you customized the installation folder, the `setvars.sh` file is in your custom folder.
Please follow the Getting Started Guide steps (above) to set up your oneAPI environment with the `setvars.sh` script and the [Intel® Distribution of Modin environment installation](https://software.intel.com/content/www/us/en/develop/articles/installing-ai-kit-with-conda.html). Then navigate in a Linux shell to your oneAPI installation path, typically `/opt/intel/oneapi/` when installed as root or sudo, and `~/intel/oneapi/` when not installed as a super user. If you customized the installation folder, the `setvars.sh` file is in your custom folder.

Activate the conda environment with the following command:
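
The exact command is folded out of this diff view. As a hedged illustration only (the environment name `intel-aikit-modin` is an assumption; check the environments created by your installation with `conda env list`):

```
source /opt/intel/oneapi/setvars.sh
conda activate intel-aikit-modin
```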

@@ -73,7 +76,7 @@ pip install jupyter

### Install wget package

Install wget package in order to retrieve the Census dataset using HTTPS
Install the wget package to retrieve the Census dataset using HTTPS

```
pip install wget
```
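
Once installed, the package can be called from Python. A minimal sketch (the URL is a placeholder, not the dataset's real location):

```
# Minimal sketch of fetching a file with the wget package; placeholder URL.
import wget

url = "https://example.com/census_1970-2010.csv.gz"  # placeholder, not the real dataset URL
local_file = wget.download(url)                      # saves into the working directory
print("\nSaved:", local_file)
```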
@@ -98,7 +101,7 @@ Open .ipynb file and run cells in Jupyter Notebook using the "Run" button. Alterna

### Run as Python File

Open notebook in Jupyter and download as python file (see image using "census modin" sample)
Open the notebook in Jupyter and download it as a Python file (see the image below, from the "census modin" sample)

![Download as python file in the Jupyter Notebook](Running_Jupyter_notebook_as_Python.jpg "Download as python file in the Jupyter Notebook")
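
Alternatively, `jupyter nbconvert` can do the same conversion from the command line (the notebook name below is assumed):

```
jupyter nbconvert --to script census_modin.ipynb
python census_modin.py
```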

Empty file.
Empty file.
Empty file.
@@ -1,8 +1,8 @@
# `Intel Extension for PyTorch Getting Started` Sample

Intel Extension for PyTorch is a Python package to extend official PyTorch. It is designed to make the Out-of-Box user experience of PyTorch CPU better while achieving good performance. The extension also will be the PR(Pull-Request) buffer for the Intel PyTorch framework dev team. The PR buffer will not only contain functions, but also optimization (for example, take advantage of Intel's new hardware features).
Intel Extension for PyTorch is a Python package to extend the official PyTorch. It is designed to make the out-of-box user experience of PyTorch CPU better while achieving good performance. The extension will also be the PR (Pull Request) buffer for the Intel PyTorch framework dev team. The PR buffer will contain functions and optimizations (for example, taking advantage of Intel's new hardware features).

For comprehensive instructions regarding Intel Extension for PyTorch, go to https://github.com/intel/intel-extension-for-pytorch.
For comprehensive instructions, go to the GitHub repo for [Intel Extension for PyTorch](https://github.com/intel/intel-extension-for-pytorch).

| Optimized for | Description
|:--- |:---
@@ -15,25 +15,23 @@ For comprehensive instructions regarding Intel Extension for PyTorch, go to http

## Purpose

From this sample code, you will learn how to download, compile and get started with Intel Extension for PyTorch.
You will learn how to download, compile, and get started with Intel Extension for PyTorch from this sample code.

The code will be running on CPU.
The code will be running on the CPU.

## Key Implementation Details

The code includes Intel Extension for PyTorch and Auto-mixed-precision.
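
As a rough sketch of what that looks like in code: the 2020-era package exposed a device handle and an auto-mixed-precision switch. Treat the module name `intel_pytorch_extension`, `ipex.DEVICE`, and `enable_auto_mixed_precision` as assumptions, since the extension's API has changed across releases.

```
# Hedged sketch, assuming the 2020-era intel_pytorch_extension API;
# newer releases of the extension use a different interface.
import torch
import intel_pytorch_extension as ipex

ipex.enable_auto_mixed_precision(mixed_dtype=torch.bfloat16)  # assumed AMP switch

model = torch.nn.Linear(4, 2).to(ipex.DEVICE)  # run the model through the extension
data = torch.rand(8, 4).to(ipex.DEVICE)
output = model(data)
print(output.cpu())
```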

## License

This code sample is licensed under MIT license.
Code samples are licensed under the MIT license. See
[License.txt](https://github.com/oneapi-src/oneAPI-samples/blob/master/License.txt) for details.

Third party program Licenses can be found here: [third-party-programs.txt](https://github.com/oneapi-src/oneAPI-samples/blob/master/third-party-programs.txt)

## Building the `Intel Extension for PyTorch Getting Started` Sample

### Running Samples In DevCloud

N/A

### On a Linux* System

Please follow instructions [here](https://github.com/intel/intel-extension-for-pytorch#installation).
@@ -1,10 +1,12 @@
# `Intel Extension for PyTorch Getting Started` Sample

torch-ccl holds PyTorch bindings maintained by Intel for the Intel® oneAPI Collective Communications Library (oneCCL).

Intel® oneCCL (collective commnications library) is a library for efficient distributed deep learning training implementing such collectives like allreduce, allgather, alltoall. For more information on oneCCL, please refer to the oneCCL documentation.
Intel® oneCCL (collective communications library) is a library for efficient distributed deep learning training that implements collectives such as allreduce, allgather, and alltoall. For more information on oneCCL, please refer to the oneCCL documentation.

For comprehensive instructions regarding distributed training with oneCCL in PyTorch, go to https://github.com/intel/torch-ccl and https://github.com/intel/optimized-models/tree/master/pytorch/distributed.
For comprehensive instructions regarding distributed training with oneCCL in PyTorch, go to the following GitHub repos:
* [PyTorch and CCL](https://github.com/intel/torch-ccl)
* [PyTorch distributed training](https://github.com/intel/optimized-models/tree/master/pytorch/distributed)

| Optimized for | Description
|:--- |:---
@@ -19,23 +21,21 @@ For comprehensive instructions regarding distributed training with oneCCL in PyT

From this sample code, you will learn how to perform distributed training with oneCCL in PyTorch.

The code will be running on CPU.
The code will be running on the CPU.

## Key Implementation Details

The code demonstrates how to perform distributed training with oneCCL in PyTorch.
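
As a rough sketch of the pattern: importing the torch-ccl bindings registers a "ccl" backend with `torch.distributed`. The environment-variable plumbing below follows the standard `torch.distributed` launch convention and is an assumption, not the sample's exact code.

```
# Hedged sketch: initialize torch.distributed with the oneCCL backend.
# Importing torch_ccl registers the "ccl" backend (module name is an assumption).
import os
import torch
import torch.distributed as dist
import torch_ccl  # noqa: F401

dist.init_process_group(
    backend="ccl",
    init_method="env://",  # reads MASTER_ADDR / MASTER_PORT from the environment
    world_size=int(os.environ.get("WORLD_SIZE", "1")),
    rank=int(os.environ.get("RANK", "0")),
)

t = torch.ones(4)
dist.all_reduce(t)  # sums the tensor across all ranks
print(f"rank {dist.get_rank()}: {t}")
```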

## License

This code sample is licensed under MIT license.
Code samples are licensed under the MIT license. See
[License.txt](https://github.com/oneapi-src/oneAPI-samples/blob/master/License.txt) for details.

Third party program Licenses can be found here: [third-party-programs.txt](https://github.com/oneapi-src/oneAPI-samples/blob/master/third-party-programs.txt)

## Building the `torch-ccl Getting Started` Sample

### Running Samples In DevCloud

N/A

### On a Linux* System

Please follow instructions [here](https://github.com/intel/optimized-models/tree/master/pytorch/distributed#distributed-training-with-oneccl-in-pytorch).

This file was deleted.

@@ -1,5 +1,5 @@
# Intel Python daal4py Distributed K-Means
This sample code shows how to train and predict with a distributed k-means model using the python API package daal4py for oneAPI Data Analytics Library. It assumes you have a working version of MPI library installed and it demonstrates how to use software products that can be found in the [Intel oneAPI Data Analytics Library](https://software.intel.com/content/www/us/en/develop/tools/oneapi/components/onedal.html) or [Intel AI Analytics Toolkit powered by oneAPI](https://software.intel.com/content/www/us/en/develop/tools/oneapi/ai-analytics-toolkit.html).
# `Intel Python daal4py Distributed K-Means` Sample
This sample code shows how to train and predict with a distributed K-Means model using the Python API package daal4py for the oneAPI Data Analytics Library. It assumes you have a working version of the MPI library installed, and it demonstrates how to use software products that can be found in the [Intel oneAPI Data Analytics Library](https://software.intel.com/content/www/us/en/develop/tools/oneapi/components/onedal.html) or [Intel AI Analytics Toolkit powered by oneAPI](https://software.intel.com/content/www/us/en/develop/tools/oneapi/ai-analytics-toolkit.html).

| Optimized for | Description
| :--- | :---
@@ -11,9 +11,9 @@ This sample code shows how to train and predict with a distributed k-means model

## Purpose

daal4py is a simplified API to Intel® DAAL that allows for fast usage of the framework suited for Data Scientists or Machine Learning users. Built to help provide an abstraction to Intel® DAAL for either direct usage or integration into one's own framework.
daal4py is a simplified API to Intel® DAAL that allows for fast usage of the framework suited for Data Scientists or Machine Learning users. It is built to provide an abstraction to Intel® DAAL for direct usage or integration into one's own framework.

In this sample you will run a distributed K-Means model with oneDAL daal4py library memory objects. You will also learn how to train a model and save the information to a file.
In this sample, you will run a distributed K-Means model with oneDAL daal4py library memory objects. You will also learn how to train a model and save the information to a file.

## Key Implementation Details
This distributed K-means sample code is implemented for CPU using the Python language. The example assumes you have daal4py and scikit-learn installed inside a conda environment, similar to what is delivered with the installation of the Intel(R) Distribution for Python as part of the [oneAPI AI Analytics Toolkit powered by oneAPI](https://software.intel.com/en-us/oneapi/ai-kit).
@@ -22,17 +22,20 @@ This distributed K-means sample code is implemented for CPU using the Python lan
You will need a working MPI library. We recommend using Intel(R) MPI, which is included in the [oneAPI HPC Toolkit](https://software.intel.com/en-us/oneapi/hpc-kit).

## License
This code sample is licensed under MIT license
Code samples are licensed under the MIT license. See
[License.txt](https://github.com/oneapi-src/oneAPI-samples/blob/master/License.txt) for details.

Third party program Licenses can be found here: [third-party-programs.txt](https://github.com/oneapi-src/oneAPI-samples/blob/master/third-party-programs.txt)

## Building daal4py for CPU

oneAPI Data Analytics Library is ready for use once you finish the Intel AI Analytics Toolkit installation, and have run the post installation script.
oneAPI Data Analytics Library is ready for use once you finish the Intel AI Analytics Toolkit installation and have run the post installation script.

You can refer to the oneAPI [main page](https://software.intel.com/en-us/oneapi) for toolkit installation, and the Toolkit [Getting Started Guide for Linux](https://software.intel.com/en-us/get-started-with-intel-oneapi-linux-get-started-with-the-intel-ai-analytics-toolkit) for post-installation steps and scripts.
You can refer to the oneAPI [main page](https://software.intel.com/en-us/oneapi) for toolkit installation and the Toolkit [Getting Started Guide for Linux](https://software.intel.com/en-us/get-started-with-intel-oneapi-linux-get-started-with-the-intel-ai-analytics-toolkit) for post-installation steps and scripts.

### Activate conda environment With Root Access

Please follow the Getting Started Guide steps (above) to set up your oneAPI environment with the `setvars.sh` script. Then navigate in Linux shell to your oneapi installation path, typically `/opt/intel/oneapi/` when installed as root or sudo, and `~/intel/oneapi/` when not installed as a super user. If you customized the installation folder, the `setvars.sh` file is in your custom folder.
Please follow the Getting Started Guide steps (above) to set up your oneAPI environment with the `setvars.sh` script. Then navigate in Linux shell to your oneapi installation path, typically `/opt/intel/oneapi/` when installed as root or sudo, and `~/intel/oneapi/` when not installed as a superuser. If you customized the installation folder, the `setvars.sh` file is in your custom folder.

The Intel Python environment is active by default. However, if you activated another environment, you can return to it with the following command:
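
A hedged illustration (the environment name `base` is an assumption; check `conda env list` on your machine):

```
conda activate base
```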

@@ -80,9 +83,9 @@ Run the Program

`mpirun -n 4 python ./IntelPython_daal4py_Distributed_Kmeans.py`

The output of the script will be saved in the included models and results directories.
The output of the script will be saved in the included models and results directories.

_Note: This code samples focuses on how to use daal4py to do distributed ML computations on chunks of data. The `mpirun` command above will only run on single local node. In order to launch on a cluster, you will need to create a host file on the master node among other steps. The **TensorFlow_Multinode_Training_with_Horovod** code sample explains this process well._
_Note: This code sample focuses on using daal4py to do distributed ML computations on chunks of data. The `mpirun` command above will only run on a single local node. To launch on a cluster, you will need to create a host file on the master node, among other steps. The **TensorFlow_Multinode_Training_with_Horovod** code sample explains this process well._
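
For orientation, here is a hedged sketch of what the distributed step inside the script might look like, using daal4py's SPMD mode; the data layout (one CSV chunk per rank) and cluster count are illustrative, not the sample's actual code.

```
# Hedged sketch of daal4py SPMD K-Means, launched e.g. with
# `mpirun -n 4 python kmeans_spmd.py`; file names and parameters are illustrative.
import daal4py as d4p
import numpy as np

d4p.daalinit()  # start the distributed (MPI) runtime

# Each rank loads its own chunk of the data (one file per rank is an assumption).
data = np.loadtxt(f"./data/chunk_{d4p.my_procid()}.csv", delimiter=",")

n_clusters = 10
init = d4p.kmeans_init(n_clusters, method="plusPlusDense",
                       distributed=True).compute(data)
result = d4p.kmeans(n_clusters, 50, distributed=True).compute(data, init.centroids)

if d4p.my_procid() == 0:
    print(result.centroids)  # every rank holds the same final centroids

d4p.daalfini()  # shut down the runtime
```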

##### Expected Printed Output (with similar numbers, printed 4 times):

This file was deleted.

