This repository was archived by the owner on Nov 16, 2023. It is now read-only.
4 changes: 2 additions & 2 deletions README.md
Expand Up @@ -232,10 +232,10 @@ After Jupiter server starts, it will display a security token that should be use
* You should do the step described above only once, saving the info in the untracked .env file.
* Although setting up a Service Principal is optional, if you skip it you must leave the default values for the Service Principal information in the sensitive info dictionary: the get_auth() utility function defined in the notebook and saved in o16n_regular_ML_R_models_utils.py checks whether the Service Principal password still has the default value ("YOUR_SERVICE_PRINCIPAL_PASSWORD") and, if so, switches to interactive Azure login.
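The fallback described above can be sketched in plain Python. This is an illustrative reconstruction, not the actual helper from o16n_regular_ML_R_models_utils.py: the `SP_PASSWORD` environment-variable name and the `get_auth_mode` function are assumptions for the sketch; the real get_auth() would return an AML SDK authentication object rather than a label.

```python
import os

# Sentinel mirroring the notebook's placeholder value (assumed here).
DEFAULT_SP_PASSWORD = "YOUR_SERVICE_PRINCIPAL_PASSWORD"

def get_auth_mode(env=None):
    """Return which authentication path would be taken.

    If the Service Principal password still holds the placeholder value,
    fall back to interactive Azure login; otherwise use the Service
    Principal credentials. Illustrative logic only.
    """
    if env is None:
        env = os.environ
    sp_password = env.get("SP_PASSWORD", DEFAULT_SP_PASSWORD)
    if sp_password == DEFAULT_SP_PASSWORD:
        return "interactive"        # would build an interactive login auth
    return "service_principal"      # would build a Service Principal auth

print(get_auth_mode({}))                          # placeholder -> interactive
print(get_auth_mode({"SP_PASSWORD": "s3cret"}))   # real secret -> service_principal
```

In the real notebook, each branch would construct the corresponding AML SDK authentication object instead of returning a string.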

- * `010_RegularR_RealTime_test_score_in_docker.ipynb`: __optional__ notebook that shows how the AML SDK experimentation framework can be used to develop generic (i.e. not necesarily for ML training) containerized scripts. Significant steps:
+ * `010_RegularR_RealTime_test_score_in_docker.ipynb`: __optional__ notebook that shows how the AML SDK experimentation framework can be used to develop generic (i.e. not necessarily for ML training) containerized scripts. Significant steps:
* R model registration. Even though the AML SDK is focused on Python, any file can be registered and versioned. We __can__ access registered models even when using the AML SDK experimentation framework.
* We use an already provisioned VM as an AML SDK unmanaged [compute target](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-set-up-training-targets). As mentioned above, it is possible to use the same Linux VM as both the compute target machine and the orchestration machine.
- * the script we develop is written an python. It creates an R session in function init() (which will be invoked once at o16n time in next notebook, when service is deployed) and loads the R model in R session memory. The R session is used for scoring in run() function. Besides R model loading at service creation time and regular scoring of new data, the scripts also report processing and data passing (python to R adn back) times. Results show the data passing times are relatively constant as a function of data size (10 ms) so the data transfer overhead is minimal especially for large data sets.
+ * The script we develop is written in Python. It creates an R session in the init() function (which will be invoked once at o16n time in the next notebook, when the service is deployed) and loads the R model into the R session's memory. The R session is then used for scoring in the run() function. Besides R model loading at service-creation time and regular scoring of new data, the script also reports processing and data-passing (Python to R and back) times. Results show the data-passing times are relatively constant as a function of data size (~10 ms), so the data-transfer overhead is minimal, especially for large data sets.
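The init()/run() contract described above can be sketched as follows. This is a minimal, self-contained skeleton, not the notebook's actual scoring script: the real script opens an rpy2 R session in init() and hands data to it in run(), whereas this sketch substitutes a trivial Python stand-in for the R model so the structure and timing report can be shown on their own.

```python
import json
import time

model = None  # loaded once per container, mirroring the init()/run() contract

def init():
    """Called once when the service starts.

    The real script would open an R session here and load the registered
    R model into it; a trivial doubling function stands in for the model.
    """
    global model
    model = lambda xs: [2.0 * x for x in xs]  # stand-in for the R predict step

def run(raw_data):
    """Called per request: deserialize, score, and report per-stage timings,
    analogous to the processing and data-passing times the script reports."""
    t0 = time.perf_counter()
    data = json.loads(raw_data)["data"]
    t1 = time.perf_counter()
    preds = model(data)   # real script: pass data to the R session and back
    t2 = time.perf_counter()
    return json.dumps({
        "predictions": preds,
        "deserialize_ms": (t1 - t0) * 1000,
        "scoring_ms": (t2 - t1) * 1000,
    })

init()
print(run(json.dumps({"data": [1.0, 2.5]})))
```

The key design point the notebook exploits is that init() runs once per container, so the (relatively expensive) R session creation and model load are paid only at service start, not per request.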


* `020_RegularR_RealTime_deploi_ACI_AKS.ipynb`: deploys the above o16n script and R model on ACI and AKS. It creates an o16n image which can be pulled from its ACR and tested locally if needed using the SDK or the portal. The notebook also provisions an ACI for remote testing and finally an AKS cluster where the o16n image can be used to score data in real time (i.e. not batch processing) at scale.
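Once the service is deployed, a client scores data by POSTing JSON to the scoring endpoint. The sketch below only assembles the request pieces; the endpoint URI, key retrieval, and `build_scoring_request` helper name are illustrative assumptions, and in the notebook these values would come from the deployed web service object.

```python
import json

def build_scoring_request(scoring_uri, api_key, rows):
    """Assemble the HTTP pieces for calling the deployed real-time service.

    AKS deployments are key-authenticated, so the key goes in the
    Authorization header; the body shape matches the run() function's
    expected {"data": ...} input.
    """
    headers = {
        "Content-Type": "application/json",
        "Authorization": "Bearer " + api_key,
    }
    body = json.dumps({"data": rows})
    return scoring_uri, headers, body

uri, headers, body = build_scoring_request(
    "http://example-aks-endpoint/score",  # illustrative URI
    "<service-key>",                      # retrieved from the deployed service
    [[5.1, 3.5]],
)
# A client would then POST this, e.g. requests.post(uri, data=body, headers=headers)
```

The same request shape works against the ACI test deployment and the AKS cluster; only the URI and key differ.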