Unverified commit b4370082 authored by alijawadfahs, committed by GitHub

Merge branch 'master' into Openstack-flavor-id

parents c22d4cbe 9ec49868
......@@ -13,8 +13,8 @@ buildscript {
classpath 'de.undercouch:gradle-download-task:3.1.2'
classpath 'org.asciidoctor:asciidoctor-gradle-plugin:1.5.9.2'
classpath 'xalan:xalan:2.7.2'
classpath 'gradle.plugin.org.aim42:htmlSanityCheck:1.1.3'
classpath 'com.github.jk1:gradle-license-report:1.7'
classpath 'org.aim42.htmlSanityCheck:org.aim42.htmlSanityCheck.gradle.plugin:1.1.6'
classpath 'com.github.jk1:gradle-license-report:1.7'
}
}
......@@ -44,6 +44,7 @@ asciidoctor {
resources {
from("$projectDir/src/docs/") {
include 'user/examples/**'
include 'admin/references/kubernetes/**'
include 'images/**'
include 'tocbot/**'
include 'highlight/**'
......
......@@ -342,6 +342,18 @@ The following workflows represent some mathematical functions that can be optimi
image::Himmelblau_Function.png[448,336,align=center]
https://al-roomi.org/benchmarks/unconstrained/2-dimensions/56-himmelblau-s-function[Mathematical Expression]
image::himmelblau_math.png[948,736,align=center]
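For reference, a minimal Python sketch of this benchmark, reproducing the mathematical expression linked above (the test point is one of the function's four known minima):

[source,python]
----
def himmelblau(x, y):
    """Himmelblau's function: four identical local minima where f = 0."""
    return (x**2 + y - 11)**2 + (x + y**2 - 7)**2

print(himmelblau(3.0, 2.0))  # 0.0 at the minimum (3, 2)
----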
*Kursawe_Multiobjective_Function:* is a multiobjective function proposed by http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.47.8050[Frank Kursawe]. It has two objectives (f1, f2) to minimize. For more info, please click https://deap.readthedocs.io/en/master/api/benchmarks.html#deap.benchmarks.kursawe[here].
image::Kursawe_Multiobjective_Function.png[648,536,align=center]
https://al-roomi.org/benchmarks/multi-objective/unconstrained-list/322-kursawe-s-function-kur[Mathematical Expression]
image::kursawe_math.png[548,436,align=center]
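Likewise, a minimal Python sketch of the two Kursawe objectives, following the DEAP benchmark definition linked above (the 3-dimensional test point is an arbitrary example):

[source,python]
----
import math

def kursawe(x):
    """Kursawe's multiobjective function: returns (f1, f2), both to minimize."""
    f1 = sum(-10.0 * math.exp(-0.2 * math.sqrt(x[i] ** 2 + x[i + 1] ** 2))
             for i in range(len(x) - 1))
    f2 = sum(abs(xi) ** 0.8 + 5.0 * math.sin(xi ** 3) for xi in x)
    return f1, f2

print(kursawe([0.0, 0.0, 0.0]))  # (-20.0, 0.0)
----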
=== Hyperparameter Optimization
The following workflows represent some machine learning and deep learning algorithms that can be optimized.
......@@ -391,8 +403,9 @@ The following workflows have common variables with the above illustrated workflo
The following workflows contain a search space of possible neural network architectures that can be used by `Distributed_Auto_ML` to automatically find the best combinations of neural architectures within the search space.
*Handwritten_Digit_Classification:* trains a simple deep CNN on the MNIST dataset using the PyTorch library. This example allows to search for two types of neural architectures defined in the Handwritten_Digit_Classification_Search_Space.json file.
*Single_Handwritten_Digit_Classification:* trains a simple deep CNN on the MNIST dataset using the PyTorch library. This example allows searching for two types of neural architectures defined in the Handwritten_Digit_Classification_Search_Space.json file.
*Multiple_Objective_Handwritten_Digit_Classification:* trains a simple deep CNN on the MNIST dataset using the PyTorch library. This example allows optimizing multiple losses, such as accuracy, the number of parameters, and the memory access cost (MAC) measure.
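As a rough illustration, a minimal PyTorch sketch of the kind of simple CNN these workflows train on MNIST; the actual candidate architectures are defined in the Handwritten_Digit_Classification_Search_Space.json file, and the `channels` knob below is only a hypothetical example of a searchable parameter:

[source,python]
----
import torch
import torch.nn as nn

class SimpleCNN(nn.Module):
    """A small CNN for 28x28 MNIST digits."""
    def __init__(self, channels=16):  # hypothetical searchable parameter
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, channels, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),  # 28x28 -> 14x14
        )
        self.classifier = nn.Linear(channels * 14 * 14, 10)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

model = SimpleCNN()
logits = model(torch.randn(8, 1, 28, 28))  # a batch of 8 digit images
print(logits.shape)  # torch.Size([8, 10])
----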
=== Distributed Training
......@@ -1559,37 +1572,47 @@ AutoFeat currently supports the following encoding methods:
- Label: converts each value in a categorical feature into an integer value between 0 and n-1, where n is the number of distinct categories of the variable.
- Binary: stores categories as binary bitstrings.
- OneHot: creates a new feature for each category in the Categorical Variable and replaces it with either 1 (presence of the feature) or 0 (absence of the feature). The number of the new features depends on the number of categories in the Categorical Variable.
- OneHot: creates a new feature for each category in the categorical variable and replaces it with either 1 (presence of the feature) or 0 (absence of the feature). The number of the new features depends on the number of categories in the categorical variable.
- Dummy: transforms the categorical variable into a set of binary variables (also known as dummy variables). The dummy encoding is a small improvement over one-hot encoding, in that it uses n-1 features to represent n categories.
- BaseN: encodes the categories into arrays of their base-n representation. A base of 1 is equivalent to one-hot encoding and a base of 2 is equivalent to binary encoding.
- Target: replaces a categorical value with the mean of the target variable.
- Hash: maps each category to an integer within a pre-determined range n_components. n_components is the number of dimensions, in other words, the number of bits to use to represent the feature. We use 8 bits by default .
- Hash: maps each category to an integer within a pre-determined range n_components. n_components is the number of dimensions, in other words, the number of bits to use to represent the feature. We use 8 bits by default.
NOTE: Most of these methods are implemented using the Python link:https://contrib.scikit-learn.org/category_encoders/[Category Encoders] library. Examples can be found in the https://www.kaggle.com/code/discdiver/category-encoders-examples/notebook[Category Encoders Examples] notebook.
Most of these methods are implemented using the Python link:https://contrib.scikit-learn.org/category_encoders/[Category Encoders] library.
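For illustration, a short sketch of a few of these encoders with the Category Encoders library (the `color` column and its sample values are made up for the example):

[source,python]
----
import pandas as pd
import category_encoders as ce

df = pd.DataFrame({"color": ["red", "green", "blue", "green"]})
y = pd.Series([1, 0, 1, 0])  # target, needed only for Target encoding

binary  = ce.BinaryEncoder(cols=["color"]).fit_transform(df)         # bitstrings
base3   = ce.BaseNEncoder(cols=["color"], base=3).fit_transform(df)  # base-n digits
hashing = ce.HashingEncoder(cols=["color"], n_components=8).fit_transform(df)  # 8 bits by default
target  = ce.TargetEncoder(cols=["color"]).fit_transform(df, y)      # mean of target
print(binary)
----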
As we already mentioned, the performance of ML algorithms depends on how categorical variables are encoded. The results produced by the model vary depending on the encoding technique used. Thus, the hardest part of categorical encoding can sometimes be finding the right categorical encoding method.
There are numerous research papers and studies dedicated to the analysis of the performance of categorical encoding approaches applied to different datasets. Based on the common factors shared by the datasets using the same encoding method, we have implemented an algorithm for finding the best suited method for your data.
To access the AutoFeat page, please follow the steps below:
Open the link:https://try.activeeon.com/automation-dashboard/#/portal/workflow-execution[Workflow Execution Portal].
. Open the link:https://try.activeeon.com/studio[Studio Portal].
. Create a new workflow.
Click on the button *Submit a Job* and then search for *Import_Data_And_Automate_Feature_Engineering* workflow as described in the image below.
. Drag and drop the `Import_Data_And_Automate_Feature_Engineering` task from the *machine-learning* bucket of ProActive Machine Learning.
image::Import_Data_And_Automate_Feature_Engineerin_Search.png[align=center]
. Click on the task and click `General Parameters` on the left to change the default parameters of this task.
Put in *FILE_URL* variable the S3 link to upload your dataset.
image::Import_Data_And_Automate_Feature_Engineering_Task.png[align=center]
Set the other parameters according to your dataset format.
[start=5]
. Put in the *FILE_PATH* variable the S3 link of the dataset to upload.
Click on the *Submit* button to start AutoFeat.
. Set the other parameters according to your dataset format.
. Click on the *Execute* button to run the workflow and start AutoFeat.
image::Import_Data_And_Automate_Feature_Engineering_Execute.png[align=center]
To get more information about the parameters of the service, please check the section <<Import_Data_And_Automate_Feature_Engineering>>.
image::Import_Data_And_Automate_Feature_Engineering_Submit.png[align=center]
[start=8]
. Open the link:https://try.activeeon.com/automation-dashboard/#/portal/workflow-execution[Workflow Execution Portal].
You can now access the AutoFeat Page by clicking on the endpoint `AutoFeat` as shown in the image below.
. You can now access the AutoFeat Page by clicking on the endpoint `AutoFeat` as shown in the image below.
[[_AutoFeat_endpoint]]
image::AutoFeat_endpoint.png[align=center]
You will be redirected to the AutoFeat page, which initially contains three tabs that we describe in the following sections.
......@@ -1611,11 +1634,11 @@ AutoFeat also creates some summary statistics for each column. A table is displa
[[_Column_summaries]]
image::AutoFeat_column_summaries.png[align=center]
=== Edit column names and types
A preview of the data is displayed in the *Edit Column Names and Types* as follows.
=== Data Preprocessing
A preview of the data is displayed in the *Data Preprocessing* tab as follows.
[[_Edit_column_names_and_types]]
image::AutoFeat_edit_column_names_and_types.png["Edit column names and types",align=center]
[[_Data_Preprocessing]]
image::AutoFeat_edit_column_names_and_types.png["Data Preprocessing",align=center]
It is possible to change a column's information. These changes can include:
......@@ -1625,12 +1648,12 @@ It is possible to change a column information. These changes can include:
- _Category Type_: Categorical variables can be divided into two categories: *Ordinal*, if the categories have an inherent order, and *Nominal*, if the categories do not have any inherent order.
- _Label_: Check this checkbox to select the label column.
- _Label Column_: Only one column can be selected as the label column.
- _Coding Method_: The encoding method used for converting the categorical data values into numerical values. The value is set to *Auto* by default. Thereafter, the best-suited method for encoding the categorical feature is automatically identified. The data scientist still has the ability to override every decision and select another encoding method from the drop-down menu. Different methods are supported by AutoFeat, such as *Label*, *OneHot*, *Dummy*, *Binary*, *Base N*, *Hash* and *Target*. Some of those methods require specifying additional encoding parameters. These parameters vary depending on the selected method (e.g., the base and the number of components for the BaseN and Hash methods, respectively, and the target column for the Target encoding method). Some of these parameters are set by default if no values are specified by the user.
[[_Edit_column_names_and_types]]
image::AutoFeat_edit_column_names_and_types_encoding_parameters.png["Edit column names and types",align=center]
[[_Data_Preprocessing]]
image::AutoFeat_edit_column_names_and_types_encoding_parameters.png["Data Preprocessing",align=center]
It is also possible to perform the following actions on the dataset:
......@@ -1638,7 +1661,7 @@ It is also possible to perform the following actions on the dataset:
- *Restore*, to restore the original version of the dataset loaded from the external source.
- *Delete Column*, to delete a column from the dataset.
- *Preview Encoded Data*, to display the encoding results in a new tab.
- *Cancel*, to discard any changes the user may have made and finish the workflow execution.
- *Cancel and Quit*, to discard any changes the user may have made and finish the workflow execution.
Once the encoding parameters are set, the user can proceed to display the encoded dataset by clicking on *Preview Encoded Data*. They can also check and compare different encoding methods and/or parameters based on the obtained results.
......@@ -1651,6 +1674,15 @@ The user can also download the results as a csv file by clicking on the *Downloa
[[_Encoded_data]]
image::AutoFeat_encoded_data.png[align=center]
=== ML Pipeline Example
You can connect different tasks in a single workflow to get the full pipeline from data preprocessing to model training and deployment. Each task will propagate the acquired variables to its child tasks.
The following workflow example `Vehicle_Type_Using_Model_Explainability` uses the `Import_Data_And_Automate_Feature_Engineering` task to prepare the data. It is available in the `machine_learning_workflows` bucket.
image::Vehicle_Type_Using_Model_Explainability.png[align=center]
This workflow predicts the vehicle type based on silhouette measurements, and applies ELI5 and Kernel Explainer to understand the model’s global behavior or specific predictions.
== ProActive Analytics
*ProActive Analytics* is a dashboard that provides an overview of executed workflows
......@@ -1683,7 +1715,7 @@ More advanced search options (_highlighted in advanced search hints_) could be u
Now you can hit the search button to request jobs from the scheduler database according to the provided filter values. The search bar at the top shows a summary of the active search filters.
[[_JA-search-png]]
.JA-search
image::JA-search.png[align=center]
==== Execution Metrics
......@@ -1701,13 +1733,15 @@ image::JA-metrics.png[align=center]
Job Analytics includes three types of charts:
- *Job duration chart:* This chart shows durations per job. The x-axis shows the job ID and the y-axis shows the job duration. Hovering over the lines will also display the same information as a tooltip (see screenshot below). The duration chart helps users identify any abnormal performance behaviour across several workflow executions.
[[_JA-duration]]
.JA duration
image::JA-duration.png[align=center]
- *Job variables chart:* This chart is intended to show all variable values of selected jobs. It represents the evolution chart for all numeric-only variables of the selected jobs. The chart provides the ability to hide or show specific input variables by clicking on the variable name in the legend, as shown in the figure below.
- *Job results chart:* This chart is intended to show all result values of selected jobs. It represents the evolution chart for all numeric-only results of the selected jobs. The chart also provides the ability to hide or show specific results by clicking on the variable name in the legend, as shown in the figure below.
.JA results chart
image::JA-chart.png[align=center]
All charts provide some advanced features such as "maximize" and "enlarge" to better visualize the results, and "move" to customize the dashboard layout (see the top left side of the charts). All of them provide the hovering feature described previously and two types of display: line and bar charts. Switching from one to the other is done through a toggle button located at the top right of the chart. The same applies to showing/hiding variables and results.
......@@ -1727,6 +1761,7 @@ We note also that clicking on the issue types and charts described in the previo
NOTE: It is important to note that the dashboard layout and search preferences are saved in the browser cache so that users can have access to their last dashboard and search settings.
.JA table
image::JA-table.png[align=center]
== ProActive Jupyter Kernel
......@@ -1737,6 +1772,7 @@ scheduler and constructs tasks and workflows to execute them on the fly.
With this interface, users can run their code locally and test it using a native Python kernel and, with a simple switch to the
ProActive kernel, run it on remote public or private infrastructures without having to modify the code. See the example below:
.Direct execution from Jupyter with ActiveEon Kernel
image::direct_execution_from_jupyter.png[Direct execution from Jupyter with ActiveEon Kernel]
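As a sketch, a cell like the following runs as plain Python under the native kernel; under the ProActive kernel, the `#%task()` pragma (part of the kernel's pragma syntax; the task name here is made up) turns the cell into a task executed on the ProActive infrastructure:

[source,python]
----
#%task(name=compute_stats)
# Under the native Python kernel the pragma above is an ordinary comment;
# under the ProActive kernel it wraps this cell into a task named compute_stats.
import statistics

data = [3, 1, 4, 1, 5, 9, 2, 6]
print("mean =", statistics.mean(data))
----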
=== Installation
......@@ -2627,7 +2663,7 @@ NOTE: Instead of training a model from scratch, a pre-trained sentiment analysis
*Train_Image_Classification:* trains a model to classify images from ants and bees.
*Train_Image_Segmentation:* trains a segmentation model using SegNet network on http://www.robots.ox.ac.uk/~vgg/data/pets/[Oxford-IIIT Pet Dataset^].
*Train_Image_Segmentation:* trains a segmentation model using SegNet network on https://www.robots.ox.ac.uk/~vgg/data/pets/[Oxford-IIIT Pet Dataset^].
*Train_Image_Object_Detection:* trains an object detection model using YOLOv3 on the COCO dataset proposed by Microsoft Research.
......@@ -2643,7 +2679,7 @@ This section presents custom AI workflows using tasks available on the `deep-lea
*Fake_Celebrity_Faces_Generation:* generates a wild diversity of fake faces using a GAN model that was trained based on thousands of real celebrity photos. The pre-trained GAN model is available on this https://s3.eu-west-2.amazonaws.com/activeeon-public/models/Epoch+018.pt[link^].
*Image_Segmentation:* predicts a segmentation model using SegNet network on http://www.robots.ox.ac.uk/~vgg/data/pets/[Oxford-IIIT Pet Dataset^]. The pre-trained image segmentation model is available on this https://s3.eu-west-2.amazonaws.com/activeeon-public/models/model_segnet.zip[link^].
*Image_Segmentation:* predicts a segmentation model using SegNet network on https://www.robots.ox.ac.uk/~vgg/data/pets/[Oxford-IIIT Pet Dataset^]. The pre-trained image segmentation model is available on this https://s3.eu-west-2.amazonaws.com/activeeon-public/models/model_segnet.zip[link^].
*Image_Object_Detection:* detects objects using a pre-trained YOLOv3 model on the COCO dataset proposed by Microsoft Research. The pre-trained model is available on this https://s3.eu-west-2.amazonaws.com/activeeon-public/models/yolo3_coco.zip[link^].
......@@ -2894,7 +2930,7 @@ It also enables:
This workflow can be used:
- Stand-alone, such that the results can be saved in the User Data Space or locally.
- In a larger workflow where the results will be sent to the next connected task.
- In an ML pipeline where the results will be transferred as an input for the following task in the pipeline.
NOTE: For further information, please check the subsection <<AutoFeat>>.
......@@ -2903,7 +2939,7 @@ NOTE: For further information, please check the subsection <<AutoFeat>>.
[cols="2,5,2"]
|===
| *Variable name* | *Description* | *Type*
3+^|*Workflow variables*
3+^|*Task variables*
| `IMPORT_FROM`
| Selects the method/protocol to import the data source.
| List [PA:URL,PA:URI,PA:USER_FILE,PA:GLOBAL_FILE] (default=PA:URL)
......@@ -2914,7 +2950,7 @@ NOTE: For further information, please check the subsection <<AutoFeat>>.
| Defines a delimiter to use.
| String (default=;)
| `LIMIT_OUTPUT_VIEW`
| Specifies how many rows of the dataframe will be previewed in the browser to check the encoding results.
| Specifies how many rows of the encoded dataframe will be previewed in the workflow results.
| Int (-1 means preview all the rows)
|===
......@@ -4498,7 +4534,7 @@ NOTE: PyTorch is used to build the model architecture based on https://github.co
| Boolean (default=True)
|===
NOTE: The default parameters of the YOLO network were set for the COCO dataset (http://cocodataset.org/#home). If you'd like to use another dataset, you probably need to change the default parameters.
NOTE: The default parameters of the YOLO network were set for the COCO dataset (https://cocodataset.org/#home). If you'd like to use another dataset, you probably need to change the default parameters.
==== Text Classification
......
*ProActive Service Automation (PSA)* allows to automate service deployment, together with their life-cycle management. Services are instantiated by workflows (executed as a Job by the Scheduler), and related workflows allow to move instances from a state to another one.
*ProActive Service Automation (PSA)* allows automating service deployment, together with service life-cycle management. Services are instantiated by workflows (executed as Jobs by the Scheduler), and related workflows allow moving instances from one state to another.
At any point in time, each Service Instance has a specific State (RUNNING, ERROR, FINISHED, etc.).
Attached to each Service Instance, PSA service stores several information such as:
Attached to each Service Instance, the PSA service stores information such as:
Service Instance Id, Service Id, Service Instance State, the ordered list of Jobs executed for the Service, a set of variables with their values (a map that includes for instance the service endpoint), etc.
The link:https://try.activeeon.com/tutorials/basic_service_creation/basic_service_creation.html[basic service creation tutorial, window="_blank"] and link:https://try.activeeon.com/tutorials/advanced_service_creation/advanced_service_creation.html[advanced service creation tutorial, window="_blank"] on link:https://try.activeeon.com[try.activeeon.com, window="_blank"]
The link:https://try.activeeon.com/tutorials/clearwater/clearwater.html[Create your own service tutorial, window="_blank"] on link:https://try.activeeon.com[try.activeeon.com, window="_blank"]
helps you build your own services.
\ No newline at end of file
......@@ -492,7 +492,7 @@ The service requires the following variables as input:
=== Storm
This service allows to deploy through ProActive Service Automation (PSA) Portal a cluster of Apache Storm stream processing system (http://storm.apache.org).
This service allows deploying, through the ProActive Service Automation (PSA) Portal, a cluster of the Apache Storm stream processing system (https://storm.apache.org).
The service is started using the following variables.
*Variables:*
......
......@@ -39,6 +39,4 @@ This user can only access to the following buckets list:
* All buckets that the user created.
* All buckets belonging to the _interns_ group (GROUP:interns).
* All _public_ buckets (GROUP:public-objects).
* All _public_ buckets (GROUP:public-objects).
\ No newline at end of file
......@@ -93,7 +93,7 @@ This infrastructure needs 10 arguments, described hereafter:
*Full name:* `org.ow2.proactive.resourcemanager.nodesource.infrastructure.SSHInfrastructureV2`
This infrastructure allows deploying nodes over SSH.
This infrastructure needs 13 arguments, described hereafter:
This infrastructure needs 17 arguments, described hereafter:
- **hostsList** - Path to a file containing the hosts on which
resources should be acquired. This file should contain one host per
......@@ -133,6 +133,50 @@ This infrastructure needs 13 arguments, described hereafter:
- **javaOptions** - Java options appended to the command used to
start the node on the remote host.
- **deploymentMode** - Specifies how the ProActive node command is started.
The deploymentMode can take the following values:
*** _autoGenerated_: when this mode is selected, the command to start the
ProActive node is generated automatically. As a result, the SSH call to
the hosts starts the ProActive nodes on the infrastructure without modifications to
the startup command. This mode is selected by default.
*** _useStartupScript_: starts the ProActive node using the script in the variable
`%startupScriptStandard%`, allowing the user to modify the startup command of the hosts.
This mode uses the ProActive node agent identified in the `%schedulingPath%` variable
and the Java Runtime Environment identified in the `%javaPath%` variable.
*** _useNodeJarStartupScript_: enables connecting to the SSHInfrastructureV2
by launching _node.jar_. This mode uses `%nodeJarUrl%` and `%startupScriptWitNodeJarDownload%`
variables to generate the startup command.
+
TIP: If the deploymentMode field is set to an empty string, the `autoGenerated` mode will be selected automatically.
+
- **nodeJarUrl** - The full URL path to download the ProActive _node.jar_
on each host added to the hostsList. The URL has to be accessible from the hosts.
For example, `try.activeeon.com/rest/node.jar`.
Used only when `useNodeJarStartupScript` is selected.
- **startupScriptStandard** - Nodes startup script to launch the ProActive nodes using a ProActive node agent.
The script by default locates the Java Runtime Environment and the node agent directory using the `%javaPath%` and `%schedulingPath%` variables, respectively.
The user can modify or extend this script to execute commands on the host before or after the ProActive node startup.
Used only when `useStartupScript` is selected.
- **startupScriptWitNodeJarDownload** - Nodes startup script to launch the ProActive nodes using _node.jar_.
To run ProActive nodes, this script is also expected to download and install the Java Runtime Environment,
then download and execute ProActive _node.jar_, if they are not already provided by the host.
It uses `%nodeJarUrl%` variable to get the full URL path for downloading the _node.jar_ file.
The user can modify or extend this script to execute commands on the host before or after the ProActive node startup.
Used only when `useNodeJarStartupScript` is selected.
+
WARNING: If the deployment mode `autoGenerated` is selected, the startup scripts will be disregarded.
+
==== CLI Infrastructure
*Full name:* `org.ow2.proactive.resourcemanager.nodesource.infrastructure.CLIInfrastructure`
......@@ -318,6 +362,8 @@ However, it implements a different instance management strategy that reduces the
2. The nodes share the same networking infrastructure through a common Virtual Private Cloud (VPC).
The infrastructure supports networking autoconfiguration if no parameter is supplied.
WARNING: A node source using the empty policy will not benefit from this latter management strategy: a deployment with the empty policy does not use the shared instance template and networking configuration.
===== Pre-Requisites
The configuration of the AWS Autoscaling infrastructure is subject to several requirements.
......@@ -372,7 +418,7 @@ The configuration form exposes the following fields:
- *defaultVpcId:* This parameter can be filled with the ID of the VPC to use for the instances operating the nodes.
If specified, this parameter has to refer to an existing VPC in the region and comply with the VPC ID format.
If left blank, the connector will trigger networking autoconfiguration.
If left blank, the connector will, first, try to get the default VPC ID in the specified region if set, otherwise it will trigger networking autoconfiguration.
- *defaultSubNetId:* The administrator can define which subnet has to be attached to the instances supporting the nodes.
If specified, this parameter has to refer to an existing subnet in the region affected to the specified VPC, and has to comply with the subnet ID format.
......@@ -384,7 +430,7 @@ WARNING: Please do not trigger networking autoconfiguration if you operate ProAc
Otherwise, a new and distinct VPC will be used to operate the nodes created by the NodeSource, preventing their communication with the Resource Manager.
- *defaultSecurityGroup:* This parameter receives the ID of the security group to spawn instances into.
If this parameter does not meet the requirement regarding the providing the provided VPC and subnet, a new security group will be generated.
If this parameter does not meet the requirement regarding the provided VPC and subnet, a new security group will be generated by default and will be re-used if the same deployment scenario is repeated.
This parameter is mandatory, and has to comply with the format of the ID of the AWS security groups.
- *region:* The administrator specifies here the AWS region to allocate the cluster into.
......@@ -813,15 +859,23 @@ You can opt to place this file in `$PROACTIVE_HOME/config/authentication/azure.c
+
|===
|https://github.com/Azure/azure-libraries-for-java/blob/master/azure-mgmt-compute/src/main/java/com/microsoft/azure/management/compute/KnownLinuxVirtualMachineImage.java[Linux] |https://github.com/Azure/azure-libraries-for-java/blob/master/azure-mgmt-compute/src/main/java/com/microsoft/azure/management/compute/KnownWindowsVirtualMachineImage.java[Windows]
|Linux https://github.com/Azure/azure-libraries-for-java/blob/master/azure-mgmt-compute/src/main/java/com/microsoft/azure/management/compute/KnownLinuxVirtualMachineImage.java[++[++link to source++]++] |Windows https://github.com/Azure/azure-libraries-for-java/blob/master/azure-mgmt-compute/src/main/java/com/microsoft/azure/management/compute/KnownWindowsVirtualMachineImage.java[++[++link to source++]++]
|UBUNTU_SERVER_14_04_LTS +
UBUNTU_SERVER_16_04_LTS +
DEBIAN_8 +*+ _default value_ +*+ +
CENTOS_7_2
|WINDOWS_SERVER_2008_R2_SP1 +
WINDOWS_SERVER_2012_DATACENTER +
UBUNTU_SERVER_18_04_LTS +
DEBIAN_9 +*+ _default value_ +*+ +
DEBIAN_10 +
CENTOS_8_1 +
OPENSUSE_LEAP_15_1 +
SLES_15_SP1 +
REDHAT_RHEL_8_2 +
ORACLE_LINUX_8_1
|WINDOWS_DESKTOP_10_20H1_PRO +
WINDOWS_SERVER_2019_DATACENTER +
WINDOWS_SERVER_2019_DATACENTER_WITH_CONTAINERS +
WINDOWS_SERVER_2016_DATACENTER +
WINDOWS_SERVER_2012_R2_DATACENTER
|===
......@@ -890,13 +944,14 @@ Note that this user custom script will be run as root/admin user.
The following fields are the optional parameters of the Azure Billing Configuration section. The aim of this section is to configure the automatic cloud cost estimator. It is done by considering all the Azure resources related to your reservation (virtual machines, disks, etc.). This mechanism relies on the Azure Resource Usage and RateCard APIs (https://docs.microsoft.com/en-us/azure/cost-management-billing/manage/usage-rate-card-overview).
- *resourceUsageRefreshFreqInMin:* Periodical resource usage retrieving delay in min. The default value is 30.
- *rateCardRefreshFreqInMin:* Periodical rate card retrieving delay in min. The default value is 30.
- *offerId:* The Offer ID parameter consists of the "MS-AZR-" prefix, plus the Offer ID number. The default value is "MS-AZR-0003p" (Pay-As-You-Go offer).
- *currency:* The currency in which the resource rates need to be provided. The default value is "USD".
- *locale:* The culture in which the resource metadata needs to be localized. The default value is "en-US".
- *regionInfo:* The 2 letter ISO code where the offer was purchased. The default value is "US".
- *maxBudget:* Your max budget for the Azure resources related to the node source. Also used to compute your global cost in % of your budget. The default value is 50.
- *enableBilling:* Enable billing information (_true_/_false_). If _true_, the following parameters will be considered. The default value is _false_.
- *resourceUsageRefreshFreqInMin:* Periodical resource usage retrieving delay in min. The default value is _30_.
- *rateCardRefreshFreqInMin:* Periodical rate card retrieving delay in min. The default value is _30_.
- *offerId:* The Offer ID parameter consists of the "MS-AZR-" prefix, plus the Offer ID number. The default value is _MS-AZR-0003p_ (Pay-As-You-Go offer).
- *currency:* The currency in which the resource rates need to be provided. The default value is _USD_.
- *locale:* The culture in which the resource metadata needs to be localized. The default value is _en-US_.
- *regionInfo:* The two-letter ISO code where the offer was purchased. The default value is _US_.
- *maxBudget:* Your max budget for the Azure resources related to the node source. Also used to compute your global cost in % of your budget. The default value is _50_.
As you can see, these parameters provide a lot of flexibility to configure your infrastructure. When creating your Azure Scale Set node source, the infrastructure should be coupled with a Dynamic Policy. This Policy will additionally define scalability parameters such as limits on the number of deployed nodes or the minimum idle time before a node can be deleted (to optimize node utilization).
......
apiVersion: v1
data:
# Public Kubernetes Cluster IP
HOST_ADDRESS: 127.0.0.1
# Protocol http or https to use to access ProActive web portals (default: http)
PROTOCOL: http
# Port to use to access ProActive web portals (default: 8080)
PORT: "8080"
# Port to use for PAMR communication (default: 33647)
PAMR_PORT: "33647"
# DB used by ProActive (default: HSQLDB)
DB_TYPE: default
# ProActive DB credentials
DB_CATALOG_PASS: changeme
DB_NOTIFICATION_PASS: changeme
DB_PCA_PASS: changeme
DB_RM_PASS: changeme
DB_SCHEDULER_PASS: changeme
# Static Node Source Name
STATIC_NS_NAME: Local-Linux-Nodes
# Number of Static ProActive Nodes to start (default: 4)
STATIC_NS_WORKER_NODES: "4"
# Set up a Dynamic Kubernetes Node Source (default: false)
DYNAMIC_NS: "false"
# Dynamic Node Source Name
DYNAMIC_NS_NAME: Dynamic-Kubernetes-Nodes
# Minimum Dynamic Kubernetes Nodes (default: 0)
DYNAMIC_NS_MIN_NODES: "0"
# Maximum Dynamic Kubernetes Nodes (default: 15)
DYNAMIC_NS_MAX_NODES: "15"
# ProActive Admin Password
PROACTIVE_ADMIN_PASSWORD: changeme
# User starting the ProActive server (default: activeeon/activeeon)
UID: "1000"
GID: "1000"
USER_NAME: activeeon
GROUP_NAME: activeeon
# Number of days before jobs cleanup (default: 30)
JOB_CLEANUP_DAYS: "30"
kind: ConfigMap
metadata:
name: env-config-g8cf4cd4d8
---
apiVersion: v1
data:
kube.config: |
<Cluster config>
kind: Secret
metadata:
name: cluster-config-2cd457dbmc
type: Opaque
---
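# LoadBalancer Service exposing the scheduler's PAMR port (33647) to ProActive nodes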
apiVersion: v1
kind: Service
metadata:
name: proactive-scheduler-service
spec:
ports:
- name: pamr
port: 33647
protocol: TCP
selector:
app: proactive-scheduler
type: LoadBalancer
---
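# LoadBalancer Service exposing the ProActive web portals (HTTP, port 8080) on the cluster IP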
apiVersion: v1
kind: Service
metadata:
name: proactive-scheduler-service-web
spec:
externalIPs:
- <Cluster IP>
ports:
- name: http
port: 8080
protocol: TCP
targetPort: 8080
selector:
app: proactive-scheduler
type: LoadBalancer
---
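# hostPath PersistentVolumes backing node and scheduler data ("default" and "previous" versions)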
apiVersion: v1
kind: PersistentVolume
metadata:
labels:
type: local
name: default-node-pv-volume
spec:
accessModes:
- ReadWriteOnce
capacity:
storage: 10Gi
hostPath:
path: /opt/proactive/node/default
storageClassName: node-default
---
apiVersion: v1
kind: PersistentVolume
metadata:
labels:
type: local
name: default-scheduler-pv-volume
spec:
accessModes:
- ReadWriteOnce
capacity:
storage: 10Gi
hostPath:
path: /opt/proactive/server/default
storageClassName: scheduler-default
---
apiVersion: v1
kind: PersistentVolume
metadata:
labels:
type: local
name: previous-node-pv-volume
spec:
accessModes:
- ReadWriteOnce
capacity:
storage: 10Gi
hostPath:
path: /opt/proactive/node/previous
storageClassName: node-previous
---
apiVersion: v1
kind: PersistentVolume
metadata:
labels:
type: local
name: previous-scheduler-pv-volume
spec:
accessModes:
- ReadWriteOnce
capacity:
storage: 10Gi
hostPath:
path: /opt/proactive/server/previous
storageClassName: scheduler-previous
---
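# PersistentVolumeClaims requesting storage from the matching volumes above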
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: default-node-pv-claim
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 3Gi
storageClassName: node-default
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: default-scheduler-pv-claim
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 3Gi
storageClassName: scheduler-default
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: previous-node-pv-claim
spec:
accessModes:
- ReadWriteOnce
resources:
<