Unverified Commit b8c44dd2 authored by Fabien Viale's avatar Fabien Viale Committed by GitHub
Browse files

Merge pull request #812 from fviale/master

Add tenant documentation
parents e7dbedc1 796b2e8f
......@@ -13,8 +13,8 @@ buildscript {
classpath 'de.undercouch:gradle-download-task:3.1.2'
classpath 'org.asciidoctor:asciidoctor-gradle-plugin:1.5.9.2'
classpath 'xalan:xalan:2.7.2'
classpath 'gradle.plugin.org.aim42:htmlSanityCheck:1.1.3'
classpath 'com.github.jk1:gradle-license-report:1.7'
classpath 'org.aim42.htmlSanityCheck:org.aim42.htmlSanityCheck.gradle.plugin:1.1.6'
classpath 'com.github.jk1:gradle-license-report:1.7'
}
}
......
......@@ -1599,6 +1599,7 @@ To get more information about the parameters of the service, please check the se
. You can now access the AutoFeat page by clicking on the endpoint `AutoFeat`, as shown in the image below.
.AutoFeat endpoint
image::AutoFeat_endpoint.png[align=center]
You will be redirected to the AutoFeat page, which initially contains three tabs that we describe in the following sections.
......@@ -1623,7 +1624,7 @@ image::AutoFeat_column_summaries.png[align=center]
=== Data Preprocessing
A preview of the data is displayed in the *Data Preprocessing* tab as follows.
[[_Data_Preprocessing]]
.Data Preprocessing
image::AutoFeat_edit_column_names_and_types.png["Data Preprocessing",align=center]
It is possible to change column information. These changes can include:
......@@ -1638,7 +1639,7 @@ It is possible to change a column information. These changes can include:
- _Coding Method_: The encoding method used for converting categorical data values into numerical values. The value is set to *Auto* by default, in which case the best-suited method for encoding the categorical feature is automatically identified. The data scientist still has the ability to override every decision and select another encoding method from the drop-down menu. AutoFeat supports different methods such as *Label*, *OneHot*, *Dummy*, *Binary*, *Base N*, *Hash* and *Target*. Some of these methods require specifying additional encoding parameters, which vary depending on the selected method (e.g., the base and the number of components for Base N and Hash, respectively, and the target column for the Target encoding method). Some of these values are set by default if no values are specified by the user.
[[_Data_Preprocessing_Encoding]]
.Data Preprocessing (encoding parameters)
image::AutoFeat_edit_column_names_and_types_encoding_parameters.png["Data Preprocessing",align=center]
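To illustrate what these encoding methods do, the sketch below reproduces label and one-hot encoding with plain pandas. The column names and data are hypothetical; this is not AutoFeat's internal implementation, only a minimal example of the two simplest methods listed above.

```python
import pandas as pd

# Hypothetical categorical column; AutoFeat would detect its type automatically.
df = pd.DataFrame({"color": ["red", "green", "blue", "green"]})

# Label encoding: map each category to an integer code.
df["color_label"] = df["color"].astype("category").cat.codes

# One-hot encoding: one binary column per category.
one_hot = pd.get_dummies(df["color"], prefix="color")
df = pd.concat([df, one_hot], axis=1)

print(df.columns.tolist())
```

Methods such as Base N, Hash, or Target encoding follow the same idea but need the extra parameters mentioned above (base, number of components, target column).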
It is also possible to perform the following actions on the dataset:
......@@ -1657,7 +1658,7 @@ This page displays the data encoding results based on the selected parameters. A
The user can also download the results as a CSV file by clicking on the *Download* button.
[[_Encoded_data]]
.Encoded data
image::AutoFeat_encoded_data.png[align=center]
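Outside the portal, the same kind of encoded table could be exported with pandas. The sketch below is purely illustrative (the data is hypothetical, and this is not how the Download button is implemented); it only shows the CSV round trip.

```python
import os
import tempfile

import pandas as pd

# Hypothetical encoded output, as displayed on the Encoded data tab.
encoded = pd.DataFrame({"color_label": [2, 1, 0], "target": [1, 0, 1]})

# Equivalent of the Download button: write the results as a CSV file.
path = os.path.join(tempfile.gettempdir(), "encoded_data.csv")
encoded.to_csv(path, index=False)

# Reading it back yields the same table shape.
print(pd.read_csv(path).shape)
```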
=== ML Pipeline Example
......@@ -1701,7 +1702,7 @@ More advanced search options (_highlighted in advanced search hints_) could be u
Now you can hit the search button to request jobs from the scheduler database according to the provided filter values. The search bar at the top shows a summary of the active search filters.
[[_JA-search-png]]
.JA-search
image::JA-search.png[align=center]
==== Execution Metrics
......@@ -1719,13 +1720,15 @@ image::JA-metrics.png[align=center]
Job Analytics includes three types of charts:
- *Job duration chart:* This chart shows durations per job. The x-axis shows the job ID and the y-axis shows the job duration. Hovering over the lines will also display the same information as a tooltip (see screenshot below). Using the duration chart will eventually help the users to identify any abnormal performance behaviour among several workflow executions.
[[_JA-duration]]
.JA duration
image::JA-duration.png[align=center]
- *Job variables chart:* This chart is intended to show all variable values of selected jobs. It represents the evolution chart for all numeric-only variables of the selected jobs. The chart provides the ability to hide or show specific input variables by clicking on the variable name in the legend, as shown in the figure below.
- *Job results chart:* This chart is intended to show all result values of selected jobs. It represents the evolution chart for all numeric-only results of the selected jobs. The chart also provides the ability to hide or show specific results by clicking on the variable name in the legend, as shown in the figure below.
.JA results chart
image::JA-chart.png[align=center]
All charts provide advanced features such as "maximize" and "enlarge" to better visualize the results, and "move" to customize the dashboard layout (see the top left side of the charts). All of them provide the hovering feature described previously, as well as two display types: line and bar charts. Switching from one to the other is done through a toggle button located at the top right of the chart; the same applies to showing/hiding variables and results.
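As an illustration of how the duration chart helps spot abnormal performance behaviour, the following sketch flags jobs whose duration deviates strongly from the typical value. The job IDs, durations, and the "twice the median" threshold are hypothetical assumptions, not part of Job Analytics itself.

```python
from statistics import median

# Hypothetical (job_id, duration_in_seconds) pairs, as plotted on the duration chart.
durations = [(101, 42.0), (102, 40.5), (103, 41.2), (104, 95.0), (105, 39.8)]

# Use the median as a robust baseline for the typical job duration.
med = median(d for _, d in durations)

# Flag jobs that run more than twice as long as the typical execution.
abnormal = [job_id for job_id, d in durations if d > 2 * med]
print(abnormal)
```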
......@@ -1745,6 +1748,7 @@ We note also that clicking on the issue types and charts described in the previo
NOTE: The dashboard layout and search preferences are saved in the browser cache, so that users can retrieve their last dashboard and search settings.
.JA table
image::JA-table.png[align=center]
== ProActive Jupyter Kernel
......@@ -1755,6 +1759,7 @@ scheduler and constructs tasks and workflows to execute them on the fly.
With this interface, users can run their code locally and test it using a native Python kernel, then, by simply switching to
the ProActive kernel, run it on remote public or private infrastructures without having to modify the code. See the example below:
.Direct execution from Jupyter with ActiveEon Kernel
image::direct_execution_from_jupyter.png[Direct execution from Jupyter with ActiveEon Kernel]
=== Installation
......@@ -2645,7 +2650,7 @@ NOTE: Instead of training a model from scratch, a pre-trained sentiment analysis
*Train_Image_Classification:* trains a model to classify images from ants and bees.
*Train_Image_Segmentation:* trains a segmentation model using SegNet network on http://www.robots.ox.ac.uk/~vgg/data/pets/[Oxford-IIIT Pet Dataset^].
*Train_Image_Segmentation:* trains a segmentation model using SegNet network on https://www.robots.ox.ac.uk/~vgg/data/pets/[Oxford-IIIT Pet Dataset^].
*Train_Image_Object_Detection:* trains an object detection model using YOLOv3 on the COCO dataset proposed by Microsoft Research.
......@@ -2661,7 +2666,7 @@ This section presents custom AI workflows using tasks available on the `deep-lea
*Fake_Celebrity_Faces_Generation:* generates a wild diversity of fake faces using a GAN model that was trained based on thousands of real celebrity photos. The pre-trained GAN model is available on this https://s3.eu-west-2.amazonaws.com/activeeon-public/models/Epoch+018.pt[link^].
*Image_Segmentation:* predicts a segmentation model using SegNet network on http://www.robots.ox.ac.uk/~vgg/data/pets/[Oxford-IIIT Pet Dataset^]. The pre-trained image segmentation model is available on this https://s3.eu-west-2.amazonaws.com/activeeon-public/models/model_segnet.zip[link^].
*Image_Segmentation:* predicts a segmentation model using SegNet network on https://www.robots.ox.ac.uk/~vgg/data/pets/[Oxford-IIIT Pet Dataset^]. The pre-trained image segmentation model is available on this https://s3.eu-west-2.amazonaws.com/activeeon-public/models/model_segnet.zip[link^].
*Image_Object_Detection:* detects objects using a pre-trained YOLOv3 model on COCO dataset proposed by Microsoft Research. The pre-trained model is available on this https://s3.eu-west-2.amazonaws.com/activeeon-public/models/yolo3_coco.zip[link^].
......@@ -4516,7 +4521,7 @@ NOTE: PyTorch is used to build the model architecture based on https://github.co
| Boolean (default=True)
|===
NOTE: The default parameters of the YOLO network were set for the COCO dataset (http://cocodataset.org/#home). If you'd like to use another dataset, you probably need to change the default parameters.
NOTE: The default parameters of the YOLO network were set for the COCO dataset (https://cocodataset.org/#home). If you'd like to use another dataset, you probably need to change the default parameters.
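As a sketch of adapting the defaults to another dataset, the snippet below overrides a hypothetical parameter dictionary. The parameter names (`num_classes`, `input_size`, `conf_threshold`) are illustrative assumptions, not the actual task variable names used by the workflow.

```python
# Hypothetical YOLO defaults tuned for the COCO dataset (80 classes).
YOLO_DEFAULTS = {
    "num_classes": 80,
    "input_size": (416, 416),
    "conf_threshold": 0.5,
}

def configure_yolo(overrides=None):
    """Return YOLO parameters, applying dataset-specific overrides on top of the defaults."""
    params = dict(YOLO_DEFAULTS)
    params.update(overrides or {})
    return params

# For a custom 3-class dataset, only the class count needs to change.
custom = configure_yolo({"num_classes": 3})
print(custom["num_classes"])
```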
==== Text Classification
......
*ProActive Service Automation (PSA)* allows to automate service deployment, together with their life-cycle management. Services are instantiated by workflows (executed as a Job by the Scheduler), and related workflows allow to move instances from a state to another one.
*ProActive Service Automation (PSA)* allows automating service deployment, together with life-cycle management. Services are instantiated by workflows (executed as a Job by the Scheduler), and related workflows allow moving instances from one state to another.
At any point in time, each Service Instance has a specific State (RUNNING, ERROR, FINISHED, etc.).
Attached to each Service Instance, PSA service stores several information such as:
Attached to each Service Instance, PSA service stores some information such as:
Service Instance Id, Service Id, Service Instance State, the ordered list of Jobs executed for the Service, a set of variables with their values (a map that includes for instance the service endpoint), etc.
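The instance metadata listed above can be pictured as a simple record. The sketch below is an illustrative data structure only, not PSA's actual internal model; field names are assumptions derived from the list above.

```python
from dataclasses import dataclass, field

@dataclass
class ServiceInstance:
    """Illustrative view of the metadata PSA stores for each Service Instance."""
    instance_id: int
    service_id: str
    state: str                                     # e.g. RUNNING, ERROR, FINISHED
    job_ids: list = field(default_factory=list)    # ordered list of Jobs executed for the service
    variables: dict = field(default_factory=dict)  # includes, for instance, the service endpoint

inst = ServiceInstance(
    instance_id=42,
    service_id="my_service",
    state="RUNNING",
    job_ids=[101, 102],
    variables={"endpoint": "https://example.org/service"},
)
print(inst.state)
```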
The link:https://try.activeeon.com/tutorials/basic_service_creation/basic_service_creation.html[basic service creation tutorial, window="_blank"] and link:https://try.activeeon.com/tutorials/advanced_service_creation/advanced_service_creation.html[advanced service creation tutorial, window="_blank"] on link:https://try.activeeon.com[try.activeeon.com, window="_blank"]
The link:https://try.activeeon.com/tutorials/clearwater/clearwater.html[Create your own service tutorial, window="_blank"] on link:https://try.activeeon.com[try.activeeon.com, window="_blank"]
help you to build your own services.
\ No newline at end of file
......@@ -492,7 +492,7 @@ The service requires the following variables as input:
=== Storm
This service allows to deploy through ProActive Service Automation (PSA) Portal a cluster of Apache Storm stream processing system (http://storm.apache.org).
This service allows deploying, through the ProActive Service Automation (PSA) Portal, a cluster of the Apache Storm stream processing system (https://storm.apache.org).
The service is started using the following variables.
*Variables:*
......
......@@ -406,6 +406,15 @@ pa.scheduler.core.defaultloginfilename=config/authentication/login.cfg
# else, the path is absolute, so the path is directly interpreted
pa.scheduler.core.defaultgroupfilename=config/authentication/group.cfg
# Tenant file name for file authentication method
# If this file path is relative, the path is evaluated from the Scheduler dir (i.e. the application's root dir)
# with the variable defined below: pa.scheduler.home.
# else, the path is absolute, so the path is directly interpreted
pa.scheduler.core.defaulttenantfilename=config/authentication/tenant.cfg
# If enabled, filter jobs according to user tenant
pa.scheduler.core.tenant.filter=false
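Putting the two tenant properties introduced above together, enabling tenant-based job filtering with file authentication would look as follows (the file path is the default shown above; only the filter flag changes):

```properties
# Tenant file for file authentication, relative to pa.scheduler.home
pa.scheduler.core.defaulttenantfilename=config/authentication/tenant.cfg
# Filter jobs according to the user tenant
pa.scheduler.core.tenant.filter=true
```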
# Property that defines the method that has to be used for logging users to the Scheduler
# It can be one of the following values:
# - "SchedulerFileLoginMethod" to use file login and group management
......@@ -479,7 +488,7 @@ pa.scheduler.db.fetch.batch_size=50
pa.scheduler.global.variables.configuration=config/scheduler/global_variables.xml
# refresh period, in minutes, for the global variables configuration
pa.scheduler.global.variables.refresh=10
pa.scheduler.global.variables.refresh=5
#-------------------------------------------------------
#---------- EMAIL NOTIFICATION PROPERTIES ------------
......
File suppressed by a .gitattributes entry or the file's encoding is unsupported.