Commit 357a1aed authored by Yann Mombrun's avatar Yann Mombrun

Adding a description of the OpenAXES 1.0.0 content as part of the README file.

parent 393e2bec
@@ -50,6 +50,7 @@ In order to install Open AXES on your system:
3) Wait between 10 minutes and 3 hours, depending on your Internet bandwidth, the power of your computer and the number of activated options.
After the installation, start OpenAXES in a terminal using the WebLab Platform Launcher:
> ./ start
@@ -60,7 +61,8 @@ You can then access OpenAXES locally on http://localhost/openaxes using the logi
You can stop OpenAXES using the command:
> ./ stop
If you want to process new data, move the video files into the data/toIndex folder. Metadata can be provided alongside each video.
The folder data/toIndex-sample contains some video samples with associated metadata that can be used as bootstrap samples.
It is best to do this before starting the OpenAXES system, to prevent it from polling an unfinished file.
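Staging a new video can be sketched as below. Only the data/toIndex path comes from this README; the file names are made up, and the copy-then-rename step is a common pattern (not documented here) for avoiding the unfinished-file problem even when OpenAXES is already running:

```shell
#!/bin/sh
# Create the folder polled by OpenAXES (path taken from the README).
mkdir -p data/toIndex

# Simulate an incoming video and its optional metadata sidecar
# (in real use these would be your actual files).
printf 'fake video payload' > interview.mp4
printf 'title=Interview\nlanguage=en\n' > interview.properties

# Copy under a temporary name first, then rename. The rename is
# atomic on the same filesystem, so the poller never sees a
# half-written video file.
cp interview.mp4 data/toIndex/.interview.mp4.part
mv data/toIndex/.interview.mp4.part data/toIndex/interview.mp4
cp interview.properties data/toIndex/
```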
@@ -69,8 +71,51 @@ The process might last around 5 times the duration of the video.
The Hawtio Console available at http://localhost:8282/hawtio (login/pass: weblab/weblab) can be used to monitor the process.
If you want to expose your videos on a specific IP (instead of localhost),
edit the URL_PREFIX line in conf/limas/ to "http://<your-ip>/" and restart jetty:
> ./ jetty restart
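The exact configuration file name is not shown above, so the sketch below uses a made-up conf/limas/service.properties and a documentation IP address purely to illustrate the URL_PREFIX edit:

```shell
#!/bin/sh
# Illustrative only: the real file lives somewhere under conf/limas/.
mkdir -p conf/limas
printf 'URL_PREFIX=http://localhost/\n' > conf/limas/service.properties

# Replace localhost with the machine's public IP on the URL_PREFIX line.
MY_IP=192.0.2.10   # example address; substitute your own
sed -i "s#^URL_PREFIX=.*#URL_PREFIX=http://${MY_IP}/#" conf/limas/service.properties

cat conf/limas/service.properties
```

Note that `sed -i` as written assumes GNU sed; on BSD/macOS sed the in-place flag takes an argument (`sed -i ''`).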
Known issues and evolution requests at
Version 1.0.0 of Open AXES is available for download at
Open AXES is an output of the AXES European project.
The goal of AXES ( is to develop tools that provide various types of users with new, engaging ways to interact with audiovisual libraries,
helping them to discover, browse, search and enrich video archives. Based on the existing OW2 WebLab ( integration platform for multimedia processing,
the "Open AXES" solution gathers innovative audiovisual content analysis technologies (shot and keyframe detection, image classification, speech transcription, large-scale indexing, etc.)
as well as an ergonomic interface for navigating the video archive.
This version 1.0.0 contains:
- a folder gathering service, provided by Airbus Defence and Space, which collects video files and metadata (in various formats: JSON, properties, RDF)
from a set of relevant folders;
- a shot-detection and keyframe-extraction service, provided by Technicolor, which selects keyframes representative of each shot of the video
(so that image-based components do not have to analyse every single frame or pick random ones); it is distributed as binaries and allowed for non-commercial use only;
- a video-normaliser service, provided by Airbus Defence and Space and based on FFmpeg, which converts the original video into formats appropriate for the other components and the UI;
- an image-classifier service, provided by the Katholieke Universiteit Leuven, in charge of classifying selected images over a set of more than 1000 classes;
it is distributed as binaries and allowed for non-commercial use only;
- an automated speech recognition service, provided by Airbus Defence and Space and based on CMU Sphinx, extracting the text from English audio speech;
- a named-entity detection service, provided by Airbus Defence and Space and based on the GATE platform, which extracts organisations, persons and locations from the speech transcripts;
- a text and metadata indexing service, named LIMAS and provided by the University of Twente, together with the search and fused-search interfaces;
it stores the results of the processing chain, acts as the backend server for the UI, and is in charge of calling the other search engines and fusing their results.
It is provided under an Apache 2 licence. The codebase is available at:;
- a near-duplicate image search service, provided by Airbus Defence and Space and based on the open source project Pastec;
- an on-the-fly visual analysis service, provided by the University of Oxford, which enables searching inside a database of images using words, without a predefined set of classifiers.
Upon request, it learns a model by gathering positive examples from well-known Web search engines and looks up similar images inside its own database. The codebase is available at under an Apache 2 licence;
- a thin-client web interface, provided by Dublin City University, which enables searching text and metadata to retrieve the indexed videos,
searching for near-duplicate images (either from indexed video keyframes or from any user-uploaded image), and searching for images matching a visual concept
(either using a pre-trained concept or by learning it on the fly). It supports various operations over the videos, such as virtual cutting, collection management
and social sharing (like/dislike). The codebase is available at under an Apache 2 licence;
- two technical components, also needed to let Visor and LIMAS work together. Jcpuvisor is the Java interface to Visor;
it is available at as open source under an Apache 2 licence. Imsearch-tools is available at under an Apache 2 licence. Please note, however, that this latter project may be subject to Google's Terms of Use,
as well as authoring and image rights, when gathering images from Google to send to Visor in order to learn its models.
Altogether, these components are packaged into an OW2 WebLab application. Unless specified otherwise, the licence that applies to the Open AXES code is the LGPL v2.1.
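The folder gathering service above accepts metadata sidecars in JSON, properties or RDF format. The expected schema is not documented in this README, so the following sketch uses entirely made-up field names, only to show what two of the accepted formats look like side by side:

```shell
#!/bin/sh
# Hypothetical metadata sidecars for one video; field names are
# illustrative, not the actual schema of the gathering service.
cat > meta.json <<'EOF'
{
  "title": "Interview",
  "language": "en",
  "source": "archive"
}
EOF

cat > meta.properties <<'EOF'
title=Interview
language=en
source=archive
EOF
```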