Commit bbc3c3d9 authored by Sebastian Miller's avatar Sebastian Miller

updated model_research README; another requests test CI attempt

parent 48246d43
......@@ -53,7 +53,9 @@ build:ensembler:
integration-tests:ensembler:
stage: integration-tests
-  image: docker/compose:1.29.2
+  image:
+    name: docker/compose:1.29.2
+    entrypoint: [""]
   #image: python:3.8-slim-buster
only:
- integration
......
# Forecasters Ensemblator
TODO: short project summary.
# Ensembling model research
This directory contains files associated with the ensembling model development process. The model is based on reinforcement learning.
### Content List
+ [1. Installation](#installation)
  + [1.1. Venv](#venv)
+ [2. Training Data](#data_sources)
+ [3. Models](#models)
+ [3.1. PPO](#ppo)
+ [3.2. MCTS](#mcts)
+ [3.3. DQN](#dqn)
+ [3.4. DDQN](#ddqn)
+ [3.5. SAC](#sac)
+ [4. Testing](#testing)
+ [5. Visualisation](#visualisation)
## Directory contents
- `src/env/`: implementation of the OpenAI Gym environment utilized by the model
- `src/experiments/`: code and configurations associated with model experiments
- `src/logging/`: code associated with logging experiment data to Neptune.ai
- `main.py`: program for running experiments from `src/experiments`
- `definitions.py`: utility constants, such as directory paths
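The environment in `src/env/` follows the OpenAI Gym interface. A minimal sketch of what such an ensembling environment might look like (the class, observation layout, and reward logic below are hypothetical illustrations, not the actual implementation, and plain Python stands in for `gym.Env` to keep the snippet self-contained):

```python
# Hypothetical sketch of a gym-style ensembling environment.
# The real implementation in src/env/ subclasses gym.Env; here the
# reset/step interface is mimicked with plain Python.
class EnsembleEnv:
    def __init__(self, predictions, targets):
        self.predictions = predictions  # per-step forecaster predictions
        self.targets = targets          # per-step ground-truth values
        self.t = 0

    def reset(self):
        self.t = 0
        return self.predictions[self.t]  # observation: current forecasts

    def step(self, weights):
        # Action: one weight per forecaster; the ensemble forecast is
        # the weighted average of the individual predictions.
        preds = self.predictions[self.t]
        total = sum(weights)
        ensemble = sum(w * p for w, p in zip(weights, preds)) / total
        # Reward: negative absolute error of the ensemble forecast.
        reward = -abs(ensemble - self.targets[self.t])
        self.t += 1
        done = self.t >= len(self.predictions)
        obs = None if done else self.predictions[self.t]
        return obs, reward, done, {}

env = EnsembleEnv(predictions=[[1.0, 3.0]], targets=[2.0])
obs = env.reset()
obs, reward, done, info = env.step([0.5, 0.5])  # ensemble = 2.0, reward = 0.0
```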
## 1. Installation
### 1.1. Venv
For background on creating a virtual environment, see: https://stackoverflow.com/questions/60719286/actions-for-creating-venv-in-python-and-clone-a-git-repo

In the project root directory:
```
python3 -m venv venv
source ./venv/bin/activate
pip3 install -r ./requirements.txt --no-cache-dir
```
## 2. Training Data
TODO: note that the training data is downloaded from an external source (we do not want to keep it in the GitHub repository) via a script from `src/data/`. Then describe how the data lands in `data/external/` and what happens with it afterwards.
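The download step described above could be sketched as follows (the function names, URL handling, and exact behaviour are assumptions for illustration; the actual script lives in `src/data/`):

```python
import os
import urllib.request

# Hypothetical sketch of the data-fetching step; the real script in
# src/data/ may differ, and the URL is supplied by the caller.
def prepare_target_path(root, filename="predictions.csv"):
    # data/external/ is excluded from git, so it may not exist yet.
    target_dir = os.path.join(root, "data", "external")
    os.makedirs(target_dir, exist_ok=True)
    return os.path.join(target_dir, filename)

def fetch_training_data(root, url):
    target = prepare_target_path(root)
    urllib.request.urlretrieve(url, target)  # download the CSV
    return target
```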
## Experiment setup
1. Create a `neptune_key.yaml` file with a single line: `API_KEY: <your_neptune_ai_api_token>`.
2. Insert a CSV file with training data, named `predictions.csv`, into the `data/external/` directory.
3. Choose which experiments you wish to run by uncommenting appropriate lines in the `main.py` program.
4. Configure the experiments chosen above by modifying the `.yaml` files associated with the experiments, e.g. `src/experiments/ppo/configs/continuous_reward.yaml`.
For now, a sample data file from Ania is available at https://github.com/sebek-m/ensemblator-forecasterow/blob/7d49f819102dfb8751c9d97f658f936d77df6128/data/predictions.csv
It has to be moved to `data/external/predictions.csv` (the `data/` and `external/` directories have to be created first, since they are excluded from the repository).
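Since `neptune_key.yaml` is a single-line YAML mapping, it can be read without extra dependencies. A minimal sketch (the loading logic is an assumption; the real code in `src/logging/` may use a YAML library instead):

```python
# Hypothetical sketch of reading the Neptune API token from
# neptune_key.yaml; src/logging/ may load it differently (e.g. PyYAML).
def read_neptune_key(path="neptune_key.yaml"):
    with open(path) as f:
        for line in f:
            key, _, value = line.partition(":")
            if key.strip() == "API_KEY":
                return value.strip()
    raise KeyError("API_KEY not found in " + path)
```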
## 3. Models
TODO: explain why these algorithms were chosen and note that their implementations live in `src/models/`.
### 3.1. PPO
TODO: describe PPO here (seba)
### 3.2. MCTS
TODO: describe Monte Carlo tree search here (franek)
### 3.3. DQN
TODO: describe DQN here (mikolaj)
### 3.4. DDQN
TODO: describe DDQN here (mikolaj)
### 3.5. SAC
TODO: describe Soft Actor-Critic here (stasiek)
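As an illustration of how DQN and DDQN differ, the bootstrap targets can be sketched in plain Python (this is the generic textbook formulation, not this repository's implementation):

```python
# Generic DQN vs. Double DQN (DDQN) bootstrap targets for a single
# transition (reward r, discount gamma, next-state Q-value lists).
def dqn_target(r, gamma, q_target_next, done):
    # DQN: max over the target network's own estimates.
    return r if done else r + gamma * max(q_target_next)

def ddqn_target(r, gamma, q_online_next, q_target_next, done):
    # DDQN: the online network picks the action, the target network
    # evaluates it, which reduces overestimation bias.
    if done:
        return r
    a = max(range(len(q_online_next)), key=lambda i: q_online_next[i])
    return r + gamma * q_target_next[a]
```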
## 4. Testing
TODO: describe how to run the tests and note that they live in `tests/` (e.g. with pytest or unittest).
In the project root:
```
pytest -s
```
## Experiment startup
Simply run the `main.py` program via:
```
python3 main.py
```
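A minimal example of what a test under `tests/` might look like (the file name, helper, and test function below are hypothetical, just to illustrate pytest's convention of plain `assert` statements in `test_*` functions):

```python
# tests/test_example.py -- hypothetical; pytest discovers functions
# named test_* and treats a failing assert as a test failure.

def weighted_ensemble(predictions, weights):
    # Helper under test: weighted average of forecaster predictions.
    total = sum(weights)
    return sum(w * p for w, p in zip(weights, predictions)) / total

def test_weighted_ensemble_equal_weights():
    assert weighted_ensemble([1.0, 3.0], [0.5, 0.5]) == 2.0
```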
## 5. Visualisation
TODO: describe the Neptune.ai setup in more detail.
Experiment data is automatically logged to Neptune.ai.