1. 03 Mar, 2021 7 commits
    • Improve usage of native ByteBuf · 3b0e884b
      Frederic Bregier authored
      Improve performance by using a native ByteBuf instead of a
      wrapped byte array.
      Include the benchmark as integration tests.
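
      A minimal sketch of the difference, assuming Netty's standard allocator API
      (the exact call sites in Waarp are not shown here):

      .. code-block:: java

        import io.netty.buffer.ByteBuf;
        import io.netty.buffer.PooledByteBufAllocator;
        import io.netty.buffer.Unpooled;

        public final class ByteBufUsage {
          public static void main(String[] args) {
            byte[] payload = new byte[8192];

            // Previous approach: wrap an existing byte array (a heap-backed view).
            ByteBuf wrapped = Unpooled.wrappedBuffer(payload);
            wrapped.release();

            // Native approach: ask the pooled allocator for an I/O buffer
            // (direct memory when available), then copy the payload into it.
            ByteBuf ioBuf = PooledByteBufAllocator.DEFAULT.ioBuffer(payload.length);
            ioBuf.writeBytes(payload);
            ioBuf.release();
          }
        }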
    • Fix issue with some Windows FTP clients sending an OPTS command before USER · 5174335a
      Frederic Bregier authored
      Some FTP clients, in particular under Windows, immediately send an OPTS command right after the connection.
      This is not intended by the RFC, but it is now allowed.

      It was preventing Windows clients from connecting to the FTP server.
    • Increasing concurrency limit to 10,000 · 6fc05ebe
      Frederic Bregier authored
      The RUNLIMIT maximum value is raised to 10,000, while the default is still 1,000.

      This allows large configurations (more than 4 cores) to increase the concurrency capacity.
      Note that the specified number (or the default value) allows up to roughly twice as many
      active or pending threads in the JVM.

      In practice, a JVM is usually limited to about 1,000 threads per core.
      So a 2-core server running Waarp can use up to 2,000 threads, hence a RUNLIMIT of 1,000.
      A 4-core server running Waarp can use up to 4,000 threads, hence a RUNLIMIT of 2,000.
      The new maximum value (10,000) allows up to 20,000 threads, suitable for roughly a 20-core server.
    • Improve file canRead method · 64e35e9f
      Frederic Bregier authored
      Since the filesystem is sometimes out of sync with Java, this method works
      around the issue by retrying several times when the first check fails
      (for at most 30 ms).
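
      A minimal sketch of this kind of retry, using plain java.io.File and an illustrative
      30 ms budget (the method name here is hypothetical, not the actual Waarp API):

      .. code-block:: java

        import java.io.File;

        public final class FileChecks {
          /** Retry File.canRead() for up to ~30 ms to absorb filesystem lag. */
          public static boolean canReadWithRetry(File file) {
            final long deadline = System.currentTimeMillis() + 30;
            while (true) {
              if (file.canRead()) {
                return true;
              }
              if (System.currentTimeMillis() >= deadline) {
                return false;
              }
              try {
                Thread.sleep(5); // small pause before checking again
              } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return file.canRead();
              }
            }
          }
        }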
    • Fix POM dependencies · 029c101e
      Frederic Bregier authored
    • Version 3.5.2 · a8bdc983
      Frederic Bregier authored
      Fix and optimize how network connections are closed when a local channel ends.
    • Improve network closing when necessary · 3e567f8f
      Frederic Bregier authored
      Reason:
      While network connections were closed normally, they were sometimes closed too early,
      in the sense that another "local" connection (local channel) could still be using them.
      
      Changes:
      Improve the check on whether a network connection can be reused, and centralize this
      process so that all local closings use the same algorithm and code.

      This change keeps existing connections alive as much as possible when they can be
      reused soon. If not, the connection is closed after a certain delay (timeout).
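
      A minimal sketch of this delayed-close idea, assuming a hypothetical connection class
      and an illustrative 5-second timeout (not the actual Waarp implementation):

      .. code-block:: java

        import java.util.concurrent.Executors;
        import java.util.concurrent.ScheduledExecutorService;
        import java.util.concurrent.ScheduledFuture;
        import java.util.concurrent.TimeUnit;

        final class ReusableNetworkConnection {
          private static final long CLOSE_DELAY_MS = 5_000; // illustrative timeout
          private final ScheduledExecutorService scheduler =
              Executors.newSingleThreadScheduledExecutor();
          private ScheduledFuture<?> pendingClose;

          /** Called when the last local channel using this connection ends. */
          synchronized void onLocalChannelClosed() {
            pendingClose = scheduler.schedule(this::reallyClose,
                CLOSE_DELAY_MS, TimeUnit.MILLISECONDS);
          }

          /** Called when a new local channel wants to reuse this connection. */
          synchronized boolean tryReuse() {
            if (pendingClose != null && pendingClose.cancel(false)) {
              pendingClose = null;
              return true; // reused before the timeout fired
            }
            return pendingClose == null; // false if the close already ran
          }

          private synchronized void reallyClose() {
            // close the underlying network channel here
          }
        }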
      
      Result:
      Now loop-based IT tests and direct multi-transfers give good performance with no more
      issues.
      - Recv Direct with 3,000 files of 360 KB gives an average of 38 transfers per second,
        which is not optimal since, even though it is multi-threaded, there is a latency on
        thread creation and execution
      - Loop transfers (from one server to another and back, repeatedly) of 700 concurrent files
        of 360 KB on average give an average of 75 transfers per second.
      
      This last result is a good starting point for a real benchmark (211 Mbps, roughly the disk speed).
      
      Conditions of the benchmarks:
      - 2 and 4 cores, 8 GB RAM, 200 Mbps disk capacity
      - 2 Waarp servers and 1 PostgreSQL database (Docker) started on the same server
      
      Comparisons on loop transfers:
      - v3.5.1 : 4 cores gives about 30 transfers/second
      - v3.5.2 : 2 cores gives about 45 transfers/second
      - v3.5.2 : 4 cores gives about 75 transfers/second (roughly the disk speed), 4500/minute
      
      One can expect non-linear scalability when scaling the server vertically. However, note that
      these benchmarks ran on a single server, so:
      - there is almost no latency between services (the 2 Waarp servers and the database)
      - but there are still 3 demanding services on the same 4 cores (2 Waarp servers, each at about 45%
        CPU, and 1 PostgreSQL database at about 10% CPU with about 2 Mbps of SQL network
        traffic)
      
      A setup with 4 cores and 8 GB for the JVM running only one Waarp server, with the
      PostgreSQL database on a separate host and, of course, the second Waarp server with its
      database somewhere else, should give better performance, about 130 transfers/second
      (7,800 transfers/minute).
      
      Scalability can be both vertical (mainly increasing CPU, but also memory) and
      horizontal (using a load balancer, a shared database and a shared efficient
      filesystem).
  2. 30 Sep, 2020 11 commits
    • Cleaning code · 2b6b4f2f
      Frederic Bregier authored
    • e1309baa
    • Mutualize JRE6 and JRE11 build · 20d8ad6c
      Frederic Bregier authored
      This commit allows building JRE6, JRE8 and JRE11 native packages using one of three profiles: jre6, jre8 or jre11.
      - `mvn -P jre6 clean install`
      - `mvn -P jre8 clean install`
      - `mvn -P jre11 clean install`
      
      Multiple corrections were made to achieve full JRE8 and JRE11 compatibility and to improve test stability.
      Notably, the MySQL driver for Java 8 moved some packages that were not compatible with
      the existing Java 6 code. The code is now compatible with all versions from 6 to 11.

      Code cleanup in addition.
    • Fix Spooled and XMLDAO tests · 50a75694
      Frederic Bregier authored
    • Optimize DB access using cache · 4a66f3f2
      Frederic Bregier authored
      Cache DB access for Business, Host and Rule only.
      Note that this is deactivated when the server runs in "MultipleMonitor" mode (since the cache might not stay coherent).

      Also improve TaskRunner for the specific case of rank updates (only the rank and stop date are updated, not all fields).
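
      A minimal sketch of such a read-through cache, assuming hypothetical class and loader
      names rather than the actual Waarp DAO classes:

      .. code-block:: java

        import java.util.Map;
        import java.util.concurrent.ConcurrentHashMap;
        import java.util.function.Function;

        /** Read-through cache keyed by identifier, e.g. for Host or Rule rows. */
        final class DbCache<K, V> {
          private final Map<K, V> cache = new ConcurrentHashMap<>();
          private final Function<K, V> loader; // loads from the database on a miss
          private final boolean enabled;       // false in "MultipleMonitor" mode

          DbCache(Function<K, V> loader, boolean enabled) {
            this.loader = loader;
            this.enabled = enabled;
          }

          V get(K key) {
            if (!enabled) {
              return loader.apply(key); // always hit the database
            }
            return cache.computeIfAbsent(key, loader);
          }

          void invalidate(K key) {
            cache.remove(key); // call after an update or delete
          }
        }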
    • Upgrade dependencies and fix logs · 0b3f63c7
      Frederic Bregier authored
    • Fix Spooled task · 547324ea
      Frederic Bregier authored
      - Improve tests
      - Improve REST V2 GET /filemonitors
      - Fix minor bugs
    • Compatibility from JDK 1.6 to 1.11 · 1e957fcc
      Frederic Bregier authored
      Due to API removals and deprecations up to JDK 11, some corrections were needed to allow Waarp
      to run from JDK 1.6 up to 11.
    • Documentation · 3c75add6
      Frederic Bregier authored
    • Improve Digest computations · 0ffbeb9d
      Frederic Bregier authored
      **Digest by packet**
      
      By default, R66 uses the MD5 hashing algorithm. For a more recent algorithm with
      fewer collisions, we recommend SHA-512 (7), which also performs better
      than SHA-256.
      
      Moreover, the per-block digest used to create a new Digest object for every block, while
      the same object could be reused for the whole transfer (reset for each block). This is
      fixed by allocating the Digest object only once.
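
      A minimal sketch of this reuse with the JDK MessageDigest API (the actual Waarp digest
      wrapper is not shown):

      .. code-block:: java

        import java.security.MessageDigest;
        import java.security.NoSuchAlgorithmException;

        final class BlockDigest {
          public static void main(String[] args) throws NoSuchAlgorithmException {
            // Allocate the digest once for the whole transfer.
            MessageDigest digest = MessageDigest.getInstance("SHA-512");

            byte[][] blocks = { new byte[65536], new byte[65536] }; // example blocks
            for (byte[] block : blocks) {
              digest.update(block);
              byte[] blockHash = digest.digest(); // also resets the digest for the next block
              // send blockHash along with the block...
            }
          }
        }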
      
      This only applies when the transfer uses a per-block digest algorithm (mode 3, 4, 7 or 8,
      respectively SENDMD5MODE, RECVMD5MODE, SENDMD5THROUGHMODE, RECVMD5THROUGHMODE).
      
      From a consistency point of view, a transfer should use an MD5 mode (with the digest of
      your choice; SHA-512 is quite affordable). However, this is not
      mandatory, and not using it brings about a 10 to 20% speed improvement.
      
      **Full Digest**
      Moreover, when both partners use a final global hash (globaldigest = true in the
      XML configuration file, the default), computed incrementally across all blocks,
      a local digest used to be computed by scanning the full file again.

      Now, if the global digests computed on both sides match, there is
      no longer any need to scan the file again.
      Finally, to disable this local digest computation (both the incremental one
      and the full scan at the end), the new XML configuration property localdigest
      can be set to false (true by default).
      
      From a consistency point of view, these local digest computations can be disabled
      entirely, with about a 20% speed improvement and no loss of
      consistency.
      A transfer should, however, keep the global digest
      computation, using SHA-512 (the same value as for the transfer mode).
      Disabling it as well improves speed by about 25%.
      
      These fixes bring about a 10 to 30% transfer speedup.
      
      **General recommendations**
      On a server with very limited resources, where all consistency
      checks can be ignored:
      - using a transfer mode without MD5 improves transfer speed by about
        10% to 20%.
      - disabling the final local check (`localdigest` to `False`) improves speed by about
        another 20% (since the optimization above is already in place).
      - disabling the global check (`globaldigest` to `False`) improves speed by about
        another 25%.
      
      So overall, speed can be improved by up to 50% (we were able to reach 110 MB/s, i.e. 880 Mbps).
      
      However, having no check at all during transfers is not recommended.
      We therefore recommend at least keeping the global digest active (`globaldigest` to `True`, or unset
      since it is the default value), which still brings about a 30% speed improvement (compared
      to all options active).
      
      **Other recommendations**
      - Java arguments: use `-Xms2048m -Xmx2048m`
      - XML configuration:
        - `serverthread` to the number of cores (default), or less to limit CPU usage
        - `clientthread` to 10 * serverthread (default), or less to limit CPU usage
        - *ignore* or set `usefastmd5` to `False` (default) (no longer more efficient)
        - `digest` to `7` (SHA-512, MD5 being the default) or, if you accept more collisions, `2` for MD5, which
          is noticeably more efficient (about 50%)
        - *ignore* or set `globaldigest` to `True` (default) (consistency check)
        - `localdigest` to `False` (`True` is the default) (optional consistency check; this fix already greatly
          limits its impact, however)
        - *ignore* or set `runlimit` to `1000` (default), or less to limit CPU usage
          (concurrency of submitted transfers; do not set it lower than `10`)
        - `blocksize` to a multiple of 16 KB (64 KB being the default; do not set it above 256 KB)
          (the higher the value, the smaller the number of packets)
        - `memorylimit` to a value acceptable for the HTTP and REST services (default 1 GB, can be set lower)
        - `sessionlimit` and `globallimit` to an acceptable bandwidth limitation (for instance 100 MB/s),
          the default being no limit (note: the limit is in bytes per second, not bits per second)
        - *ignore* or set `usenio` to `False` (default) (no further improvement with `True`)
        - `usecpulimit` to `True` (default `False`, as for `usejdkcpulimit`, which can be ignored),
          `cpulimit` to a value below `1.0` (`0.6` for instance),
          `connlimit` left at `0` (default, unlimited): this adjusts the bandwidth automatically to prevent
          CPU consumption above 60% (in this example, since `0.6`) (the global bandwidth is adapted
          automatically)
    • Clean up Sleep and other bad programming style · 6fde4dde
      Frederic Bregier authored
      Clean up Thread.sleep() calls
      Clean up bad programming style
      Also include new JUnit tests
  3. 10 Sep, 2020 1 commit
  4. 04 Sep, 2020 8 commits
    • Extra code cleaning · 4d5f0a01
      Frederic Bregier authored
    • Extra code cleanup · 2ce86e08
      Frederic Bregier authored
    • Optimize Network Read and related memory usage · 9459aeaa
      Frederic Bregier authored
      Netty offers the possibility (off by default) of reading from the network on demand.
      The idea is to:
      - start reading once the channel is active
      - request the next read from the network once a packet has been fully read (i.e. after decoding in the network handlers)
      
      This further limits memory pressure. Now even 1 GB is enough for a server, while 2 GB is still recommended.
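
      A minimal sketch of this on-demand read pattern using standard Netty API (the
      surrounding Waarp pipeline wiring is omitted):

      .. code-block:: java

        import io.netty.channel.ChannelHandlerContext;
        import io.netty.channel.ChannelInboundHandlerAdapter;

        public class OnDemandReadHandler extends ChannelInboundHandlerAdapter {
          // In the bootstrap, automatic reads are disabled first:
          //   bootstrap.childOption(ChannelOption.AUTO_READ, false);

          @Override
          public void channelActive(ChannelHandlerContext ctx) {
            ctx.read(); // start reading once the channel is active
          }

          @Override
          public void channelRead(ChannelHandlerContext ctx, Object msg) {
            ctx.fireChannelRead(msg);
            // Once the packet has been fully decoded, request the next read.
            ctx.read();
          }
        }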
      
      Also includes some code cleanup.
    • Fix Dependencies and Documentation · 7919a9cc
      Frederic Bregier authored
    • Cleanup V3.5 · a58e1bc0
      Frederic Bregier authored
      Global cleanup of code
    • Fix FileInfo and TransferInfo · 9b7ef41b
      Frederic Bregier authored
      In an older revision, FileInfo and TransferInfo were inverted.
      
      FileInfo is the information given by the end user through `-info` and is linked to the file transfer.
      This information contains the `FollowId` field (as a JSON Map) so that it can be forwarded on retransfer.
      
      TransferInfo is the internal information about the transfer, stored in the database by R66 servers and clients.
      It can contain the original size and some special information (such as Digest or UUID), but also the Follow ID.
      Its internal representation is a JSON Map saved as a String (hence escaped).
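
      A minimal sketch of storing such a map as a String, assuming a Jackson ObjectMapper
      and hypothetical keys (the actual Waarp serialization helpers are not shown):

      .. code-block:: java

        import com.fasterxml.jackson.databind.ObjectMapper;
        import java.util.HashMap;
        import java.util.Map;

        public final class TransferInfoExample {
          public static void main(String[] args) throws Exception {
            ObjectMapper mapper = new ObjectMapper();

            Map<String, Object> info = new HashMap<>();
            info.put("follow", 123456789L);     // hypothetical Follow ID
            info.put("ORIGINALSIZE", 368640L);  // hypothetical original-size key

            // Serialize the map to a JSON String for database storage
            // (quotes get escaped when this String is embedded in another JSON document).
            String stored = mapper.writeValueAsString(info);

            // Parse it back when reading from the database.
            @SuppressWarnings("unchecked")
            Map<String, Object> restored = mapper.readValue(stored, Map.class);
            System.out.println(restored.get("follow"));
          }
        }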
      
      This fixes the partial inversion and also improves the HTML rendering by explicitly presenting the `Follow Id`.
      It also fixes some cases where the Follow Id was ignored during retransfer (`TransferTask`).
    • Optimize NetworkPacket and DataBlock vs ByteBuf · 08037474
      Frederic Bregier authored
      DataBlock is the most used packet in R66.
      It used a ByteBuf internal representation (the Netty default).
      This could put extra pressure on Direct Memory. This fix switches to byte arrays, leading to less
      Direct Memory being used.
      
      Direct Memory usage was reduced as much as possible.
      The default configuration is adapted at startup but can be overridden:
      - with `-Dio.netty.noPreferDirect=true` set, Direct Buffer usage is limited to Netty's incoming network messages
      - with `-Dio.netty.maxDirectMemory=0` set, Netty uses the maximum JDK Direct Memory (recommended).

      The recommended value for `maxDirectMemory` is 0. Giving a value that is too small (>0) could cause issues.
      A value of -1 is Netty's default behavior and is also a good option (Netty uses up to twice the JDK's maximum Direct Memory, without the Cleaner).
      A value of 0 lets Netty allocate Direct Memory up to the JDK maximum, which remains stable.
      
      Note that the test SecnarioLoopPostgreSqlIT can be launched with various arguments to test, such as:
      - `-Xms1024m -Xmx1024m` (or more, such as `2048m`) to limit the default memory to use (heap size)

      Note however that Direct Memory is still used by Netty itself for incoming messages. Disallowing Direct Buffers entirely could lead to serious memory issues (GC overactivity). Therefore, extra arguments such as `-Dio.netty.noUnsafe=true` are strongly discouraged.
      
      As previously, one can also limit Direct Memory usage by setting `usenio` to `False` in the `limit` section of the XML configuration file.
      
      NetworkPacket used to be a wrapped buffer made of multiple sub-buffers (LocalPacket, such as DataPacket).
      This commit also optimizes NetworkPacket to build only one buffer from all items.
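
      A minimal sketch of this single-buffer approach with Netty, with the NetworkPacket
      fields simplified to three hypothetical byte arrays:

      .. code-block:: java

        import io.netty.buffer.ByteBuf;
        import io.netty.buffer.ByteBufAllocator;
        import io.netty.buffer.Unpooled;

        final class PacketEncoding {
          /** Previous style: a wrapped composite of several sub-buffers. */
          static ByteBuf wrapParts(byte[] header, byte[] middle, byte[] data) {
            return Unpooled.wrappedBuffer(header, middle, data);
          }

          /** Optimized style: one buffer allocated once, all parts written into it. */
          static ByteBuf singleBuffer(ByteBufAllocator alloc,
                                      byte[] header, byte[] middle, byte[] data) {
            ByteBuf buf = alloc.buffer(header.length + middle.length + data.length);
            buf.writeBytes(header);
            buf.writeBytes(middle);
            buf.writeBytes(data);
            return buf;
          }
        }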
    • Reduce Direct Memory impact · 90ceb6b3
      Frederic Bregier authored
      Under certain circumstances, Direct Memory usage could reach its maximum.
      This fix tries to limit that impact as much as possible.
      
      - File copy: Guava, where big blocks (512 KB) were used, is no longer used; direct memory is reduced by using FileChannel
        (see the sketch after this list)
      - File reading: now uses a standard buffer and no more direct memory
      - Hash computing: uses a byte array instead of Netty's ByteBuf, except when necessary
      - KeepAlive algorithm fixed: under certain circumstances it failed, and long task delays could erroneously lead to a timeout
      - Some Netty buffers are freed more quickly
      - For LocalPacket, a single buffer from the default allocator is built whenever possible
      - One big and long IT test added to check the correctness of all this (no Out Of Direct Memory detected)
      - Update dependencies
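
      A minimal sketch of a FileChannel-based copy without large intermediate blocks, with
      illustrative paths (not the actual Waarp file classes):

      .. code-block:: java

        import java.io.IOException;
        import java.nio.channels.FileChannel;
        import java.nio.file.Path;
        import java.nio.file.Paths;
        import java.nio.file.StandardOpenOption;

        public final class FileCopy {
          /** Copy using FileChannel.transferTo, letting the OS move the bytes. */
          public static void copy(Path source, Path target) throws IOException {
            try (FileChannel in = FileChannel.open(source, StandardOpenOption.READ);
                 FileChannel out = FileChannel.open(target,
                     StandardOpenOption.CREATE, StandardOpenOption.WRITE,
                     StandardOpenOption.TRUNCATE_EXISTING)) {
              long position = 0;
              long size = in.size();
              while (position < size) {
                position += in.transferTo(position, size - position, out);
              }
            }
          }

          public static void main(String[] args) throws IOException {
            copy(Paths.get("in.bin"), Paths.get("out.bin")); // illustrative paths
          }
        }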
  5. 28 Aug, 2020 7 commits
    • Add possibility to specify the IPs to use · e3c84233
      Frederic Bregier authored
      In the server XML configuration file, one can specify which addresses (IPs)
      to use for the various standard services. By default, as previously, all
      interfaces are bound to the specified port. One can therefore restrict the
      interfaces that serve those services.
      
      Examples:
      
      .. code-block:: xml
      
            <network>
              <serverport>6666</serverport>
              <!-- 1 address defined (loopback) -->
              <serveraddresses>127.0.0.1</serveraddresses>
              <serversslport>6667</serversslport>
              <!-- 2 addresses defined -->
              <serverssladdresses>192.168.0.2,10.1.0.10</serverssladdresses>
              <serverhttpport>8066</serverhttpport>
              <!-- All interfaces will be used -->
              <serverhttpaddresses/>
              <serverhttpsport>8067</serverhttpsport>
              <!-- 1 address defined (local) -->
              <serverhttpsaddresses>192.168.0.2</serverhttpsaddresses>
            </network>
      
      .. code-block:: xml
      
            <network>
              <!-- All interfaces will be used -->
              <serverport>6666</serverport>
              <serversslport>6667</serversslport>
              <serverhttpport>8066</serverhttpport>
              <serverhttpsport>8067</serverhttpsport>
            </network>
    • Fix issue #65 · 7882b70f
      Frederic Bregier authored
      Fix issue #65, where an infinite loop could occur because the increment inside the loop was misplaced.
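
      A hypothetical illustration of this kind of bug (not the actual Waarp code), where the
      increment only runs on one branch so a non-matching element loops forever, and a fixed version:

      .. code-block:: java

        final class LoopBugIllustration {
          static int countMatchesBuggy(int[] values, int target) {
            int count = 0;
            int i = 0;
            while (i < values.length) {
              if (values[i] == target) {
                count++;
                i++; // BUG pattern: increment only happens on a match,
                     // so a non-matching element is re-checked forever
              }
            }
            return count;
          }

          static int countMatchesFixed(int[] values, int target) {
            int count = 0;
            for (int i = 0; i < values.length; i++) { // increment on every iteration
              if (values[i] == target) {
                count++;
              }
            }
            return count;
          }
        }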
    • Fix issue #64 on XML Rule not loaded · 397a548d
      Frederic Bregier authored
      When the client or server runs without a database, and therefore uses XMLDBDAO,
      the rules were not loaded correctly.
      Fix issue #64
    • Fix issue #62 on self transfers not saved · c337eb52
      Frederic Bregier authored
      In some rare cases, a self transfer was deleted by the client (direct transfer, not submitted transfer).
      Fix issue #62
    • Fix issue #63 System actions unavailable · 32ebf911
      Frederic Bregier authored
      Fix issue #63, where some System actions were unavailable in the default view
    • Map in Transfer could break HTML rendering #61 · 40787df0
      Frederic Bregier authored
      Fix issue #61 by escaping the Map in TransferInfo and unescaping it when the map is used.
      The escaping is minimal (a single `\`).
    • Fix issue #60 on OutputFormat options · aab1df9a
      Frederic Bregier authored
      OutputFormat options were lost in the last release. This fixes issues #60 and #78.
  6. 26 Aug, 2020 1 commit
  7. 28 Jul, 2020 1 commit
  8. 20 Jul, 2020 1 commit
  9. 18 Jul, 2020 3 commits