Nov 22, 2017
      LU-9796 ldiskfs: improve inode allocation performance · 3f0a7241
      Wang Shilong authored
      
      Backport the following upstream patches:
      
      ------
      ext4: cleanup goto next group
      
      Avoid duplicated code; we also need to go to the
      next group in case we find a reserved inode.
      ------
      
      ext4: reduce lock contention in __ext4_new_inode
      
      While running a number of file-creation threads concurrently,
      we found heavy lock contention on the group spinlock:
      
      FUNC                           TOTAL_TIME(us)       COUNT        AVG(us)
      ext4_create                    1707443399           1440000      1185.72
      _raw_spin_lock                 1317641501           180899929    7.28
      jbd2__journal_start            287821030            1453950      197.96
      jbd2_journal_get_write_access  33441470             73077185     0.46
      ext4_add_nondir                29435963             1440000      20.44
      ext4_add_entry                 26015166             1440049      18.07
      ext4_dx_add_entry              25729337             1432814      17.96
      ext4_mark_inode_dirty          12302433             5774407      2.13
      
      Most of the CPU time goes to _raw_spin_lock. Here are some test
      numbers with and without the patch.
      
      Test environment:
      Server  : SuperMicro server (2 x E5-2690 v3 @ 2.60GHz, 128GB 2133MHz
                DDR4 memory, 8Gb FC)
      Storage : 2 x RAID1 (DDN SFA7700X, 4 x Toshiba PX02SMU020 200GB
                read-intensive SSD)
      
      format command:
              mkfs.ext4 -J size=4096
      
      test command:
              mpirun -np 48 mdtest -n 30000 -d /ext4/mdtest.out -F -C \
                      -r -i 1 -v -p 10 -u #first run to load inode
      
              mpirun -np 48 mdtest -n 30000 -d /ext4/mdtest.out -F -C \
                      -r -i 3 -v -p 10 -u
      
      Kernel version: 4.13.0-rc3
      
      Test: 1,440,000 files in 48 directories, using 48 processes:
      
      Without patch:

      File creation   File removal (ops/second)
      79,033          289,569
      81,463          285,359
      79,875          288,475

      With patch:

      File creation   File removal (ops/second)
      810,669         301,694
      812,805         302,711
      813,965         297,670
      
      Creation performance is improved more than 10x with a large
      journal size. The main problem is that we first test the bitmap
      and do some checks and journal operations that may sleep, and
      only then test-and-set the bit with the group lock held. This is
      racy, and the chosen inode can be stolen by another process in
      the meantime.

      However, after the first try we know the handle has been started
      and the inode bitmap has been journaled, so on the second try we
      can find and set the bit directly with the lock held, which
      almost always guarantees success.
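
      To illustrate the idea, here is a rough user-space sketch of the
      two-pass scheme described above (hypothetical names and data
      structures, not the actual __ext4_new_inode()/ldiskfs code): an
      optimistic lockless scan followed by a re-check under the lock,
      and a retry pass that searches and claims a bit entirely under
      the lock once the sleeping journal work is already done.

      /* Hypothetical user-space sketch of the two-pass bit allocation
       * scheme described above; not the actual __ext4_new_inode() code. */
      #include <pthread.h>
      #include <stdbool.h>
      #include <stdint.h>
      #include <stdio.h>

      #define NBITS 64                        /* one tiny "group" bitmap */

      static uint64_t bitmap;                 /* 64 bits, all initially free */
      static pthread_spinlock_t group_lock;   /* stands in for the group spinlock */

      static bool test_bit(int nr) { return bitmap & (1ULL << nr); }
      static void set_bit(int nr)  { bitmap |= (1ULL << nr); }

      /* Optimistic pass: scan without the lock, do the (potentially
       * sleeping) journal work, then re-check and set under the lock.
       * Returns the claimed bit, or -1 if every candidate was stolen. */
      static int claim_bit_optimistic(void)
      {
          for (int nr = 0; nr < NBITS; nr++) {
              if (test_bit(nr))               /* lockless pre-check */
                  continue;
              /* ... sleeping work (journal start, write access) here ... */
              pthread_spin_lock(&group_lock);
              bool stolen = test_bit(nr);     /* did another thread race us? */
              if (!stolen)
                  set_bit(nr);
              pthread_spin_unlock(&group_lock);
              if (!stolen)
                  return nr;
          }
          return -1;
      }

      /* Retry pass: the journal work is already done, so find and set a
       * free bit entirely under the lock; this cannot lose the race. */
      static int claim_bit_locked(void)
      {
          int ret = -1;
          pthread_spin_lock(&group_lock);
          for (int nr = 0; nr < NBITS; nr++) {
              if (!test_bit(nr)) {
                  set_bit(nr);
                  ret = nr;
                  break;
              }
          }
          pthread_spin_unlock(&group_lock);
          return ret;
      }

      int main(void)
      {
          pthread_spin_init(&group_lock, PTHREAD_PROCESS_PRIVATE);

          int nr = claim_bit_optimistic();    /* cheap first try */
          if (nr < 0)
              nr = claim_bit_locked();        /* guaranteed second try */

          printf("claimed bit %d\n", nr);
          pthread_spin_destroy(&group_lock);
          return 0;
      }

      In the real code the lock corresponds to the per-group spinlock and
      the "sleeping work" to starting the handle and journaling the inode
      bitmap, as described above; the sketch only shows the locking pattern.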
      
      Signed-off-by: Wang Shilong <wshilong@ddn.com>
      Change-Id: I234ff3027c8d96155d374c56b12aab7c4dc0dafd
      Reviewed-on: https://review.whamcloud.com/29032
      Tested-by: Jenkins
      Reviewed-by: Andreas Dilger <andreas.dilger@intel.com>
      Reviewed-by: Gu Zheng <gzheng@ddn.com>
      Tested-by: Maloo <hpdd-maloo@intel.com>
      Reviewed-by: Oleg Drokin <oleg.drokin@intel.com>