b=13595,13608
r=nathan,adilger,shadow,green

- Separate the client and server namespaces. Each "side" gets its own list and its own lock.
- Split the pool shrinker into separate client and server shrinkers, each working on its own list. This avoids mixing server and client pool-cached resources, which behave very differently: client locks can be cancelled synchronously, so we can return to the VM the number of resources still cached, while server resources (locks) are not removed synchronously; we only change the SLV and expect the client to cancel something, so we report 0 cancelled locks to the VM.
- In ldlm_pools_shrink(), use down_trylock() to avoid blocking on the namespace semaphore when it is already held. This fixes a hang in test 116 under memory pressure, caused by a deadlock between the shrinker and the pool thread when the client and server run on the same host (see the sketch after this list).
- Move the LRU-add code into a separate function.
- Update l_last_used and move the lock to the tail of the LRU in the FL_TEST_LOCK case, so the lock still stays in the LRU for some time afterwards. If we looked the lock up, even with FL_TEST_LOCK, its resource may well be needed again soon, so it is better to keep the lock in the cache.
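As a rough illustration of the down_trylock() idea, here is a minimal sketch of a non-blocking shrinker entry point. This is not the actual ldlm_pools_shrink() code: the semaphore name ns_list_sem and the function pools_shrink_sketch() are hypothetical stand-ins; only down_trylock()/up() are real kernel primitives.

#include <linux/semaphore.h>
#include <linux/gfp.h>

/* Hypothetical semaphore standing in for the namespace-list lock.
 * It would need sema_init(&ns_list_sem, 1) before first use. */
static struct semaphore ns_list_sem;

/*
 * Sketch of a shrinker that refuses to block.  If the pool thread
 * (or a shrink running on the same host) already holds the
 * namespace-list semaphore, down_trylock() fails and we report that
 * nothing was freed instead of sleeping and deadlocking.
 */
static int pools_shrink_sketch(int nr_to_scan, gfp_t gfp_mask)
{
	int freed = 0;

	if (down_trylock(&ns_list_sem))
		return 0;	/* already locked: bail out, do not deadlock */

	/* ... walk the namespace list and cancel locks here,
	 * accumulating the count in 'freed' (client side only) ... */

	up(&ns_list_sem);
	return freed;
}

On the server side the corresponding shrinker would only adjust the SLV and return 0, since server locks are not cancelled synchronously, as described above.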
Showing 7 changed files with 161 additions and 85 deletions
- lustre/include/lustre_dlm.h 2 additions, 3 deletions
- lustre/ldlm/ldlm_internal.h 29 additions, 0 deletions
- lustre/ldlm/ldlm_lock.c 49 additions, 17 deletions
- lustre/ldlm/ldlm_lockd.c 8 additions, 6 deletions
- lustre/ldlm/ldlm_pool.c 50 additions, 33 deletions
- lustre/ldlm/ldlm_request.c 1 addition, 6 deletions
- lustre/ldlm/ldlm_resource.c 22 additions, 20 deletions