MongoDB "Too many open files" during member resync

I have been fighting this problem for the past several days. I have a MongoDB cluster with 3 shards, each backed by a 3-member replica set.

        1   2   3
    A   O   S   S
    B   S   P   S
    C   P   O   P

P - primary state
S - secondary state
O - other state

Letters are replica set members and numbers are shards.

I'm trying to resynchronize about 2 TB of data onto the mongod machines that are in the "other" state (A1 and C2). But while resynchronizing, the primary mongod fails with "Too many open files":

2019-03-16T16:35:22.351+0000 E -        [conn28204] cannot open /dev/urandom Too many open files in system
2019-03-16T16:35:22.362+0000 I NETWORK  [listener] Error accepting new connection on 0.0.0.0:27017: Too many open files in system
2019-03-16T16:35:22.362+0000 I NETWORK  [listener] Error accepting new connection on 0.0.0.0:27017: Too many open files in system
2019-03-16T16:35:22.362+0000 I NETWORK  [listener] Error accepting new connection on 0.0.0.0:27017: Too many open files in system
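One detail I noticed: the message says "Too many open files in system", which as far as I understand is the kernel-wide file table limit (ENFILE), not the per-process limit (EMFILE). So I plan to check the system-wide limit too; a rough sketch of what I intend to run (the value 400000 is just an example):

    # Current system-wide limit and usage; file-nr prints
    # <allocated> <free> <max>
    sysctl fs.file-max
    cat /proc/sys/fs/file-nr

    # Raise the limit at runtime (example value)
    sudo sysctl -w fs.file-max=400000

    # Persist across reboots
    echo "fs.file-max = 400000" | sudo tee -a /etc/sysctl.conf
    sudo sysctl -p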

I tried to raise the ulimit values following the MongoDB recommendations.

> ulimit -a

core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 241518
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 64000
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 8192
cpu time               (seconds, -t) unlimited
max user processes              (-u) 241518
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited
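Since ulimit -a only shows the limits of my shell, I also want to verify what the running mongod process actually gets; a quick check, assuming a Linux /proc filesystem and a single mongod process (may need sudo):

    # Limits that actually apply to the running mongod
    grep "open files" /proc/$(pgrep -x mongod)/limits

    # How many file descriptors it currently holds
    sudo ls /proc/$(pgrep -x mongod)/fd | wc -l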

and set it in /etc/security/limits.conf:

* soft nofile 64000
* hard nofile 64000
root soft nofile 64000
root hard nofile 64000
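I'm also not sure limits.conf even applies here: as far as I know it only affects PAM login sessions, so if mongod is started by systemd, the unit's own LimitNOFILE setting wins. This is what I plan to try next (assuming a systemd-managed mongod.service; the drop-in file name is my own choice):

    # /etc/systemd/system/mongod.service.d/open-files.conf
    [Service]
    LimitNOFILE=64000

followed by:

    sudo systemctl daemon-reload
    sudo systemctl restart mongod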

But none of that fixed my problem. The mongod service still hits "Too many open files", and I have been stuck for 3 days. Does anyone have ideas or solutions?