# Compute Resources

## Login Instance

This is the shell you get when ssh'ing in from the outside.

- Networked (except ssh to outside)
- 1 core per user
- 5 GB of RAM per user
- 30 min of CPU time per process

## Pre/post-processing Instance

Activated with `--partition=prepost`

- Networked
- Only 4 nodes
- Time limit of 2 to 20 hours
- None of the limitations of the login shell
- 1x V100-16GB
- The computing hours are not deducted from your allocation

To request:
```
srun --pty --partition=prepost --account=six@cpu --nodes=1 --ntasks=1 --cpus-per-task=10  --hint=nomultithread --time=1:00:00 bash --rcfile $six_ALL_CCFRWORK/start-prod
```

Or, to work interactively there, `srun` into the box (though you have no control over which of the 4 nodes you get):

```
srun -p prepost -A six@cpu --time=20:00:00 --pty bash
```

To choose a specific box (if some are too overloaded by other users), one can ssh directly to that partition via:
```
ssh jean-zay-pp          # from inside
ssh jean-zay-pp.idris.fr # from outside
```
There are 4 boxes: `jean-zay-pp1`, ..., `jean-zay-pp4`. Higher-numbered boxes may have fewer users, but not necessarily.

In this case there is no need to go through SLURM.

But with this approach any running process is killed after 30 min, just like on the login shell. The only apparent difference is that more CPU usage is allowed here before the process is killed than on the login shell.

Note: the `compil` partition also has internet access, but you can't ssh into it; you have to go through SLURM with `--partition=compil`.

In general the `compil` partition is usually less busy than `prepost`.
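
A minimal sketch of requesting an interactive shell there, assuming the same account and similar limits as `prepost` (adjust `--cpus-per-task` and `--time` to the partition's actual limits):

```
srun --pty --partition=compil --account=six@cpu --nodes=1 --ntasks=1 --cpus-per-task=4 --hint=nomultithread --time=2:00:00 bash
```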


## GPU Instances

- No network to outside world
- 160 GB of usable memory per node. The memory allocation is 4 GB per reserved CPU core when hyperthreading is deactivated (`--hint=nomultithread`), so the maximum per node is reached with `--cpus-per-task=40` (40 × 4 GB = 160 GB)

To select this type of partition use `--account=six@gpu`.
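
A minimal sketch of an interactive full-node allocation, assuming a standard `--gres=gpu:N` request (the exact flags your job needs may differ); `--cpus-per-task=40` yields the full 160 GB per the 4 GB/core rule above:

```
srun --pty --account=six@gpu --nodes=1 --ntasks=1 --gres=gpu:4 --cpus-per-task=40 --hint=nomultithread --time=2:00:00 bash --rcfile $six_ALL_CCFRWORK/start-prod
```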


## CPU Instances

- All CPUs within a given partition are the same model
- Different partitions are likely to have different CPUs

For example, on the `gpu_p1` partition (4x V100-32GB):

```
$ lscpu | grep name
Model name:          Intel(R) Xeon(R) Gold 6248 CPU @ 2.50GHz
```

To select this type of partition use `--account=six@cpu`.
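
To check which CPU model a given partition provides, one can run `lscpu` through a short `srun`; a sketch, where the partition name `cpu_p1` is an assumption - substitute the partition you care about:

```
srun -A six@cpu --partition=cpu_p1 --ntasks=1 --time=0:05:00 lscpu | grep name
```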


## Quotas

Group/project (`six`):

- `$six_ALL_CCFRSCRATCH` - 400TB / ??? inodes, fastest (full SSD) → files removed after 30 days without access
- `$six_ALL_CCFRWORK` - 25TB / 500k inodes (slower than SCRATCH) → sources, constantly used input/output files
- `$six_ALL_CCFRSTORE` - 100TB / 100k inodes (slow) → for long term storage in tar files (very few inodes!)
- `/gpfsssd/worksf/projects/rech/six/commun/` - 1TB / 3M inodes → for conda and python git clones that take tens of thousands of inodes

Personal:

- `$HOME` - 3GB / 150k inodes (for small files)
- `$SCRATCH` - fastest (full SSD), no quota, files removed after 30 days without access
- `$WORK` - shares the `$six_ALL_CCFRWORK` quota; the combined usage can be seen with `du -sh $six_ALL_CCFRWORK/..`
- `$STORE` - shares the `$six_ALL_CCFRSTORE` quota; the combined usage can be seen with `du -sh $six_ALL_CCFRSTORE/..`

Note that the project's WORK and STORE group quotas include the WORK and STORE usage of all of the project's users, respectively.

[Detailed information](http://www.idris.fr/eng/jean-zay/cpu/jean-zay-cpu-calculateurs-disques-eng.html)

Checking usage:
```
idrquota -m # $HOME @ user
idrquota -s -p six # $STORE @ shared (this is updated every 30min)
idrquota -w -p six # $WORK @ shared
```


If you prefer the easy way, here is an alias to add to `~/.bashrc`:
```
alias dfi=' \
echo \"*** Total \(six\) ***\"; \
idrquota -w -p six; \
idrquota -s -p six; \
echo SCRATCH: $(du -hs /gpfsscratch/rech/six/ | cut -f1) \(out of 400TB\); \
echo WORKSF: $(du -hs /gpfsssd/worksf/projects/rech/six | cut -f1) \(out of 2TB\); \
echo WORKSF: $(du -hs --inodes /gpfsssd/worksf/projects/rech/six | cut -f1) inodes \(out of 3M\); \
echo; \
echo \"*** Personal ***\"; \
idrquota -m; \
echo WORK: $(du -hs $WORK | cut -f1); \
echo WORK: $(du -hs --inodes $WORK | cut -f1) inodes; \
echo STORE: $(du -hs $STORE | cut -f1); \
echo STORE: $(du -hs --inodes $STORE | cut -f1) inodes; \
echo SCRATCH: $(du -hs $SCRATCH | cut -f1); \
echo SCRATCH: $(du -hs --inodes $SCRATCH | cut -f1) inodes; \
'
```
This also reports the usage of the personal WORK, STORE and SCRATCH partitions.



## Directories

- `$six_ALL_CCFRSCRATCH` - for checkpoints - make sure to copy important ones to WORK or tarball to STORE
- `$six_ALL_CCFRWORK` - for everything else
- `$six_ALL_CCFRSTORE` - for long term storage in tar files (very few inodes!)
- `/gpfsssd/worksf/projects/rech/six/commun/` - for conda and python git clones that take tens of thousands of inodes - it's a small partition with a huge number of inodes. 1TB and 3M inodes.
XXX: update this and the above once the env var has been created.


More specifically:

- `$six_ALL_CCFRWORK/cache_dir` - `CACHE_DIR` points here
- `$six_ALL_CCFRWORK/checkpoints` - symlink to `$six_ALL_CCFRSCRATCH/checkpoints` - point slurm scripts here
- `$six_ALL_CCFRWORK/code` - clones of repos we use as source (`transformers`, `megatron-lm`, etc.)
- `$six_ALL_CCFRWORK/conda` - our production conda environment
- `$six_ALL_CCFRWORK/datasets` - cached datasets (normally under `~/.cache/huggingface/datasets`)
- `$six_ALL_CCFRWORK/datasets-custom` - manually created datasets (do not delete these - some take many hours to build)
- `$six_ALL_CCFRWORK/downloads` - cached downloads (normally under `~/.cache/huggingface/downloads`)
- `$six_ALL_CCFRWORK/envs` - custom scripts to create easy-to-use environments
- `$six_ALL_CCFRWORK/models-custom` - manually created or converted models
- `$six_ALL_CCFRWORK/modules` - cached modules (normally under `~/.cache/huggingface/modules`)
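
To make the tools actually use these shared caches instead of the per-user `~/.cache/huggingface`, the corresponding environment variables have to point there. A minimal sketch for `~/.bashrc`, assuming the standard Hugging Face cache variables (the production rcfile `$six_ALL_CCFRWORK/start-prod` may already set these):

```
export CACHE_DIR=$six_ALL_CCFRWORK/cache_dir
export HF_DATASETS_CACHE=$six_ALL_CCFRWORK/datasets
export HF_MODULES_CACHE=$six_ALL_CCFRWORK/modules
```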



## Diagnosing the Lack of Disk Space

To help diagnose situations when we are short on disk space, here are some useful commands:

* Get the current dir's sub-dir usage breakdown, sorted by highest usage first:
```
du -ahd1 | sort -rh
```

* Check that users don't consume too much of their personal `$WORK` space, which goes towards the total WORK space limit.

```
du -ahd1 $six_ALL_CCFRWORK/.. | sort -rh
```
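
* Since inodes are often the scarcer resource on these partitions, the same breakdown by inode count can help (GNU `du` supports `--inodes`):

```
du --inodes -d1 $six_ALL_CCFRWORK | sort -rn
```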


## Efficient tar-balling to STORE

When short on space, don't create large tarballs in the WORK dir first; instead, tar directly to the destination.

For example, without gzip, since the arrow files are already binary:

```
mkdir -p $six_ALL_CCFRSTORE/datasets
cd $six_ALL_CCFRWORK/datasets
tar -cvf $six_ALL_CCFRSTORE/datasets/openwebtext.tar openwebtext
```


Or with gzip for non-binary data:
```
tar -czvf $six_ALL_CCFRSTORE/datasets/openwebtext.tgz openwebtext
```

If the archive is large and takes a while to build, `tar` will get killed on the login instance (due to the 30 min CPU-time limit), so you have to use one of the beefier instances instead, e.g.:
```
srun --pty --nodes=1 --ntasks=1 -A six@cpu --cpus-per-task=40 --hint=nomultithread --time=2:00:00 bash --rcfile $six_ALL_CCFRWORK/start-prod
tar ...
```
And if that's not enough, submit a SLURM job.
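
A minimal sketch of such a job script, reusing the account and flags from the interactive example above (the job name and time limit are illustrative assumptions):

```
#!/bin/bash
#SBATCH --job-name=tar-openwebtext    # illustrative name
#SBATCH --account=six@cpu
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=40
#SBATCH --hint=nomultithread
#SBATCH --time=20:00:00
#SBATCH --output=%x-%j.out

# tar directly from WORK to STORE, as in the interactive example
cd $six_ALL_CCFRWORK/datasets
tar -cvf $six_ALL_CCFRSTORE/datasets/openwebtext.tar openwebtext
```

Submit it with `sbatch` from any shell on the cluster.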