- ImageNet Large Scale Visual Recognition Challenge (ILSVRC)
AlexNet
- 11x11 kernel, 5 conv layers, 3 dense layers
- ReLU
- Overlapping pooling
- Data augmentation
- Dropout
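A minimal sketch of AlexNet's first conv layer arithmetic. The concrete figures (227x227x3 input, 96 filters, stride 4) are standard AlexNet values assumed for illustration, not stated in the notes above:

```python
# Output size and parameter count of AlexNet's first conv layer.
# Assumed AlexNet figures: 227x227x3 input, 96 filters of 11x11, stride 4.
def conv_out_size(n, k, stride, pad=0):
    # spatial output size of a conv layer
    return (n + 2 * pad - k) // stride + 1

def conv_params(k, c_in, c_out):
    # k*k kernel weights per (input channel, filter) pair, plus one bias per filter
    return k * k * c_in * c_out + c_out

out = conv_out_size(227, 11, 4)   # 55 -> 55x55x96 feature map
params = conv_params(11, 3, 96)   # 34,944 parameters
```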
VGGNet
- 3x3 kernel
- 1x1 conv layers in place of FC layers
- Stacking two 3x3 kernels is more efficient than a single 5x5 kernel (same receptive field, fewer parameters)
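A quick parameter count showing why two stacked 3x3 convs beat one 5x5 conv over the same 5x5 receptive field. The channel count `C = 64` is an assumption for illustration:

```python
# Two 3x3 convs vs. one 5x5 conv, same input/output channel count C.
def conv_params(k, c_in, c_out):
    # weights only; biases ignored for simplicity
    return k * k * c_in * c_out

C = 64  # illustrative channel count (assumption)
stacked_3x3 = 2 * conv_params(3, C, C)  # 18 * C^2 = 73,728
single_5x5 = conv_params(5, C, C)       # 25 * C^2 = 102,400
```

The stacked version also inserts an extra ReLU between the two convs, adding nonlinearity on top of the parameter savings.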
GoogLeNet
- 22 layers, Network In Network structure
- Inception block: 1x1 conv applied before the larger convs
- reduces the number of parameters
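A sketch of the parameter savings from the 1x1 conv in an Inception block. The channel numbers (256 in, reduced to 64, 256 out) are illustrative assumptions, not from the notes:

```python
# 3x3 conv directly vs. 1x1 channel reduction followed by the 3x3 conv.
def conv_params(k, c_in, c_out):
    # weights only; biases ignored
    return k * k * c_in * c_out

direct = conv_params(3, 256, 256)  # 589,824
reduced = (conv_params(1, 256, 64)     # 1x1 reduce: 16,384
           + conv_params(3, 64, 256))  # 3x3 conv:  147,456 -> total 163,840
```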
ResNet
- Identity mapping (skip connection)
- enables much deeper networks to be trained
- bottleneck architecture: 1x1 convs reduce and then restore the channel count around the 3x3 conv
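A sketch of the bottleneck block's parameter count. The channel numbers (256 -> 64 -> 64 -> 256) follow the common ResNet-50 bottleneck and are assumptions, not from the notes:

```python
# ResNet bottleneck (1x1 reduce -> 3x3 -> 1x1 restore) vs. two plain 3x3 convs.
def conv_params(k, c_in, c_out):
    # weights only; biases ignored
    return k * k * c_in * c_out

bottleneck = (conv_params(1, 256, 64)    # 16,384
              + conv_params(3, 64, 64)   # 36,864
              + conv_params(1, 64, 256)) # 16,384 -> total 69,632
plain = 2 * conv_params(3, 256, 256)     # 1,179,648
```

The skip connection then adds the block's output to its input element-wise, so the block only has to learn the residual.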
DenseNet
- concatenates feature maps in the skip connection instead of adding them
- Dense block
- Transition block
- BatchNorm -> 1x1 conv -> 2x2 AvgPool (reduces channel count and spatial size)
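A sketch of why the transition block is needed: concatenation makes channels grow linearly inside a dense block, and the 1x1 conv compresses them back down. Growth rate 32, 6 layers, and compression 0.5 are typical DenseNet values assumed for illustration:

```python
# Channel growth in a dense block, then compression in the transition block.
def dense_block_channels(c_in, growth_rate, n_layers):
    # each layer's growth_rate new feature maps are concatenated, never added
    return c_in + growth_rate * n_layers

c = dense_block_channels(64, 32, 6)   # 64 + 32*6 = 256 channels
c_after = int(c * 0.5)                # 1x1 conv halves channels -> 128
```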