Commit 66dec528 authored by Rockey, committed by GitHub

[Fix] Fix the bug that setr cannot load pretrain (#1293)

* [Fix] Fix the bug that setr cannot load pretrain

* delete new pretrain
parent 1abf76de
@@ -36,6 +36,23 @@ This head has two versions.
}
```
## Usage
You can download the pretrained model from [here](https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-vitjx/jx_vit_large_p16_384-b3be5167.pth), and then convert its keys with the script `vit2mmseg.py` in the tools directory.
```shell
python tools/model_converters/vit2mmseg.py ${PRETRAIN_PATH} ${STORE_PATH}
```
E.g.
```shell
python tools/model_converters/vit2mmseg.py \
jx_vit_large_p16_384-b3be5167.pth pretrain/vit_large_p16.pth
```
This script converts the model from `PRETRAIN_PATH` and stores the converted model in `STORE_PATH`.
## Results and models
### ADE20K
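The conversion step described in the Usage section above can be sanity-checked before training. Below is a minimal sketch, assuming the converted file sits at `pretrain/vit_large_p16.pth` as in the example, that loads it with `torch.load` and prints a few parameter names; the exact key layout depends on the converter version.

```python
import torch

# Load the checkpoint produced by vit2mmseg.py (path taken from the example
# above). Depending on the converter version it may be a bare state dict or
# wrapped under a 'state_dict' key, so handle both cases.
ckpt = torch.load('pretrain/vit_large_p16.pth', map_location='cpu')
state_dict = ckpt.get('state_dict', ckpt)

# Print the first few parameter names and shapes to confirm they follow the
# mmseg ViT naming that the SETR configs expect.
for name in list(state_dict)[:5]:
    print(name, tuple(state_dict[name].shape))
```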
@@ -8,7 +8,8 @@ model = dict(
backbone=dict(
img_size=(512, 512),
drop_rate=0.,
init_cfg=dict(type='Pretrained', checkpoint='mmcls://vit_large_p16')),
init_cfg=dict(
type='Pretrained', checkpoint='pretrain/vit_large_p16.pth')),
decode_head=dict(num_classes=150),
auxiliary_head=[
dict(
@@ -8,7 +8,8 @@ model = dict(
backbone=dict(
img_size=(512, 512),
drop_rate=0.,
init_cfg=dict(type='Pretrained', checkpoint='mmcls://vit_large_p16')),
init_cfg=dict(
type='Pretrained', checkpoint='pretrain/vit_large_p16.pth')),
decode_head=dict(num_classes=150),
auxiliary_head=[
dict(
@@ -8,7 +8,8 @@ model = dict(
backbone=dict(
img_size=(512, 512),
drop_rate=0.,
init_cfg=dict(type='Pretrained', checkpoint='mmcls://vit_large_p16')),
init_cfg=dict(
type='Pretrained', checkpoint='pretrain/vit_large_p16.pth')),
decode_head=dict(num_classes=150),
auxiliary_head=[
dict(
@@ -6,7 +6,8 @@ model = dict(
pretrained=None,
backbone=dict(
drop_rate=0,
init_cfg=dict(type='Pretrained', checkpoint='mmcls://vit_large_p16')),
init_cfg=dict(
type='Pretrained', checkpoint='pretrain/vit_large_p16.pth')),
test_cfg=dict(mode='slide', crop_size=(768, 768), stride=(512, 512)))
optimizer = dict(
@@ -7,7 +7,8 @@ model = dict(
pretrained=None,
backbone=dict(
drop_rate=0.,
init_cfg=dict(type='Pretrained', checkpoint='mmcls://vit_large_p16')),
init_cfg=dict(
type='Pretrained', checkpoint='pretrain/vit_large_p16.pth')),
test_cfg=dict(mode='slide', crop_size=(768, 768), stride=(512, 512)))
optimizer = dict(
@@ -9,7 +9,8 @@ model = dict(
pretrained=None,
backbone=dict(
drop_rate=0.,
init_cfg=dict(type='Pretrained', checkpoint='mmcls://vit_large_p16')),
init_cfg=dict(
type='Pretrained', checkpoint='pretrain/vit_large_p16.pth')),
auxiliary_head=[
dict(
type='SETRUPHead',
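All of the config hunks above replace the `mmcls://vit_large_p16` reference with the locally converted `pretrain/vit_large_p16.pth` checkpoint. A minimal sketch of how that `init_cfg` is consumed, assuming mmsegmentation 0.x with mmcv-full installed and an illustrative SETR config path:

```python
from mmcv import Config
from mmseg.models import build_segmentor

# Illustrative config path; substitute any of the SETR configs touched by this commit.
cfg = Config.fromfile('configs/setr/setr_naive_512x512_160k_b16_ade20k.py')

# build_segmentor creates the model; init_weights() then resolves the
# Pretrained init_cfg and loads pretrain/vit_large_p16.pth into the ViT backbone.
model = build_segmentor(cfg.model)
model.init_weights()
```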