{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {
   },
   "source": [
    "<a href=\"https://colab.research.google.com/github/open-mmlab/mmsegmentation/blob/master/demo/MMSegmentation_Tutorial.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "FVmnaxFJvsb8"
   },
   "source": [
    "# MMSegmentation Tutorial\n",
    "Welcome to MMSegmentation! \n",
    "\n",
    "In this tutorial, we demo\n",
    "* How to do inference with MMSeg trained weight\n",
    "* How to train on your own dataset and visualize the results. "
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "QS8YHrEhbpas"
   },
   "source": [
    "## Install MMSegmentation\n",
    "This step may take several minutes. \n",
    "\n",
    "We use PyTorch 1.6 and CUDA 10.1 for this tutorial. You may install other versions by change the version number in pip install command. "
   ]
  },
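  {
   "cell_type": "markdown",
   "metadata": {
   },
   "source": [
    "For example, to target a different PyTorch/CUDA combination, swap the version numbers in both install commands and point pip at the matching MMCV wheel index. The cu102/torch1.8 pair below is only an illustration, not a tested configuration:"
   ]
  },
  {
   "cell_type": "code",
   "metadata": {
   },
   "source": [
    "# Illustration only: a different PyTorch/CUDA pair and the matching MMCV wheel\n",
    "# index (the index URL pattern is dist/<cuda_version>/<torch_version>/index.html)\n",
    "# !conda install pytorch=1.8.0 torchvision cudatoolkit=10.2 -c pytorch\n",
    "# !pip install mmcv-full -f https://download.openmmlab.com/mmcv/dist/cu102/torch1.8.0/index.html"
   ]
  },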
  {
   "cell_type": "code",
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/"
    "outputId": "32a47fe3-f10d-47a1-f6b9-b7c235abdab1"
   },
   "source": [
    "# Check nvcc version\n",
    "!nvcc -V\n",
    "# Check GCC version\n",
    "!gcc --version"
   ]
  },
  {
   "cell_type": "code",
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/"
    "outputId": "14bd14b0-4d8c-4fa9-e3f9-da35c0efc0d5"
   },
   "source": [
    "# Install PyTorch\n",
    "!conda install pytorch=1.6.0 torchvision cudatoolkit=10.1 -c pytorch\n",
    "# Install MMCV\n",
    "!pip install mmcv-full -f https://download.openmmlab.com/mmcv/dist/cu101/torch1.6/index.html"
   ]
  },
  {
   "cell_type": "code",
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/"
    "outputId": "10c3b131-d4db-458c-fc10-b94b1c6ed546"
   },
   "source": [
    "!rm -rf mmsegmentation\n",
    "!git clone https://github.com/open-mmlab/mmsegmentation.git \n",
    "%cd mmsegmentation\n",
    "!pip install -e ."
   ]
  },
  {
   "cell_type": "code",
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/"
    "outputId": "83bf0f8e-fc69-40b1-f9fe-0025724a217c"
   },
   "source": [
    "# Check Pytorch installation\n",
    "import torch, torchvision\n",
    "print(torch.__version__, torch.cuda.is_available())\n",
    "\n",
    "# Check MMSegmentation installation\n",
    "import mmseg\n",
    "print(mmseg.__version__)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "eUcuC3dUv32I"
   },
   "source": [
    "## Run Inference with MMSeg trained weight"
   ]
  },
  {
   "cell_type": "code",
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/"
    "outputId": "b7b2aafc-edf2-43e4-ea43-0b5dd0aa4b4a"
   },
   "source": [
    "!mkdir checkpoints\n",
    "!wget https://download.openmmlab.com/mmsegmentation/v0.5/pspnet/pspnet_r50-d8_512x1024_40k_cityscapes/pspnet_r50-d8_512x1024_40k_cityscapes_20200605_003338-2966598c.pth -P checkpoints"
   ]
  },
  {
   "cell_type": "code",
   "metadata": {
    "id": "H8Fxg8i-wHJE"
   },
   "source": [
    "from mmseg.apis import inference_segmentor, init_segmentor, show_result_pyplot\n",
    "from mmseg.core.evaluation import get_palette"
   ]
  },
  {
   "cell_type": "code",
   "metadata": {
    "id": "umk8sJ0Xuace"
   },
   "source": [
    "config_file = '../configs/pspnet/pspnet_r50-d8_512x1024_40k_cityscapes.py'\n",
    "checkpoint_file = '../checkpoints/pspnet_r50-d8_512x1024_40k_cityscapes_20200605_003338-2966598c.pth'"
   ]
  },
  {
   "cell_type": "code",
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/"
    "outputId": "5e45f4f6-5bcf-4d04-bb9c-0428ee84a576"
   },
   "source": [
    "# build the model from a config file and a checkpoint file\n",
    "model = init_segmentor(config_file, checkpoint_file, device='cuda:0')"
   ]
  },
  {
   "cell_type": "code",
   "metadata": {
    "id": "izFv6pSRujk9"
   },
   "source": [
    "# test a single image\n",
    "img = './demo.png'\n",
    "result = inference_segmentor(model, img)"
   ]
  },
  {
   "cell_type": "code",
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/",
     "height": 504
    "outputId": "7c55f713-4085-47fd-fa06-720a321d0795"
   },
   "source": [
    "# show the results\n",
    "show_result_pyplot(model, img, result, get_palette('cityscapes'))"
   ]
  },
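  {
   "cell_type": "markdown",
   "metadata": {
   },
   "source": [
    "`inference_segmentor` returns one prediction per input image; for a single image this is a list holding one H x W array of per-pixel class indices. A quick optional inspection:"
   ]
  },
  {
   "cell_type": "code",
   "metadata": {
   },
   "source": [
    "import numpy as np\n",
    "\n",
    "# result is a list with one HxW array of predicted class indices\n",
    "print(type(result), len(result))\n",
    "print(result[0].shape, result[0].dtype)\n",
    "# the Cityscapes class ids that appear in the prediction\n",
    "print(np.unique(result[0]))"
   ]
  },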
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "Ta51clKX4cwM"
   },
   "source": [
    "## Train a semantic segmentation model on a new dataset\n",
    "\n",
    "To train on a customized dataset, the following steps are necessary. \n",
    "1. Add a new dataset class. \n",
    "2. Create a config file accordingly. \n",
    "3. Perform training and evaluation. "
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "AcZg6x_K5Zs3"
   },
   "source": [
    "### Add a new dataset\n",
    "\n",
    "Datasets in MMSegmentation require image and semantic segmentation maps to be placed in folders with the same prefix. To support a new dataset, we may need to modify the original file structure. \n",
    "In this tutorial, we give an example of converting the dataset. You may refer to [docs](https://github.com/open-mmlab/mmsegmentation/docs/en/tutorials/new_dataset.md) for details about dataset reorganization. \n",
    "We use [Stanford Background Dataset](http://dags.stanford.edu/projects/scenedataset.html) as an example. The dataset contains 715 images chosen from existing public datasets [LabelMe](http://labelme.csail.mit.edu), [MSRC](http://research.microsoft.com/en-us/projects/objectclassrecognition), [PASCAL VOC](http://pascallin.ecs.soton.ac.uk/challenges/VOC) and [Geometric Context](http://www.cs.illinois.edu/homes/dhoiem/). Images from these datasets are mainly outdoor scenes, each containing approximately 320-by-240 pixels. \n",
    "In this tutorial, we use the region annotations as labels. There are 8 classes in total, i.e. sky, tree, road, grass, water, building, mountain, and foreground object. "
   ]
  },
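  {
   "cell_type": "markdown",
   "metadata": {
   },
   "source": [
    "After the conversion and split steps below, the dataset directory is expected to look roughly like this (a sketch; the `splits` folder is created later in this tutorial):\n",
    "\n",
    "```\n",
    "iccv09Data/\n",
    "  images/   6000124.jpg, ...             (RGB inputs)\n",
    "  labels/   6000124.regions.txt + .png   (converted palette masks)\n",
    "  splits/   train.txt, val.txt           (one file stem per line)\n",
    "```"
   ]
  },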
  {
   "cell_type": "code",
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/"
    "outputId": "74a126e4-c8a4-4d2f-a910-b58b71843a23"
   },
   "source": [
    "# download and unzip\n",
    "!wget http://dags.stanford.edu/data/iccv09Data.tar.gz -O stanford_background.tar.gz\n",
    "!tar xf stanford_background.tar.gz"
   ]
  },
  {
   "cell_type": "code",
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/",
     "height": 377
    "outputId": "c432ddac-5a50-47b1-daac-5a26b07afea2"
   },
   "source": [
    "# Let's take a look at the dataset\n",
    "import mmcv\n",
    "import matplotlib.pyplot as plt\n",
    "\n",
    "img = mmcv.imread('iccv09Data/images/6000124.jpg')\n",
    "plt.figure(figsize=(8, 6))\n",
    "plt.imshow(mmcv.bgr2rgb(img))\n",
    "plt.show()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "L5mNQuc2GsVE"
   },
   "source": [
    "We need to convert the annotation into semantic map format as an image."
   ]
  },
  {
   "cell_type": "code",
   "metadata": {
    "id": "WnGZfribFHCx"
   },
   "source": [
    "import os.path as osp\n",
    "import numpy as np\n",
    "from PIL import Image\n",
    "# convert dataset annotation to semantic segmentation map\n",
    "data_root = 'iccv09Data'\n",
    "img_dir = 'images'\n",
    "ann_dir = 'labels'\n",
    "# define class and plaette for better visualization\n",
    "classes = ('sky', 'tree', 'road', 'grass', 'water', 'bldg', 'mntn', 'fg obj')\n",
    "palette = [[128, 128, 128], [129, 127, 38], [120, 69, 125], [53, 125, 34], \n",
    "           [0, 11, 123], [118, 20, 12], [122, 81, 25], [241, 134, 51]]\n",
    "for file in mmcv.scandir(osp.join(data_root, ann_dir), suffix='.regions.txt'):\n",
    "  seg_map = np.loadtxt(osp.join(data_root, ann_dir, file)).astype(np.uint8)\n",
    "  seg_img = Image.fromarray(seg_map).convert('P')\n",
    "  seg_img.putpalette(np.array(palette, dtype=np.uint8))\n",
    "  seg_img.save(osp.join(data_root, ann_dir, file.replace('.regions.txt', \n",
    "                                                         '.png')))"
   ]
  },
  {
   "cell_type": "code",
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/",
     "height": 377
    "outputId": "92b9bafc-589e-48fc-c9e9-476f125d6522"
   },
   "source": [
    "# Let's take a look at the segmentation map we got\n",
    "import matplotlib.patches as mpatches\n",
    "img = Image.open('iccv09Data/labels/6000124.png')\n",
    "plt.figure(figsize=(8, 6))\n",
    "im = plt.imshow(np.array(img.convert('RGB')))\n",
    "\n",
    "# create a patch (proxy artist) for every color \n",
    "patches = [mpatches.Patch(color=np.array(palette[i])/255., \n",
    "                          label=classes[i]) for i in range(8)]\n",
    "# put those patched as legend-handles into the legend\n",
    "plt.legend(handles=patches, bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0., \n",
    "           fontsize='large')\n",
    "\n",
    "plt.show()"
   ]
  },
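  {
   "cell_type": "markdown",
   "metadata": {
   },
   "source": [
    "As an optional sanity check, we can confirm that the converted mask stores class indices directly. Unlabeled pixels, which are `-1` in the original `.regions.txt` files, wrap to `255` under `uint8`; this matches the default ignore index of MMSegmentation, so they are excluded from the loss:"
   ]
  },
  {
   "cell_type": "code",
   "metadata": {
   },
   "source": [
    "# The converted mask should be a palette-mode ('P') PNG whose pixel values are\n",
    "# class indices 0-7, plus 255 for any unlabeled pixels\n",
    "seg = Image.open(osp.join(data_root, ann_dir, '6000124.png'))\n",
    "print(seg.mode)\n",
    "print(np.unique(np.array(seg)))"
   ]
  },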
  {
   "cell_type": "code",
   "metadata": {
    "id": "WbeLYCp2k5hl"
   },
   "source": [
    "# split train/val set randomly\n",
    "split_dir = 'splits'\n",
    "mmcv.mkdir_or_exist(osp.join(data_root, split_dir))\n",
    "filename_list = [osp.splitext(filename)[0] for filename in mmcv.scandir(\n",
    "    osp.join(data_root, ann_dir), suffix='.png')]\n",
    "with open(osp.join(data_root, split_dir, 'train.txt'), 'w') as f:\n",
    "  # select first 4/5 as train set\n",
    "  train_length = int(len(filename_list)*4/5)\n",
    "  f.writelines(line + '\\n' for line in filename_list[:train_length])\n",
    "with open(osp.join(data_root, split_dir, 'val.txt'), 'w') as f:\n",
    "  # select last 1/5 as train set\n",
    "  f.writelines(line + '\\n' for line in filename_list[train_length:])"
   ]
  },
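  {
   "cell_type": "markdown",
   "metadata": {
   },
   "source": [
    "The split above simply takes the first 4/5 of the directory listing returned by `mmcv.scandir`, which is deterministic rather than random. If you prefer a shuffled split, here is a minimal sketch with a fixed seed for reproducibility:"
   ]
  },
  {
   "cell_type": "code",
   "metadata": {
   },
   "source": [
    "import random\n",
    "\n",
    "# Optional: shuffle the file stems with a fixed seed before splitting\n",
    "random.seed(0)\n",
    "random.shuffle(filename_list)\n",
    "train_length = int(len(filename_list) * 4 / 5)\n",
    "with open(osp.join(data_root, split_dir, 'train.txt'), 'w') as f:\n",
    "  f.writelines(line + '\\n' for line in filename_list[:train_length])\n",
    "with open(osp.join(data_root, split_dir, 'val.txt'), 'w') as f:\n",
    "  f.writelines(line + '\\n' for line in filename_list[train_length:])"
   ]
  },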
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "HchvmGYB_rrO"
   },
   "source": [
    "After downloading the data, we need to implement `load_annotations` function in the new dataset class `StanfordBackgroundDataset`."
   ]
  },
  {
   "cell_type": "code",
   "metadata": {
    "id": "LbsWOw62_o-X"
   },
   "source": [
    "from mmseg.datasets.builder import DATASETS\n",
    "from mmseg.datasets.custom import CustomDataset\n",
    "\n",
    "@DATASETS.register_module()\n",
    "class StanfordBackgroundDataset(CustomDataset):\n",
    "  CLASSES = classes\n",
    "  PALETTE = palette\n",
    "  def __init__(self, split, **kwargs):\n",
    "    super().__init__(img_suffix='.jpg', seg_map_suffix='.png', \n",
    "                     split=split, **kwargs)\n",
    "    assert osp.exists(self.img_dir) and self.split is not None\n",
    "\n",
    "    "
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "yUVtmn3Iq3WA"
   },
   "source": [
    "### Create a config file\n",
    "In the next step, we need to modify the config for the training. To accelerate the process, we finetune the model from trained weights."
   ]
  },
  {
   "cell_type": "code",
   "metadata": {
    "id": "Wwnj9tRzqX_A"
   },
   "source": [
    "from mmcv import Config\n",
    "cfg = Config.fromfile('../configs/pspnet/pspnet_r50-d8_512x1024_40k_cityscapes.py')"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "1y2oV5w97jQo"
   },
   "source": [
    "Since the given config is used to train PSPNet on the cityscapes dataset, we need to modify it accordingly for our new dataset.  "
   ]
  },
  {
   "cell_type": "code",
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/"
    "outputId": "6195217b-187f-4675-994b-ba90d8bb3078"
   },
   "source": [
    "from mmseg.apis import set_random_seed\n",
    "\n",
    "# Since we use only one GPU, BN is used instead of SyncBN\n",
    "cfg.norm_cfg = dict(type='BN', requires_grad=True)\n",
    "cfg.model.backbone.norm_cfg = cfg.norm_cfg\n",
    "cfg.model.decode_head.norm_cfg = cfg.norm_cfg\n",
    "cfg.model.auxiliary_head.norm_cfg = cfg.norm_cfg\n",
    "# modify num classes of the model in decode/auxiliary head\n",
    "cfg.model.decode_head.num_classes = 8\n",
    "cfg.model.auxiliary_head.num_classes = 8\n",
    "\n",
    "# Modify dataset type and path\n",
    "cfg.dataset_type = 'StanfordBackgroundDataset'\n",
    "cfg.data_root = data_root\n",
    "\n",
    "cfg.data.samples_per_gpu = 8\n",
    "cfg.data.workers_per_gpu=8\n",
    "\n",
    "cfg.img_norm_cfg = dict(\n",
    "    mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True)\n",
    "cfg.crop_size = (256, 256)\n",
    "cfg.train_pipeline = [\n",
    "    dict(type='LoadImageFromFile'),\n",
    "    dict(type='LoadAnnotations'),\n",
    "    dict(type='Resize', img_scale=(320, 240), ratio_range=(0.5, 2.0)),\n",
    "    dict(type='RandomCrop', crop_size=cfg.crop_size, cat_max_ratio=0.75),\n",
    "    dict(type='RandomFlip', flip_ratio=0.5),\n",
    "    dict(type='PhotoMetricDistortion'),\n",
    "    dict(type='Normalize', **cfg.img_norm_cfg),\n",
    "    dict(type='Pad', size=cfg.crop_size, pad_val=0, seg_pad_val=255),\n",
    "    dict(type='DefaultFormatBundle'),\n",
    "    dict(type='Collect', keys=['img', 'gt_semantic_seg']),\n",
    "]\n",
    "\n",
    "cfg.test_pipeline = [\n",
    "    dict(type='LoadImageFromFile'),\n",
    "    dict(\n",
    "        type='MultiScaleFlipAug',\n",
    "        img_scale=(320, 240),\n",
    "        # img_ratios=[0.5, 0.75, 1.0, 1.25, 1.5, 1.75],\n",
    "        flip=False,\n",
    "        transforms=[\n",
    "            dict(type='Resize', keep_ratio=True),\n",
    "            dict(type='RandomFlip'),\n",
    "            dict(type='Normalize', **cfg.img_norm_cfg),\n",
    "            dict(type='ImageToTensor', keys=['img']),\n",
    "            dict(type='Collect', keys=['img']),\n",
    "        ])\n",
    "]\n",
    "\n",
    "\n",
    "cfg.data.train.type = cfg.dataset_type\n",
    "cfg.data.train.data_root = cfg.data_root\n",
    "cfg.data.train.img_dir = img_dir\n",
    "cfg.data.train.ann_dir = ann_dir\n",
    "cfg.data.train.pipeline = cfg.train_pipeline\n",
    "cfg.data.train.split = 'splits/train.txt'\n",
    "\n",
    "cfg.data.val.type = cfg.dataset_type\n",
    "cfg.data.val.data_root = cfg.data_root\n",
    "cfg.data.val.img_dir = img_dir\n",
    "cfg.data.val.ann_dir = ann_dir\n",
    "cfg.data.val.pipeline = cfg.test_pipeline\n",
    "cfg.data.val.split = 'splits/val.txt'\n",
    "\n",
    "cfg.data.test.type = cfg.dataset_type\n",
    "cfg.data.test.data_root = cfg.data_root\n",
    "cfg.data.test.img_dir = img_dir\n",
    "cfg.data.test.ann_dir = ann_dir\n",
    "cfg.data.test.pipeline = cfg.test_pipeline\n",
    "cfg.data.test.split = 'splits/val.txt'\n",
    "\n",
    "# We can still use the pre-trained Mask RCNN model though we do not need to\n",
    "# use the mask branch\n",
    "cfg.load_from = 'checkpoints/pspnet_r50-d8_512x1024_40k_cityscapes_20200605_003338-2966598c.pth'\n",
    "\n",
    "# Set up working dir to save files and logs.\n",
    "cfg.work_dir = './work_dirs/tutorial'\n",
    "\n",
    "cfg.runner.max_iters = 200\n",
    "cfg.log_config.interval = 10\n",
    "cfg.evaluation.interval = 200\n",
    "cfg.checkpoint_config.interval = 200\n",
    "\n",
    "# Set seed to facitate reproducing the result\n",
    "cfg.seed = 0\n",
    "set_random_seed(0, deterministic=False)\n",
    "cfg.gpu_ids = range(1)\n",
    "\n",
    "# Let's have a look at the final config used for training\n",
    "print(f'Config:\\n{cfg.pretty_text}')"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "QWuH14LYF2gQ"
   },
   "source": [
    "### Train and Evaluation"
   ]
  },
  {
   "cell_type": "code",
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/"
    "outputId": "422219ca-d7a5-4890-f09f-88c959942e64"
   },
   "source": [
    "from mmseg.datasets import build_dataset\n",
    "from mmseg.models import build_segmentor\n",
    "from mmseg.apis import train_segmentor\n",
    "\n",
    "\n",
    "# Build the dataset\n",
    "datasets = [build_dataset(cfg.data.train)]\n",
    "\n",
    "# Build the detector\n",
    "model = build_segmentor(\n",
    "    cfg.model, train_cfg=cfg.get('train_cfg'), test_cfg=cfg.get('test_cfg'))\n",
    "# Add an attribute for visualization convenience\n",
    "model.CLASSES = datasets[0].CLASSES\n",
    "\n",
    "# Create work_dir\n",
    "mmcv.mkdir_or_exist(osp.abspath(cfg.work_dir))\n",
    "train_segmentor(model, datasets, cfg, distributed=False, validate=True, \n",
    "                meta=dict())"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "DEkWOP-NMbc_"
   },
   "source": [
    "Inference with trained model"
   ]
  },
  {
   "cell_type": "code",
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/",
     "height": 645
    "outputId": "1437419c-869a-4902-df86-d4f6f8b2597a"
   },
   "source": [
    "img = mmcv.imread('iccv09Data/images/6000124.jpg')\n",
    "\n",
    "model.cfg = cfg\n",
    "result = inference_segmentor(model, img)\n",
    "plt.figure(figsize=(8, 6))\n",
    "show_result_pyplot(model, img, result, palette)"
   ]
  },
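  {
   "cell_type": "markdown",
   "metadata": {
   },
   "source": [
    "To save the blended prediction to disk instead of (or in addition to) plotting it, we can use `show_result`, which segmentors inherit from `BaseSegmentor`; the output path below is just an example:"
   ]
  },
  {
   "cell_type": "code",
   "metadata": {
   },
   "source": [
    "# Write the overlay of the image and the prediction to a file\n",
    "model.show_result(img, result, palette=palette,\n",
    "                  out_file='./work_dirs/tutorial/result.png', opacity=0.5)"
   ]
  },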
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": []
 ],
 "metadata": {
  "accelerator": "GPU",
  "colab": {
   "collapsed_sections": [],
   "include_colab_link": true,
   "name": "MMSegmentation Tutorial.ipynb",
   "provenance": []
  },
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.7.0"
  },
  "pycharm": {
   "stem_cell": {
    "cell_type": "raw",
    "metadata": {
     "collapsed": false
    },
    "source": []
   }
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2