Concepedia

TLDR

The Segment Anything Model (SAM) is widely used for image segmentation, yet it performs poorly on medical images because it lacks domain-specific knowledge. The study aims to improve SAM’s performance on medical images. The authors introduce Med-SA, a lightweight adaptation of SAM that uses a Space-Depth Transpose (SD-Trans) to extend the 2D model to 3D medical images and a Hyper-Prompting Adapter (HyP-Adpt) to condition the adaptation on medical prompts, and they evaluate it on 17 segmentation tasks across multiple image modalities. Med-SA outperforms several state-of-the-art methods while updating only 2% of the parameters. Code is available at https://github.com/KidsWithTokens/Medical-SAM-Adapter.

Abstract

The Segment Anything Model (SAM) has recently gained popularity in the field of image segmentation due to its impressive capabilities in various segmentation tasks and its prompt-based interface. However, recent studies and individual experiments have shown that SAM underperforms in medical image segmentation due to its lack of medical-specific knowledge. This raises the question of how to enhance SAM's segmentation capability for medical images. In this paper, instead of fine-tuning the SAM model, we propose the Medical SAM Adapter (Med-SA), which incorporates domain-specific medical knowledge into the segmentation model using a light yet effective adaptation technique. In Med-SA, we propose Space-Depth Transpose (SD-Trans) to adapt 2D SAM to 3D medical images and Hyper-Prompting Adapter (HyP-Adpt) to achieve prompt-conditioned adaptation. We conduct comprehensive evaluation experiments on 17 medical image segmentation tasks across various image modalities. Med-SA outperforms several state-of-the-art (SOTA) medical image segmentation methods, while updating only 2% of the parameters. Our code is released at https://github.com/KidsWithTokens/Medical-SAM-Adapter.
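
To make the first component concrete, here is a minimal sketch of the SD-Trans idea, assuming SAM image embeddings reshaped to (slices, tokens, channels); the class name, shapes, and the shared-attention wiring are illustrative assumptions, not the released implementation:

```python
import torch
import torch.nn as nn

class SDTransAttention(nn.Module):
    """Sketch of Space-Depth Transpose (SD-Trans): the same (frozen)
    attention weights are applied once over the spatial tokens of each
    slice and once, after transposing, over the depth axis, so a 2D
    model can capture inter-slice correlation in a 3D scan."""

    def __init__(self, dim: int, num_heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        for p in self.attn.parameters():  # pretrained SAM weights stay frozen
            p.requires_grad = False

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (D, N, C) -- D slices, N spatial tokens per slice, C channels.
        # Spatial branch: each slice is a batch item, tokens are the sequence.
        s, _ = self.attn(x, x, x)                 # (D, N, C)
        # Depth branch: swap the slice and token axes so depth becomes the
        # sequence; the frozen 2D weights now mix information across slices.
        xt = x.transpose(0, 1)                    # (N, D, C)
        d, _ = self.attn(xt, xt, xt)
        return s + d.transpose(0, 1)              # back to (D, N, C)

if __name__ == "__main__":
    vol = torch.randn(16, 196, 256)   # 16 slices, 14x14 tokens, 256 channels
    print(SDTransAttention(256)(vol).shape)  # torch.Size([16, 196, 256])
```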
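
HyP-Adpt can be sketched in the same spirit: a hyper-network projects the prompt embedding into a small weight matrix that is applied inside a bottleneck adapter, so the adaptation itself is conditioned on the prompt. The layer names, bottleneck size, and residual placement below are assumptions for illustration, not the paper's exact design:

```python
import torch
import torch.nn as nn

class HyperPromptAdapter(nn.Module):
    """Sketch of prompt-conditioned adaptation (HyP-Adpt): the prompt
    embedding generates a weight matrix that reweights the adapter's
    bottleneck features, so different prompts yield different adaptations."""

    def __init__(self, dim: int, bottleneck: int = 32, prompt_dim: int = 256):
        super().__init__()
        self.down = nn.Linear(dim, bottleneck)   # adapter down-projection
        self.up = nn.Linear(bottleneck, dim)     # adapter up-projection
        # Hyper-network: prompt -> (bottleneck x bottleneck) weight matrix.
        self.weight_gen = nn.Linear(prompt_dim, bottleneck * bottleneck)
        self.act = nn.ReLU()
        self.k = bottleneck

    def forward(self, feat: torch.Tensor, prompt: torch.Tensor) -> torch.Tensor:
        # feat: (B, N, dim) image features; prompt: (B, prompt_dim) embedding.
        h = self.act(self.down(feat))                          # (B, N, k)
        w = self.weight_gen(prompt).view(-1, self.k, self.k)   # (B, k, k)
        h = self.act(torch.bmm(h, w))            # prompt-generated mixing
        return feat + self.up(h)                 # residual adapter output

if __name__ == "__main__":
    adapter = HyperPromptAdapter(dim=768)
    out = adapter(torch.randn(2, 196, 768), torch.randn(2, 256))
    print(out.shape)  # torch.Size([2, 196, 768])
```

Because only such adapter and hyper-network layers are trained while the SAM backbone stays frozen, a small fraction of parameters (2% in the paper) needs updating.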