Replies: 6 comments 3 replies
-
Highly interesting! (@dlangenk you should take a look at this.) We recently replaced the TensorFlow Mask R-CNN implementation of MAIA with a PyTorch Faster R-CNN implementation (from MMDetection), so the basic dependencies for SAM are already met. While we planned to replace Faster R-CNN with something better from MMDetection (which would be very easy), I'd be open to ditching MMDetection altogether if SAM is good enough.

This PR can give you an idea of which files must be touched if Faster R-CNN is to be replaced. It includes many insignificant changes in PHP files; the more relevant changes were made to the Python files. MAIA basically has one Python script to run the novelty detection, one to generate a training dataset (i.e., scaling images as part of the UnKnoT method), one to run training on Faster R-CNN and one to run inference. Each script "communicates" with the PHP backend via JSON input/output files. It shouldn't be too hard to replace the scripts, but this needs careful evaluation first. I see three possible applications for SAM in BIIGLE:
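To make the script-to-backend contract concrete, a replacement script would only need to keep the same JSON in/out handshake. A minimal sketch is below; the file names and JSON keys (`images`, `annotations`) are my own guesses for illustration, not MAIA's actual schema:

```python
import json


def run_inference(input_path, output_path):
    """Sketch of the JSON in/out contract a MAIA-style script follows.

    The PHP backend writes the parameters to a JSON file, the Python
    script does its work and writes the results back as JSON for the
    backend to pick up. Keys here are illustrative only.
    """
    with open(input_path) as f:
        params = json.load(f)

    # ... the actual model (Faster R-CNN, SAM, ...) would run here ...
    results = {image: [] for image in params.get("images", [])}

    with open(output_path, "w") as f:
        json.dump({"annotations": results}, f)
```

A swapped-in SAM script could keep exactly this interface and only change the middle part, leaving the PHP side untouched.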
So many ideas! Currently we don't have the resources to work on this ourselves, so it would be awesome if you would consider contributing!
-
I had a look at the demo and tried it with a few images; it looks pretty impressive. I also guess Nr. 3 is the most promising one at the moment, but then it would be nice to have the same or similar user interface support as in the demo. Regarding annotation without a label, this could easily be handled by creating another global label called "object" or something similar. I will play with this a little more after Easter. With the demo, I guess we can quickly evaluate the potential for cases 1 and 2 as well.
-
Brilliant! Thank you for the quick reply. I will look into PR #117 to learn how to connect it to the interface.
-
Hi, after looking into the PR, if I understood everything correctly, I think the quickest update would be 2), modifying the following 2 files: SAM does not come with a training script, so unless the pipeline actually makes use of the

On the other hand, 3) would be really interesting to look into. Is there a suggested way to set up a test environment for a new feature?
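To give an idea of the glue code such a swap would need: SAM's automatic mask generator returns one dict per mask with a `bbox` key in XYWH format, and a small converter could turn that into circle proposals like the ones MAIA's novelty detection stage produces. The `(x, y, r)` output format below is my assumption for illustration, not MAIA's actual schema:

```python
import math


def sam_mask_to_circle(mask_record):
    """Convert one SAM automatic-mask record to an (x, y, r) circle.

    SAM's mask generator returns dicts with a "bbox" key in XYWH
    format; the circle center is the box center and the radius is
    half the box diagonal, so the circle encloses the whole box.
    The (x, y, r) target format is assumed, not MAIA's real one.
    """
    x, y, w, h = mask_record["bbox"]
    cx = x + w / 2.0
    cy = y + h / 2.0
    r = math.hypot(w, h) / 2.0
    return cx, cy, r
```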
-
I did some experiments with the "segment everything" part and the results are not so good at the moment. I created a script to download the images of a volume, apply the SAM "segment everything" method and then upload the results to BIIGLE using "Object" as the label. I attach the script here, but plan to make it available in the community-scripts repo later on. As I cannot attach .py files here, it is additionally zipped. I also tried different parameter sets, which change the outcome quite a bit.
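For anyone wanting to try something similar, the upload step could build annotation payloads roughly like this. The field names and the `shape_id` value are assumptions based on my reading of the BIIGLE API, so verify them against the API docs before relying on them:

```python
def make_rectangle_payload(bbox, label_id, confidence=1.0):
    """Build a BIIGLE-style annotation payload from a SAM bbox (XYWH).

    Assumed target: the api/v1/images/{id}/annotations endpoint with
    the fields shape_id, label_id, confidence and a flat points list.
    Shape ID 5 (Rectangle) and the corner order are assumptions, too.
    """
    x, y, w, h = bbox
    return {
        "shape_id": 5,  # assumed: Rectangle
        "label_id": label_id,
        "confidence": confidence,
        # four corners as a flat x1, y1, ..., x4, y4 list
        "points": [x, y, x + w, y, x + w, y + h, x, y + h],
    }
```

A script would then POST one such payload per SAM mask, with `label_id` pointing at the global "Object" label.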
-
FYI: We are working on idea 3 and will release it soon. Progress can be tracked in #584.
-
Hi,
Thanks for maintaining and hosting BIIGLE, it's an awesome project!
I have been playing with the new promptable segmentation model SAM, and I was wondering if there was a way to contribute to MAIA by swapping Mask R-CNN for SAM.
Technically speaking, my main question would be: how modular is the implementation of MAIA?
Possibly @mzur could give some insights from the #428 Hackathon? :)
Why would MAIA benefit from SAM?
The pipeline of SAM itself is also interesting, because it has a significant overlap with MAIA: both aim to refine the model using the annotators' output in an iterative way. This, however, would require a bit more research into how it could be integrated into MAIA.
Thanks,
Csabi