Fotoherkenning Paddenstoelen: een vloek of een zegen? (Photo recognition of mushrooms: a curse or a blessing?)

Coolia 2020(3)

In recent years there has been an explosion in the availability of smartphone apps that can help with mushroom identification in the field. The approaches vary widely: some apps identify mushrooms automatically using Artificial Intelligence (AI) and automated image recognition, others require the user to work through traditional dichotomous or multi-access keys, and some offer only a collection of images without any clear system for identifying a species of interest.

The Coolia article seems related to the article "Artificial Intelligence for plant identification on smartphones and tablets" (listed under Related documents below).

Related documents--------------------------------------------------------------------------------------
BACHELORARBEIT MAGIC MUSHROOM APP – Mit Deep Learning essbare Pilze erkennen (bachelor thesis: recognizing edible mushrooms with deep learning; Python)
https://www.ntb.ch/fileadmin/NTB_Institute/ICE/projekte/MagicMushroom/JUNG_R._WAGNER_D._MagicMushroom_App-Pilzklassifikation_mit_CNNs.pdf

Deep Shrooms: classifying mushroom images
https://tuomonieminen.github.io/deep-shrooms/
https://github.com/TuomoNieminen/deep-shrooms (Python)
https://teekoivi.users.cs.helsinki.fi/
https://ujjwalkarn.me/2016/08/11/intuitive-explanation-convnets/
https://www.youtube.com/watch?v=f6Bf3gl4hWY

ShroomNet: Künstliches neuronales Netz für die Bestimmung von Pilzarten (artificial neural network for identifying mushroom species)
https://www.obermeier.ch/wp-content/uploads/2018/12/ShroomNET_small.pdf

Artificial Intelligence for plant identification on smartphones and tablets
https://bsbi.org/wp-content/uploads/dlm_uploads/BSBI-News-144-pp34-40-plant-id-apps-final.pdf

https://web.plant.id/

Tuomas Nieminen: Deep Learning in Quantifying Vascular Burden from Brain Images
https://www.semanticscholar.org/paper/TUOMAS-NIEMINEN-DEEP-LEARNING-IN-QUANTIFYING-BURDEN-Eskola/aea24dc5822ac9f5af4801f9aaf9ab864cf23aea

Apps for identifying mushrooms---------------------------------------------------------------------
Obsidentify
https://play.google.com/store/apps/details?id=org.observation.obsidentify

Danish Svampeatlas
https://play.google.com/store/apps/details?id=com.noque.svampeatlas

German mushroom app
https://play.google.com/store/apps/details?id=com.nastylion.pilz

iNaturalist Seek
https://play.google.com/store/apps/details?id=org.inaturalist.seek

Google Lens
https://play.google.com/store/apps/details?id=com.google.ar.lens

Posted on July 5, 2020 03:02 PM by optilete

Comments

Nice article

Posted by ahospers over 3 years ago

Vision Model Updates

iNaturalist currently uses vision models in two main places:
1) a private web-based API used by the website and the iNaturalist iOS and Android apps, and
2) within the recently updated Seek app.

When Seek 2.0 was released in April, it included a different vision model than we were using on the web. At that time the web-based model was a third-generation model we started using in early 2018. That web-based model was trained with the idea it would be run on servers, and servers can be configured to have far more computing power than a mobile device. As a result that model was far too large to be run on mobile devices.

Early this year, with an updated Seek in mind, we started another training run with two main goals:
-shrinking the file size of the model, and
-allowing it to recommend taxonomic ranks other than species (e.g. families, genera, etc.).

Smaller
The mobile version of the model needs to be small in terms of file size to minimize the amount of data app users would need to download. Smaller models can also be used by more devices as they need fewer resources to run (e.g. memory, battery), and can generate results faster, which is important for Seek's real-time camera vision results. These models take a lot of time and money to train, so we also wanted a model that could be simultaneously trained to produce a large web-based version and a smaller version for use in mobile devices.

Unfortunately, shrinking the file size like this slightly decreased model accuracy compared to the larger web-based version (kind of similar to image compression), and we found that was an unavoidable tradeoff. We take this into account when processing the model results, and on average for a similar error rate, the mobile version might recommend a taxon at a higher taxonomic rank than the web-based version. The taxon results we show to users shouldn't be less accurate, but they may be less specific.
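The post does not say which tools iNaturalist uses for this step, but the size-versus-accuracy trade-off described above is typical of post-training quantization. A minimal sketch, assuming a trained TensorFlow/Keras classifier and using TensorFlow Lite (the file names are hypothetical, not iNaturalist's actual pipeline):

```python
# Minimal sketch of shrinking a trained Keras classifier for mobile use with
# TensorFlow Lite post-training quantization. File names are hypothetical and
# this is not iNaturalist's actual export pipeline.
import tensorflow as tf

# Load the full-size, server-side model (hypothetical file name).
model = tf.keras.models.load_model("vision_model_web.h5")

# Convert to TensorFlow Lite and let the converter quantize the weights,
# trading a small amount of accuracy for a much smaller file and lower
# memory/battery use on phones.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()

with open("vision_model_mobile.tflite", "wb") as f:
    f.write(tflite_model)

print(f"Mobile model size: {len(tflite_model) / 1e6:.1f} MB")
```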

More Species Represented
We wanted the model to include more species data, even when some species don't have enough photos to be recognized at species level. Some species have so few photos that, if we trained on that small set, the model likely wouldn't have enough information to reliably recognize them.

Our 2018 model only included taxa at species rank. We set a threshold for the number of photos, and species below the threshold were not included. We could still recommend higher taxa by doing some post-processing of results, but the model itself would only assign scores to species. In our latest training run we allowed the photos from species under the threshold to be rolled up into their ancestor taxa until the threshold was reached, and we allowed the model to assign scores to these non-species nodes. This allows more species to be represented in this newer model, sometimes at the genus level, where their photos are pooled with photos of other under-threshold species in the same genus. Now, instead of knowing nothing about these species, the model can at least identify the genus or family, etc.
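To make the roll-up concrete, here is a small illustrative sketch of the idea described above; the taxonomy, photo counts and threshold are invented, and the real training pipeline is of course far more involved:

```python
# Illustrative sketch of the "roll up under-threshold species" idea described
# above. The taxonomy, photo counts and threshold are invented for this example.
THRESHOLD = 1000  # minimum number of photos for a taxon to get its own class

# (species, genus, photo count)
species_counts = [
    ("Amanita muscaria", "Amanita", 25000),
    ("Amanita regalis",  "Amanita", 600),   # too few photos on its own
    ("Amanita gemmata",  "Amanita", 450),   # too few photos on its own
    ("Boletus edulis",   "Boletus", 18000),
]

classes = {}  # class label -> number of photos available for training
for species, genus, count in species_counts:
    if count >= THRESHOLD:
        # Enough photos: the species becomes its own leaf class.
        classes[species] = count
    else:
        # Too few photos: roll the photos up into the genus node so the model
        # can still score the genus even if it cannot name the species.
        # (In the real model the roll-up continues to family, order, etc.
        # until the threshold is reached.)
        classes[genus] = classes.get(genus, 0) + count

for label, count in sorted(classes.items()):
    print(f"{label}: {count} photos")
# Amanita: 1050, Amanita muscaria: 25000, Boletus edulis: 18000
```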

https://www.inaturalist.org/blog/25510-vision-model-updates

Posted by optilete over 3 years ago

Does this contain all the articles? I think you also had a German-language article back then.

Posted by ahospers over 3 years ago

With < hr > you get a nice horizontal line.

Posted by ahospers over 3 years ago

Nice piece. I was just testing plants, but that wasn't a good test; this one is much better: https://bsbi.org/wp-content/uploads/dlm_uploads/BSBI-News-144-pp34-40-plant-id-apps-final.pdf

Posted by ahospers about 3 years ago

Which test/review is good (URL) and which is bad (URL)?

Posted by optilete about 3 years ago

https://zenodo.org/record/7050651#.Y2gH23bMJaT
page 11

'Software and hardware used
The models in this project were trained using TensorFlow, Google's platform for machine learning. The code was written in Python 3, using Keras, an open-source package that acts as an interface to TensorFlow.
All preprocessing of the data (downloading, reading files, etc.) was written by the software developer involved in the project, in Python or in Bash scripts using standard Linux tools. The same applies to the analysis and presentation of the results.
The experimental automated cropping of images was done with ImageMagick.
All models use the InceptionV3 architecture. Tests were also run with other architectures, such as VGG16, ResNet50 and Xception. All trained models were stored [...]'
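
To give a concrete picture of the setup the quoted report describes (Keras as an interface to TensorFlow, InceptionV3 as the backbone), here is a minimal transfer-learning sketch; the directory layout, class count and hyperparameters are assumptions for illustration, not the project's actual code:

```python
# Minimal transfer-learning sketch in the spirit of the quoted report:
# Keras (as the interface to TensorFlow) with an InceptionV3 backbone.
# The directory layout, class count and hyperparameters are illustrative only.
import tensorflow as tf

IMG_SIZE = (299, 299)   # InceptionV3's default input resolution
NUM_CLASSES = 50        # placeholder number of mushroom taxa

# Read training images from a folder-per-class directory (assumed layout).
train_ds = tf.keras.utils.image_dataset_from_directory(
    "data/train", image_size=IMG_SIZE, batch_size=32)

# ImageNet-pretrained InceptionV3 without its original classification head.
base = tf.keras.applications.InceptionV3(include_top=False, weights="imagenet")
base.trainable = False  # freeze the backbone; train only the new head

inputs = tf.keras.Input(shape=IMG_SIZE + (3,))
x = tf.keras.applications.inception_v3.preprocess_input(inputs)
x = base(x, training=False)
x = tf.keras.layers.GlobalAveragePooling2D()(x)
outputs = tf.keras.layers.Dense(NUM_CLASSES, activation="softmax")(x)
model = tf.keras.Model(inputs, outputs)

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(train_ds, epochs=5)
model.save("shroom_inceptionv3.h5")
```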

Posted by optilete over 1 year ago
