Organizational Unit

Hessisches Ministerium für Wissenschaft und Kunst

Date established

City

Wiesbaden

Country

Germany

Publications

Description

Now showing 1 - 10 of 25
  • Research Data | Open Access
    Modeling Reward Learning Under Placebo Expectancies: A Q-Learning Approach
    (2022-05-10) Augustat, Nick; Müller, Erik Malte; Endres, Dominik; Chuang, Li-Ching; Panitz, Christian; Stolz, Christopher
  • Research Data | Open Access
    Activity classification models
    Gottwald, Jannis
  • Research Data | Restricted
    Nature 4.0: A networked sensor system for integrated biodiversity monitoring
    Bald, Lisa; Zeuss, Dirk; Frieß, Nicolas; Wöllauer, Stephan; Kohlbrecher, Viviane; Lindner, Kim; Farwig, Nina
  • Research Data | Open Access
    Test data for tRackIT R-Package
    Gottwald, Jannis
  • Research Data | Open Access
    Radiotracking campaign 2019 - raw data
    Frieß, Nicolas; Farwig, Nina; Nauss, Thomas; Reudenbach, Christoph; Quillfeldt, Petra; Gottwald, Jannis; Rösner, Sascha; Lindner, Kim; Masello, Juan F.; Ludwig, Marvin
  • Research Data | Open Access
    Distracted by Previous Experience: Integrating Selection History, Current Task Demands and Saliency in an Algorithmic Model (Springer, 2024)
    (Philipps-Universität Marburg, 07.05.2021) Endres, Dominik; Meibodi, Neda
    This repository contains the files and data necessary to recreate the results from the paper Meibodi, N., Abbasi, H., Schubö, A. et al. Distracted by Previous Experience: Integrating Selection History, Current Task Demands and Saliency in an Algorithmic Model. Comput Brain Behav 7, 268–285 (2024). https://doi.org/10.1007/s42113-024-00197-6
    Please go to Version 2 if you are interested in the files related to the other publication: N. Meibodi, H. Abbasi, A. Schuboe, D. Endres (2021) A Model of Selection History in Visual Attention, Proceedings of the 2021 Conference of the Society of Cognitive Science, Vienna, Austria.
  • Research Data | Open Access
    Identifying and Counting Avian Blood Cells in Whole Slide Images via Deep Learning
    Vogelbacher, Markus; Strehmann, Finja; Bellafkir, Hicham; Mühling, Markus; Korfhage, Nikolaus; Schneider, Daniel; Rösner, Sascha; Schabo, Dana G.; Farwig, Nina; Freisleben, Bernd
  • Research Data | Open Access
    Integration of optic flow into the sky compass network in the brain of the desert locust (Frontiers Version)
    Zittrell, Frederick; Pabst, Kathrin; Carlomagno, Elena; Rosner, Ronny; Pegel, Uta; Endres, Dominik; Homberg, Uwe
  • Research Data | Open Access
    Recognition of European mammals and birds in camera trap images using deep neural networks
    (Philipps-Universität Marburg) Schneider, Daniel; Lindner, Kim; Vogelbacher, Markus; Bellafkir, Hicham; Mühling, Markus; Farwig, Nina; Freisleben, Bernd
    This record contains the trained models and the test data sets presented in the papers "Recognizing European mammals and birds in camera trap images using convolutional neural networks" (Schneider et al., 2023) and "Recognition of European mammals and birds in camera trap images using deep neural networks" (Schneider et al., 2024). In these publications, we present deep neural network models to recognize both mammal and bird species in camera trap images.
    In the archive files "model2023_ConvNextBase.tar" and "model2023_EfficientNetV2.tar" as well as "model2024_ConvNextBase_species.tar" and "model2024_ConvNextBase_taxonomy.tar", we provide downloads of the best trained models from our 2023 and 2024 papers, respectively. All models are provided in the TensorFlow 2 SavedModel format (https://www.tensorflow.org/guide/saved_model). A script to load and run the models can be found in our Git repository: https://github.com/umr-ds/Marburg-Camera-Traps. There we also provide a code snippet to perform predictions with these models.
    In the archive files "data_MOF.tar" and "data_BNP.tar", we provide downloads for our Marburg Open Forest (MOF) and Białowieża National Park (BNP) data sets, consisting of about 2,500 and 15,000 labeled camera trap images, respectively. Each archive contains two folders named "img" and "md". The "img" folder contains the images, grouped in subfolders by recording date and camera trap id. The "md" folder contains the metadata for each image, which consists of the bounding box detections obtained using the MegaDetector model (https://github.com/agentmorris/MegaDetector). The metadata is grouped into YAML files for each label at different taxonomic levels.
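
The snippet below is a minimal sketch of the loading step described in the record above; it is not the authors' script (that lives in https://github.com/umr-ds/Marburg-Camera-Traps). The extracted archive name "model2024_ConvNextBase_species", the image path "example.jpg", the 300x300 input size, and the 0-1 input scaling are assumptions that must be adapted to the actual models.

# Minimal sketch: load one of the provided TensorFlow 2 SavedModels and
# classify a single camera trap image.
# Assumptions (not taken from the record): the .tar archive was extracted to
# "model2024_ConvNextBase_species/", the model expects 300x300 RGB input
# scaled to [0, 1], and "example.jpg" is an image on disk.
import numpy as np
import tensorflow as tf

model = tf.saved_model.load("model2024_ConvNextBase_species")
infer = model.signatures["serving_default"]

# Read and preprocess the image; add a batch dimension.
img = tf.io.read_file("example.jpg")
img = tf.image.decode_jpeg(img, channels=3)
img = tf.image.resize(img, (300, 300))
img = tf.expand_dims(tf.cast(img, tf.float32) / 255.0, axis=0)

# Exported signatures are called with keyword arguments, so look up the
# input tensor name from the signature itself.
input_name = list(infer.structured_input_signature[1].keys())[0]
outputs = infer(**{input_name: img})

# Take the first output tensor and report the highest-scoring class index.
scores = next(iter(outputs.values())).numpy()[0]
print("top class index:", int(np.argmax(scores)), "score:", float(scores.max()))

The actual preprocessing (input resolution, normalization, and the mapping from class index to species name) should be taken from the authors' repository linked in the record.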