
Publications

2024

  • Gemein, L. A., Schirrmeister, R. T., Boedecker, J., & Ball, T. (2024). Brain Age Revisited: Investigating the State vs. Trait Hypotheses of EEG-derived Brain-Age Dynamics with Deep Learning. Imaging Neuroscience. PDF
  • Hoffmann, J., Fernandez, D., Brosseit, J., Bernhard, J., Esterle, K., Werling, M., Karg, M., & Boedecker, J. (2024). PlanNetX: Learning an Efficient Neural Network Planner from MPC for Longitudinal Control. Revision for 6th Annual Learning for Dynamics & Control Conference (L4DC 2024). arXiv
  • Kalweit, G., Klett, A., Naouar, M., Rahnfeld, J., Vogt, Y., Ramirez, D. L. I., Berger, R., Afonso, J. D., Hartmann, T. N., Follo, M., Luebbert, M., Mertelsmann, R., Ullrich, E., Boedecker, J., & Kalweit, M. (2024). Unsupervised Feature Extraction from a Foundation Model Zoo for Cell Similarity Search in Oncological Microscopy Across Devices. In ICML 2024 Workshop on Foundation Models in the Wild. PDF
  • Kiessner, A. K., Schirrmeister, R. T., Boedecker, J., & Ball, T. (2024). Reaching the ceiling? Empirical scaling behaviour for deep EEG pathology classification. Computers in Biology and Medicine, 108681. PDF
  • Rahnfeld, J., Naouar, M., Kalweit, G., Boedecker, J., Dubruc, E., & Kalweit, M. (2024). A Comparative Study of Explainability Methods for Whole Slide Classification of Lymph Node Metastases using Vision Transformers. arXiv
  • Schneider, M., Krug, R., Vaskevicius, N., Palmieri, L., & Boedecker, J. (2024). The Surprising Ineffectiveness of Pre-Trained Visual Representations for Model-Based Reinforcement Learning. 38th Conference on Neural Information Processing Systems (NeurIPS 2024).
  • Wang, J., Li, Y., Zhang, Y., Pan, W., & Kaski, S. (2024). Open Ad Hoc Teamwork with Cooperative Game Theory. International Conference on Machine Learning. arXiv
  • Zhang, Y., Deekshith, U., Wang, J., & Boedecker, J. (2024). LCPPO: An Efficient Multi-agent Reinforcement Learning Algorithm on Complex Railway Network. In 34th International Conference on Automated Planning and Scheduling. PDF
  • Zhang, Y., Hoffmann, J., & Boedecker, J. (2024). UDUC: An Uncertainty-driven Approach for Learning-based Robust Control. arXiv
  • Zhang, B., Zhang, Y., Frison, L., Brox, T., & Bödecker, J. (2024). Constrained Reinforcement Learning with Smoothed Log Barrier Function. arXiv preprint arXiv:2403.14508. arXiv
  • Zhu, H., De La Crompe, B., Kalweit, G., Schneider, A., Kalweit, M., Diester, I., & Boedecker, J. (2024). Multi-intention Inverse Q-learning for Interpretable Behavior Representation. Transactions on Machine Learning Research. OpenReview

2023

  • Dreissig, M., Piewak, F., & Boedecker, J. (2023). On the Calibration of Uncertainty Estimation in LiDAR-based Semantic Segmentation. In 2023 IEEE 26th International Conference on Intelligent Transportation Systems (ITSC) (pp. 4798-4805). IEEE. PDF
  • Dreissig, M., Scheuble, D., Piewak, F., & Boedecker, J. (2023). Survey on lidar perception in adverse weather conditions. In 2023 IEEE Intelligent Vehicles Symposium (IV) (pp. 1-8). IEEE. PDF
  • Dorka, N., Welschehold, T., Boedecker, J., & Burgard, W. (2023). Adaptively calibrated critic estimates for deep reinforcement learning. IEEE Robotics and Automation Letters, 8(2), 624-631. PDF
  • Ghezzi, A., Hoffmann, J., Frey, J., Boedecker, J., & Diehl, M. (2023). Imitation Learning from Nonlinear MPC via the Exact Q-Loss and its Gauss-Newton Approximation. In 2023 62nd IEEE Conference on Decision and Control (CDC) (pp. 4766-4771). IEEE. PDF
  • Guttikonda, S., Achterhold, J., Li, H., Boedecker, J., & Stueckler, J. (2023). Context-Conditional Navigation with a Learning-Based Terrain- and Robot-Aware Dynamics Model. In 2023 European Conference on Mobile Robots (ECMR) (pp. 1-7). IEEE. PDF
  • von Hartz, J. O., Chisari, E., Welschehold, T., Burgard, W., Boedecker, J., & Valada, A. (2023). The Treachery of Images: Bayesian Scene Keypoints for Deep Policy Learning in Robotic Manipulation. In IEEE Robotics and Automation Letters. PDF
  • Kalweit, M., Burden, A. M., Boedecker, J., Hügle, T., & Burkard, T. (2023). Patient groups in Rheumatoid arthritis identified by deep learning respond differently to biologic or targeted synthetic DMARDs. PLOS Computational Biology, 19(6), e1011073. PDF
  • Kiessner, A. K., Schirrmeister, R. T., Gemein, L. A., Boedecker, J., & Ball, T. (2023). An extended clinical EEG dataset with 15,300 automatically labelled recordings for pathology decoding. NeuroImage: Clinical, 39, 103482. PDF
  • Mirchevska, B., Werling, M., & Boedecker, J. (2023). Optimizing trajectories for highway driving with offline reinforcement learning. Frontiers in Future Transportation, 4, 1076439. PDF
  • Naouar, M., Kalweit, G., Klett, A., Vogt, Y., Silvestrini, P., Ramirez, D. L. I., Mertelsmann, R., Boedecker, J., & Kalweit, M. (2023). CellMixer: Annotation-free Semantic Cell Segmentation of Heterogeneous Cell Populations. NeurIPS 2023 Workshop on Medical Imaging. arXiv
  • Naouar, M., Kalweit, G., Mastroleo, I., Poxleitner, P., Metzger, M., Boedecker, J., & Kalweit, M. (2023). Robust Tumor Detection from Coarse Annotations via Multi-Magnification Ensembles. Preprint arXiv
  • Reiter, R., Hoffmann, J., Boedecker, J., & Diehl, M. (2023, June). A hierarchical approach for strategic motion planning in autonomous racing. In 2023 European Control Conference (ECC) (pp. 1-8). IEEE. PDF
  • Schmidt-Barbo, P., Kalweit, G., Naouar, M., Paschold, L., Willscher, E., Schultheiß, C., Märkl, B., Dirnhofer, S., Tzankov, A., Binder, M., & Kalweit, M. (2023). Detection of disease-specific signatures in B1 cell repertoires of lymphomas using machine learning. arXiv
  • Vogt, Y., Naouar, M., Kalweit, M., Miething, C. C., Duyster, J., Mertelsmann, R., Kalweit, G., & Boedecker, J. (2023). Stable Online and Offline Reinforcement Learning for Antibody CDRH3 Design. NeurIPS 2023 Workshop on Machine Learning in Structural Biology. arXiv
  • Yan, S., Zhang, Y., Zhang, B., Boedecker, J., & Burgard, W. (2023). Geometric Regularity with Robot Intrinsic Symmetry in Reinforcement Learning. IEEE International Conference on Robotics and Automation (ICRA). arXiv
  • Zhang, Y., Boedecker, J., Li, C., & Zhou, G. (2023). Incorporating Recurrent Reinforcement Learning into Model Predictive Control for Adaptive Control in Autonomous Driving. arXiv
  • Zhang, Y., Wang, J., & Boedecker, J. (2023). Robust Reinforcement Learning in Continuous Control Tasks with Uncertainty Set Regularization. Accepted at CoRL 2023. PDF
  • Zhu, H., De La Crompe, B., Kalweit, G., Schneider, A., Kalweit, M., Diester, I., & Boedecker, J. (2023). Multi-intention Inverse Q-learning for Interpretable Behavior Representation. arXiv

2022

  • Borja-Diaz, J., Mees, O., Kalweit, G., Hermann, L., Boedecker, J., & Burgard, W. (2022). Affordance learning from play for sample-efficient policy learning. In 2022 International Conference on Robotics and Automation (ICRA) (pp. 6372-6378). IEEE. PDF
  • Chisari, E., Welschehold, T., Boedecker, J., Burgard, W., & Valada, A. (2022). Correct me if I am wrong: Interactive learning for robotic manipulation. IEEE Robotics and Automation Letters, 7(2), 3695-3702. PDF
  • Kalweit, G., Kalweit, M., Alyahyay, M., Jaeckel, Z., Steenbergen, F., Hardung, S., Diester, I., & Boedecker, J. (2022). NeuRL: Closed-form Inverse Reinforcement Learning for Neural Decoding. Accepted at ICML 2021 Workshop on Computational Biology. arXiv
  • Kalweit, G., Kalweit, M., & Boedecker, J. (2022). Robust and Data-efficient Q-learning by Composite Value-estimation. Transactions on Machine Learning Research 2022. PDF
  • Kalweit, M., Kalweit, G., Werling, M., & Boedecker, J. (2022). Deep surrogate Q-learning for autonomous driving. In 2022 International Conference on Robotics and Automation (ICRA) (pp. 1578-1584). IEEE. PDF
  • Rosete-Beas, E., Mees, O., Kalweit, G., Boedecker, J., & Burgard, W. (2022). Latent plans for task-agnostic offline reinforcement learning. In Conference on Robot Learning (pp. 1838-1849). PMLR. PDF

2021

  • Kalweit, G., Huegle, M., Werling, M., & Boedecker, J. (2021). Q-learning with long-term action-space shaping to model complex behavior for autonomous lane changes. In 2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) (pp. 5641-5648). IEEE. PDF
  • Kalweit, M., Kalweit, G., & Boedecker, J. (2021). AnyNets: Adaptive Deep Neural Networks for Medical Data with Missing Values. In IJCAI 2021 Workshop on Artificial Intelligence for Function, Disability, and Health (pp. 12-21). PDF
  • Kalweit, M., Walker, U. A., Finckh, A., Müller, R., Kalweit, G., Scherer, A., Boedecker, J., & Hügle, T. (2021). Personalized prediction of disease activity in patients with rheumatoid arthritis using an adaptive deep neural network. PLoS One, 16(6), e0252289. PDF
  • Mirchevska, B., Hügle, M., Kalweit, G., Werling, M., & Boedecker, J. (2021). Amortized Q-learning with model-based action proposals for autonomous driving on highways. In 2021 IEEE international conference on robotics and automation (ICRA) (pp. 1028-1035). IEEE. PDF
  • Ranjbar, A., Vien, N. A., Ziesche, H., Boedecker, J., & Neumann, G. (2021). Residual feedback learning for contact-rich manipulation tasks with uncertainty. In 2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) (pp. 2383-2390). IEEE. PDF

2020

  • Frison, L., Paul, S., Koller, T., Fischer, D., Frison, G., Boedecker, J., & Engelmann, P. (2020). Hardware-in-the-loop test of learning-based controllers for grid-supportive building heating operation. IFAC-PapersOnLine, 53(2), 17107-17112.
  • Gemein, L. A., Schirrmeister, R. T., Chrabąszcz, P., Wilson, D., Boedecker, J., Schulze-Bonhage, A., Hutter, F., & Ball, T. (2020). Machine-learning-based diagnostics of EEG pathology. NeuroImage, 220, 117021.
  • Hügle, M., Kalweit, G., Hügle, T., & Boedecker, J. (2020). A Dynamic Deep Neural Network For Multimodal Clinical Data Analysis. AAAI 2020 Workshop on Health Intelligence. Explainable AI in Healthcare and Medicine. Studies in Computational Intelligence, Springer. arXiv
  • Hügle, M., Kalweit, G., Werling, M., & Boedecker, J. (2020). Dynamic interaction-aware scene understanding for reinforcement learning in autonomous driving. In 2020 IEEE international conference on robotics and automation (ICRA) (pp. 4329-4335). IEEE.
  • Hügle, M., Omoumi, P., van Laar, J. M., Boedecker, J., & Hügle, T. (2020). Applied machine learning and artificial intelligence in rheumatology. Rheumatology advances in practice, 4(1), rkaa005.
  • Kalweit, G., Huegle, M., Werling, M., & Boedecker, J. (2020). Deep constrained q-learning. arXiv
  • Kollmitz, M., Koller, T., Boedecker, J., & Burgard, W. (2020). Learning human-aware robot navigation from physical interaction via inverse reinforcement learning. In 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) (pp. 11025-11031). IEEE.

2019

  • Abou-Hussein, M., Müller, S. H., & Boedecker, J. (2019). Multimodal spatio-temporal information in end-to-end networks for automotive steering prediction. In 2019 International Conference on Robotics and Automation (ICRA) (pp. 8641-8647). IEEE.
  • Huegle, M., Kalweit, G., Mirchevska, B., Werling, M., & Boedecker, J. (2019). Dynamic input for deep reinforcement learning in autonomous driving. In 2019 IEEE/RSJ international conference on intelligent robots and systems (IROS) (pp. 7566-7573). IEEE.
  • Huegle, M., Kalweit, G., Werling, M., & Boedecker, J. (2019).
  • Koller, T., Berkenkamp, F., Turchetta, M., Boedecker, J., & Krause, A. (2019). Learning-based Model Predictive Control for Safe Reinforcement Learning. Extended abstract at RSS 2019 Workshop on Robust Autonomy.
  • Kuhner, D., Fiederer, L. D. J., Aldinger, J., Burget, F., Völker, M., Schirrmeister, R. T., Do, C., Boedecker, J., Nebel, B., Ball, T., & Burgard, W. (2019). A service assistant combining autonomous robotics, flexible goal formulation, and deep-learning-based brain–computer interfacing. Robotics and Autonomous Systems, 116, 98-113.
  • Wülfing, J. M., Kumar, S. S., Boedecker, J., Riedmiller, M., & Egert, U. (2019). Adaptive long-term control of biological neural networks with deep reinforcement learning. Neurocomputing, 342, 66-74.
  • Zhang, J., Tai, L., Yun, P., Xiong, Y., Liu, M., Boedecker, J., & Burgard, W. (2019). Vr-goggles for robots: Real-to-sim domain adaptation for visual control. IEEE Robotics and Automation Letters, 4(2), 1148-1155.
  • Zhang, J., Wetzel, N., Dorka, N., Boedecker, J., & Burgard, W. (2019). Scheduled intrinsic drive: A hierarchical take on intrinsically motivated exploration.

2018

  • Dürichen, R., Verma, K. D., Yee, S. Y., Rocznik, T., Schmidt, P., Bödecker, J., & Peters, C. (2018). Prediction of electrocardiography features points using seismocardiography data: a machine learning approach. In Proceedings of the 2018 ACM International Symposium on Wearable Computers (pp. 96-99).
  • Heller, S., Hügle, M., Nematollahi, I., Manzouri, F., Dümpelmann, M., Schulze-Bonhage, A., Boedecker, J., & Woias, P. (2018). Hardware implementation of a performance and energy-optimized convolutional neural network for seizure detection. In 2018 40th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC) (pp. 2268-2271). IEEE.
  • Hügle, M., Heller, S., Watter, M., Blum, M., Manzouri, F., Dümpelmann, M., Schulze-Bonhage, A., Woias, P., & Boedecker, J. (2018).
  • Mirchevska, B., Pek, C., Werling, M., Althoff, M., & Boedecker, J. (2018). High-level decision making for safe and reasonable autonomous lane changing using reinforcement learning. In 2018 21st International Conference on Intelligent Transportation Systems (ITSC) (pp. 2156-2162). IEEE. PDF
  • Wülfing, J., Kumar, S. S., Boedecker, J., Riedmiller, M. A., & Egert, U. (2018).

2017

  • Burget, F., Fiederer, L. D. J., Kuhner, D., Völker, M., Aldinger, J., Schirrmeister, R. T., Do, C., Boedecker, J., Nebel, B., Ball, T., & Burgard, W. (2017). Acting thoughts: Towards a mobile robotic service assistant for users with limited communication skills. In Mobile Robots (ECMR), 2017 European Conference on (pp. 1-6). IEEE. PDF
  • Groß, W., Lange, S., Bödecker, J., & Blum, M. (2017). Predicting Time Series with Space-Time Convolutional and Recurrent Neural Networks. Proc. of the 25th ESANN: 71-76. PDF
  • Kalweit, G., & Boedecker, J. (2017). Uncertainty-driven imagination for continuous deep reinforcement learning. In Conference on robot learning (pp. 195-206). PMLR. PDF
  • Mirchevska, B., Blum, M., Louis, L., Boedecker, J., & Werling, M. (2017). Reinforcement Learning for Autonomous Maneuvering in Highway Scenarios. In Workshop for Driving Assistance Systems and Autonomous Driving (pp. 32-41). PDF
  • Zhang, J., Springenberg, J. T., Boedecker, J., & Burgard, W. (2017). Deep reinforcement learning with successor features for navigation across similar environments. In 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) (pp. 2371-2378). IEEE. PDF
  • Zhang, J., Tai, L., Liu, M., Boedecker, J., & Burgard, W. (2017). Neural slam: Learning to explore with external memory.

2016

  • Heller, S., Kroener, M., Woias, P., Donos, C., Manzouri, F., Lachner-Piza, D., Schulze-Bonhage, A., Duempelmann, M., Blum, M., & Boedecker, J. (2016). On the way to a self-sufficient closed-loop implant for early seizure detection. Biomedical Engineering/Biomedizinische Technik 61, no. s1: 133-136.
  • Kumar, S. S., Wülfing, J., Okujeni, S., Boedecker, J., Riedmiller, M., & Egert, U. (2016). Autonomous Optimization of Targeted Stimulation of Neuronal Networks. PLoS computational biology, 12(8), e1005054. web
  • Springenberg, J. T., Wilmes, K. A., & Boedecker, J. (2016). Towards Local Learning and MCMC Inference in Biologically Plausible Deep Generative Networks. In NIPS Workshop Brains and Bits: Neuroscience Meets Machine Learning. PDF

2015

  • Böhmer, W., Springenberg, J. T., Boedecker, J., Riedmiller, M., & Obermayer, K. (2015). Autonomous Learning of State Representations for Control: An Emerging Field Aims to Autonomously Learn State Representations for Reinforcement Learning Agents from Their Real-World Sensor Observations. KI - Künstliche Intelligenz pp. 1-10. Springer Berlin Heidelberg. doi web
  • Watter, M., Springenberg, J., Boedecker, J., & Riedmiller, M. (2015). Embed to Control: A Locally Linear Latent Dynamics Model for Control from Raw Images. In Advances in Neural Information Processing Systems 28. pp. 2728–2736. PDF

2014

  • Boedecker, J., Springenberg, J. T., Wülfing, J., & Riedmiller, M. (2014). Approximate real-time optimal control based on sparse gaussian process models. In 2014 IEEE symposium on adaptive dynamic programming and reinforcement learning (ADPRL) (pp. 1-8). IEEE. PDF
  • Kumar, S. S., Wülfing, J., Boedecker, J., Wimmer, R., Riedmiller, M., Becker, B., & Egert, U. (2014, July). Autonomous control of network activity. In Proc. of the 9th Int’l Meeting on Substrate-Integrated Microelectrode Arrays (MEA). PDF
  • Obst, O., & Boedecker, J. (2014). Guided Self-Organization of Input-Driven Recurrent Neural Networks. In Guided Self-Organization: Inception. pp. 319-340. Springer Berlin Heidelberg. doi web

2013

  • Boedecker, J., Lampe, T., & Riedmiller, M. (2013). Modeling effects of intrinsic and extrinsic rewards on the competition between striatal learning systems. Frontiers in psychology, 4, 61581. doi web
  • Obst, O., Boedecker, J., Schmidt, B., & Asada, M. (2013). On active information storage in input-driven systems. web

2012

  • Boedecker, J., Obst, O., Kashima, Y., & Asada, M. (March 29-30 2012). Intrinsic computational capabilities of reservoir computing networks in different dynamics regimes and their relation to task performance. Lyon, France.
  • Boedecker, J., Obst, O., Lizier, J. T., Mayer, N. M., & Asada, M. (2012). Information processing in echo state networks at the edge of chaos. Theory in Biosciences 131 (3) pp. 205–213. Springer Berlin / Heidelberg. doi web
  • Hartmann, C., Boedecker, J., Obst, O., Ikemoto, S., & Asada, M. (2012). Real-Time Inverse Dynamics Learning for Musculoskeletal Robots based on Echo State Gaussian Process Regression. In Proceedings of Robotics: Science and Systems. Sydney, Australia.

2011

  • Grzyb, B. J., Boedecker, J., Asada, M., & del Pobil, A. P. (September 2011). Elevated activation of dopaminergic brain areas facilitates behavioral state transition. In IROS 2011 Workshop on Cognitive Neuroscience Robotics.
  • Grzyb, B. J., Boedecker, J., Asada, M., del Pobil, A. P., & Smith, L. B. (2011). Between Frustration and Elation: Sense of Control Regulates the Intrinsic Motivation for Motor Learning.
  • Grzyb, B. J., Boedecker, J., Asada, M., del Pobil, A. P., & Smith, L. B. (2011). Trying anyways: how ignoring the errors may help in learning new skills. In First Joint IEEE International Conference on Development and Learning and on Epigenetic Robotics.
  • Boedecker, J. (2011). Echo State Network Reservoir Shaping and Information Dynamics at the Edge of Chaos. Osaka, Japan.

2010

  • Obst, O., Boedecker, J., & Asada, M. (2010). Improving Recurrent Neural Network Performance Using Transfer Entropy. In Neural Information Processing Models and Applications. pp. 193–200. Springer. web

2009

  • Boedecker, J., Obst, O., Mayer, N. M., & Asada, M. (2009). Initialization and Self-Organized Optimization of Recurrent Neural Network Connectivity. HFSP Journal 3 (5) pp. 340–349. doi web
  • Boedecker, J., Obst, O., Mayer, N. M., & Asada, M. (2009). Studies on Reservoir Initialization and Dynamics Shaping in Echo State Networks. In Proceedings of the 17th European Symposium On Artificial Neural Networks (ESANN'09). pp. 227–232. D-Side Publications. Evere, Belgium. web
  • Mayer, N. M., Boedecker, J., & Asada, M. (2009). Robot motion description and real-time management with the Harmonic Motion Description Protocol. Robotics and Autonomous Systems 57 (8) pp. 870-876. web

2008

  • Boedecker, J., & Asada, M. (2008). SimSpark – Concepts and Application in the 3D Soccer Simulation League. In Workshop on The Universe of RoboCup Simulators at SIMPAR 2008. web
  • da Silva Guerra, R., Boedecker, J., Mayer, N., Yanagimachi, S., Hirosawa, Y., Yoshikawa, K., Namekawa, M., & Asada, M. (2008). Introducing physical visualization sub-league. In RoboCup 2007: Robot Soccer World Cup XI 11 (pp. 496-503). Springer Berlin Heidelberg.
  • Mayer, N. M., Boedecker, J., Masui, K., Ogino, M., & Asada, M. (2008). HMDP: A new protocol for motion pattern generation towards behavior abstraction. In RoboCup 2007: Robot Soccer World Cup XI 11 (pp. 184-195). Springer Berlin Heidelberg. web

2007

  • da Silva Guerra, R., Boedecker, J., & Asada, M. (2007). Physical Visualization Sub-League: A New Platform for Research and Edutainment. pp. 15–20.
  • da Silva Guerra, R., Boedecker, J., Yanagimachi, S., & Asada, M. (2007). Introducing a New Minirobotics Platform for Research and Edutainment. In Proceedings of the 4th International Symposium on Autonomous Minirobots for Research and Edutainment.
  • da Silva Guerra, R., Boedecker, J., Mayer, N. M., Yanagimachi, S., Ishiguro, H., & Asada, M. (2007). A new minirobotics system for teaching and researching agent-based programming. In CATE '07: Proceedings of the 10th IASTED International Conference on Computers and Advanced Technology in Education. pp. 39–44. ACTA Press. Anaheim, CA, USA.
  • Mayer, N. M., Boedecker, J., & Asada, M. (2007). On Standardization in the RoboCup Soccer Humanoids Leagues. web
  • Mayer, N. M., Boedecker, J., da Silva Guerra, R., Obst, O., & Asada, M. (2007). 3D2Real: Simulation league finals in real robots. In RoboCup 2006: Robot Soccer World Cup X 10 (pp. 25-34). Springer Berlin Heidelberg. web

2006

  • Asada, M., Mayer, N. M., Boedecker, J., Ogino, M., & Fuke, S. (2006). The RoboCup Soccer Humanoid League: Overview and Outlook. web Bibtex
  • Boedecker, J., Mayer, N. M., Ogino, M., da Silva Guerra, R., Kikuchi, M., & Asada, M. (2006). Getting closer: How Simulation and Humanoid League can benefit from each other. In Proceedings of the 3rd International Symposium on Autonomous Minirobots for Research and Edutainment (AMiRE 2005). pp. 93-98. Springer. web
  • Obst, O., & Boedecker, J. (2006). Flexible Coordination of Multiagent Team Behavior using HTN Planning. In RoboCup 2005: Robot Soccer World Cup IX. pp. 521-528. Springer. web Bibtex

2005

  • Obst, O., Maas, A., & Boedecker, J. (Jul 2005). HTN Planning for Flexible Coordination Of Multiagent Team Behavior. pp. 87–94. Edinburgh, Scotland.