

COMP9069 - Robotics & Autonomous Systems

Title: Robotics & Autonomous Systems
Long Title: Robotics & Autonomous Systems
Module Code: COMP9069
Credits: 5
NFQ Level: Expert
Field of Study: Computer Science
Valid From: Semester 1 - 2018/19 (September 2018)
Module Delivered in: no programmes
Module Coordinator: Tim Horgan
Module Author: Sean McSweeney
Module Description: The application of reinforcement learning to robotics and autonomous systems has the potential to transform industries such as manufacturing, construction and logistics. Traditional robot design requires highly controlled, largely stationary environments for correct operation; integrating reinforcement learning into robotic systems allows robots to overcome this constraint and operate in unconstrained environments. Reinforcement learning in these systems produces robots that can modify their behaviour as environmental conditions change, continuously improve their operation and adapt to system failures. This module focuses on the application of reinforcement learning to both articulated systems (e.g. robotic arms and walking robots) and autonomous systems (e.g. quad-copters and rovers).
Learning Outcomes
On successful completion of this module the learner will be able to:
LO1 Develop and simulate models for articulated and autonomous robotic systems.
LO2 Evaluate the applicability of reinforcement learning in robotics.
LO3 Adapt reinforcement learning algorithms to robotic motion control and autonomous applications.
LO4 Appraise the application of deep reinforcement learning to robotic systems.
Pre-requisite learning
Module Recommendations
This is prior learning (or a practical skill) that is strongly recommended before enrolment in this module. You may enrol in this module if you have not acquired the recommended learning but you will have considerable difficulty in passing (i.e. achieving the learning outcomes of) the module. While the prior learning is expressed as named CIT module(s) it also allows for learning (in another module or modules) which is equivalent to the learning specified in the named module(s).
No recommendations listed
Incompatible Modules
These are modules which have learning outcomes that are too similar to the learning outcomes of this module. You may not earn additional credit for the same learning and therefore you may not enrol in this module if you have successfully completed any modules in the incompatible list.
No incompatible modules listed
Co-requisite Modules
No Co-requisite modules listed
Requirements
This is prior learning (or a practical skill) that is mandatory before enrolment in this module is allowed. You may not enrol on this module if you have not acquired the learning specified in this section.
No requirements listed
Co-requisites
No Co Requisites listed
 

Module Content & Assessment

Indicative Content
Modelling and Simulating Robots and Autonomous Systems
Spatial descriptions and transformations, forward kinematics, inverse kinematics, Jacobian matrices, modelling non-rigid robots, autonomous system kinematics. Uncertainty in robotic models. Simulation and programming tools and environments such as V-REP, ROS and Gazebo.
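As a concrete illustration of the forward and inverse kinematics topics above, the planar two-link arm admits closed-form solutions. The sketch below is a minimal, hypothetical example (the link lengths, function names and the elbow-down convention are choices of this illustration, not part of the syllabus):

```python
import math

def forward_kinematics_2link(theta1, theta2, l1=1.0, l2=1.0):
    """End-effector position (x, y) of a planar 2-link arm, angles in radians."""
    x = l1 * math.cos(theta1) + l2 * math.cos(theta1 + theta2)
    y = l1 * math.sin(theta1) + l2 * math.sin(theta1 + theta2)
    return x, y

def inverse_kinematics_2link(x, y, l1=1.0, l2=1.0):
    """One (elbow-down) joint-angle solution reaching (x, y), via the law of cosines."""
    c2 = (x**2 + y**2 - l1**2 - l2**2) / (2 * l1 * l2)
    theta2 = math.acos(max(-1.0, min(1.0, c2)))  # clamp against rounding error
    theta1 = math.atan2(y, x) - math.atan2(l2 * math.sin(theta2),
                                           l1 + l2 * math.cos(theta2))
    return theta1, theta2
```

Composing the two functions round-trips any reachable target point, which is a useful sanity check when validating a simulated arm model.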
Reinforcement Learning
Elements of RL, Finite Markov Decision Processes, Policies and Value Functions, Partially Observable MDPs, Inverse Reinforcement Learning, Bellman Equations, Optimal Value Functions, Model-Based vs Model-Free Algorithms, Dynamic Programming, Monte Carlo Methods, Temporal-Difference Prediction and Q-Learning.
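The Q-learning and Bellman-equation material above can be illustrated with a tabular agent on a toy problem. Everything in the sketch below (the five-state chain MDP, the hyperparameters) is a hypothetical example chosen for illustration:

```python
import random

# Tabular Q-learning on a toy five-state chain MDP (hypothetical example):
# states 0..4; action 0 moves left, action 1 moves right; reward +1 at state 4.
N_STATES, GOAL = 5, 4
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.3

def step(state, action):
    nxt = max(0, state - 1) if action == 0 else min(GOAL, state + 1)
    return nxt, (1.0 if nxt == GOAL else 0.0), nxt == GOAL

random.seed(0)
Q = [[0.0, 0.0] for _ in range(N_STATES)]
for _ in range(400):
    s, done = 0, False
    while not done:
        # epsilon-greedy action selection
        if random.random() < EPSILON:
            a = random.randrange(2)
        else:
            a = 0 if Q[s][0] >= Q[s][1] else 1
        s2, r, done = step(s, a)
        # Q-learning update: a sampled form of the Bellman optimality equation,
        # Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
        Q[s][a] += ALPHA * (r + GAMMA * max(Q[s2]) - Q[s][a])
        s = s2

greedy_policy = [0 if Q[s][0] >= Q[s][1] else 1 for s in range(GOAL)]
```

After training, the greedy policy moves right in every non-goal state, which is the optimal behaviour for this chain.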
Reinforcement Learning in Robotic Systems
Searching for parametric motor primitives, adapting parametric motor primitives to changing conditions, control prioritisation for motor primitives. Autonomous systems map building, localisation, path planning, obstacle avoidance and navigation in dynamic environments.
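The map-building and path-planning topics above reduce, in their simplest form, to graph search over an occupancy grid. A minimal sketch, assuming a 2-D grid where 1 marks an obstacle (the grid contents and the function name are hypothetical):

```python
from collections import deque

def plan_path(grid, start, goal):
    """Shortest collision-free path on an occupancy grid via breadth-first search.

    Returns the path as a list of (row, col) cells, or None if unreachable.
    """
    rows, cols = len(grid), len(grid[0])
    frontier = deque([start])
    came_from = {start: None}  # also serves as the visited set
    while frontier:
        cell = frontier.popleft()
        if cell == goal:
            path = []
            while cell is not None:          # walk parents back to the start
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        r, c = cell
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (r + dr, c + dc)
            if (0 <= nxt[0] < rows and 0 <= nxt[1] < cols
                    and grid[nxt[0]][nxt[1]] == 0 and nxt not in came_from):
                came_from[nxt] = cell
                frontier.append(nxt)
    return None  # goal unreachable

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
path = plan_path(grid, (0, 0), (2, 0))
```

The wall in the middle row forces the planner around the right-hand column; dynamic environments would re-run the search as the occupancy grid is updated.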
Deep Reinforcement Learning in Robotics
Radial Basis Function Artificial Neural Networks, Policy Gradient, TD(λ), and Deep Q-Learning applications in robotic systems. Usage of OpenAI Gym and TensorFlow.
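A policy-gradient update of the kind listed above can be shown in miniature on a two-armed bandit. The REINFORCE-style sketch below uses a softmax policy; the arm payout probabilities and hyperparameters are hypothetical, and a real exercise would train against OpenAI Gym environments instead:

```python
import math
import random

# REINFORCE-style policy gradient on a two-armed bandit (hypothetical toy
# problem; the arm payout means and learning rate are illustrative choices).
random.seed(1)
theta = [0.0, 0.0]        # softmax preferences, one per arm
ALPHA = 0.1
TRUE_MEANS = [0.2, 0.8]   # arm 1 pays off more often

def softmax(prefs):
    exps = [math.exp(p - max(prefs)) for p in prefs]  # shift for stability
    total = sum(exps)
    return [e / total for e in exps]

for _ in range(2000):
    probs = softmax(theta)
    a = 0 if random.random() < probs[0] else 1
    reward = 1.0 if random.random() < TRUE_MEANS[a] else 0.0
    # grad of log pi(a) w.r.t. theta_i is (1{i == a} - pi_i) for a softmax policy
    for i in range(2):
        theta[i] += ALPHA * reward * ((1.0 if i == a else 0.0) - probs[i])

final_probs = softmax(theta)
```

The policy shifts probability mass toward the better-paying arm; deep variants replace the preference table with a neural network and the same log-probability gradient.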
Assessment Breakdown | %
Course Work | 100.00%
Course Work
Assessment Type | Assessment Description | Outcome addressed | % of total | Assessment Date
Project | Develop a simulation model of an articulated or autonomous robotic system and evaluate the fidelity of the model developed. | 1,2 | 40.0 | Week 7
Project | Apply reinforcement learning to a robotic or autonomous system, iterating and evaluating the methodology applied against environmental and system changes. | 3,4 | 60.0 | Sem End
No End of Module Formal Examination
Reassessment Requirement
Coursework Only
This module is reassessed solely on the basis of re-submitted coursework. There is no repeat written examination.

The institute reserves the right to alter the nature and timings of assessment.

 

Module Workload

Workload: Full Time
Workload Type | Workload Description | Hours | Frequency | Average Weekly Learner Workload
Lecture | Lecture delivering theory underpinning learning outcomes. | 2.0 | Every Week | 2.00
Lab | Practical computer-based lab supporting learning outcomes. | 2.0 | Every Week | 2.00
Independent & Directed Learning (Non-contact) | Independent & directed learning | 3.0 | Every Week | 3.00
Total Hours: 7.00
Total Weekly Learner Workload: 7.00
Total Weekly Contact Hours: 4.00
Workload: Part Time
Workload Type | Workload Description | Hours | Frequency | Average Weekly Learner Workload
Lecture | Lecture delivering theory underpinning learning outcomes. | 2.0 | Every Week | 2.00
Lab | Practical computer-based lab supporting learning outcomes. | 2.0 | Every Week | 2.00
Independent & Directed Learning (Non-contact) | Independent & directed learning | 3.0 | Every Week | 3.00
Total Hours: 7.00
Total Weekly Learner Workload: 7.00
Total Weekly Contact Hours: 4.00
 

Module Resources

Recommended Book Resources
  • Sutton, Richard S. and Barto, Andrew G. 1998, Reinforcement Learning: An Introduction, MIT Press, Cambridge, MA [ISBN: 9780262193986]
Supplementary Book Resources
  • Jens Kober and Jan Peters 2014, Learning Motor Skills: From Algorithms to Robot Experiments, Springer International Publishing [ISBN: 9783319031941]
  • Todd Hester 2013, TEXPLORE: Temporal Difference Reinforcement Learning for Robots and Time-Constrained Domains, Springer International Publishing [ISBN: 9783319011677]
Recommended Article/Paper Resources
  • Kober, Jens and Bagnell, J Andrew and Peters, Jan 2013, Reinforcement learning in robotics: A survey, The International Journal of Robotics Research, 32, no 11, pp 1238-1274
  • Cully, Antoine and Clune, Jeff and Tarapore, Danesh and Mouret, Jean-Baptiste 2015, Robots that can adapt like animals, Nature, 521, pp 503-507
  • Ijspeert, Auke Jan 2008, Central pattern generators for locomotion control in animals and robots: a review, Neural Networks, Vol 21, No 4, pp 642-653
Supplementary Article/Paper Resources
  • Chatzilygeroudis, Konstantinos and Rama, Roberto and Kaushik, Rituraj and Goepp, Dorian and Vassiliades, Vassilis and Mouret, Jean-Baptiste 2017, Black-Box Data-efficient Policy Search for Robotics, IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)
  • Cutler, Mark and How, Jonathan P 2015, Efficient reinforcement learning for robots using informative simulated priors, IEEE International Conference on Robotics and Automation (ICRA), pp 2605-2612
  • Abbeel, Pieter and Coates, Adam and Quigley, Morgan and Ng, Andrew Y 2007, An application of reinforcement learning to aerobatic helicopter flight, Advances in neural information processing systems, pp 1-8
Other Resources
 

Cork Institute of Technology
Rossa Avenue, Bishopstown, Cork

Tel: 021-4326100     Fax: 021-4545343
Email: help@cit.edu.ie