- Title: Deep neural networks for robot vision in evolutionary robotics
- Creator: Watt, Nathan
- Subject: Gqeberha (South Africa)
- Subject: Eastern Cape (South Africa)
- Subject: Neural networks (Computer science)
- Date: 2021-04
- Type: Master's theses
- Type: text
- Identifier: http://hdl.handle.net/10948/52100
- Identifier: vital:43448
- Description:
Advances in electronics manufacturing have made robots and their sensors cheaper and more accessible. Robots can have a variety of sensors, such as touch sensors, distance sensors and cameras. A robot's controller is the software which interprets its sensors and determines how the robot will behave. The difficulty of programming robot controllers increases with complex robots and complicated tasks, forming a barrier to deploying robots for real-world applications. Robot controllers can be automatically created with Evolutionary Robotics (ER). ER makes use of an Evolutionary Algorithm (EA) to evolve controllers to complete a particular task: instead of manually programming a controller, an EA can evolve one when provided with the robot's task. ER has been used to evolve controllers for many different kinds of robots with a variety of sensors; however, the use of robots with on-board camera sensors has been limited. The nature of EAs makes evolving a controller for a camera-equipped robot particularly difficult. There are two main challenges which complicate the evolution of vision-based controllers. First, every image from a camera contains a large amount of information, and a controller needs many parameters to receive that information; however, it is difficult to evolve controllers with such a large number of parameters using EAs. Second, during the process of evolution, it is necessary to evaluate the fitness of many candidate controllers. This is typically done in simulation; however, creating a simulator for a camera sensor is a tedious and time-consuming task, as building a photo-realistic simulated environment requires hand-crafted 3-dimensional models, textures and lighting. Two techniques have been used in previous experiments to overcome the challenges associated with evolving vision-based controllers: either the controller was provided with extremely low-resolution images, or a task-specific algorithm was used to preprocess the images, providing only the necessary information to the controller.
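The two challenges in the abstract can be made concrete with a small sketch. The Python snippet below is illustrative only and is not the method developed in the thesis: it evolves the weights of a tiny feed-forward controller that maps a low-resolution greyscale image to two wheel speeds. The image resolution, network sizes and the placeholder fitness function are assumptions chosen for illustration; a real ER experiment would evaluate fitness by running each candidate controller in a robot simulator.

```python
# Minimal sketch, assuming a 16x16 greyscale camera and a one-hidden-layer
# controller. Not the thesis' actual method; it only illustrates how the
# number of evolvable parameters grows with image resolution, and the shape
# of a basic evolutionary loop (evaluate, select, mutate).
import numpy as np

RES = 16                      # 16x16 input image (assumed low resolution)
N_INPUT = RES * RES           # 256 pixel inputs
N_HIDDEN = 8
N_OUTPUT = 2                  # left and right wheel speeds

def n_params():
    # Weights and biases of the whole controller.
    return N_INPUT * N_HIDDEN + N_HIDDEN + N_HIDDEN * N_OUTPUT + N_OUTPUT

def controller(params, images):
    """Map flattened images to wheel speeds using an evolved weight vector."""
    split = N_INPUT * N_HIDDEN
    w1 = params[:split].reshape(N_INPUT, N_HIDDEN)
    b1 = params[split:split + N_HIDDEN]
    rest = params[split + N_HIDDEN:]
    w2 = rest[:N_HIDDEN * N_OUTPUT].reshape(N_HIDDEN, N_OUTPUT)
    b2 = rest[N_HIDDEN * N_OUTPUT:]
    hidden = np.tanh(images @ w1 + b1)
    return np.tanh(hidden @ w2 + b2)

def fitness(params, rng):
    # Placeholder fitness: in a real experiment this would run the controller
    # in a simulated environment and score task completion. Here we simply
    # reward high forward speed on random images, purely for illustration.
    images = rng.random((10, N_INPUT))
    return float(np.mean(controller(params, images)))

def evolve(generations=50, pop_size=20, sigma=0.1, seed=0):
    rng = np.random.default_rng(seed)
    pop = rng.normal(0.0, 0.5, size=(pop_size, n_params()))
    for _ in range(generations):
        scores = np.array([fitness(ind, rng) for ind in pop])
        elite = pop[np.argsort(scores)[-pop_size // 4:]]          # keep best 25%
        children = elite[rng.integers(0, len(elite), pop_size)]   # resample parents
        pop = children + rng.normal(0.0, sigma, children.shape)   # mutate
    return pop[np.argmax([fitness(ind, rng) for ind in pop])]

best = evolve()
print("parameters per controller:", n_params())   # 2074 even for a 16x16 image
```

Even this toy 16x16 controller already has over 2,000 parameters, which hints at why higher-resolution cameras quickly push the search space beyond what a plain EA handles comfortably, and why the placeholder fitness function would in practice need a full camera simulator.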
- Description: Thesis (MSc) -- Faculty of Science, School of Computer Science, Mathematics, Physics and Statistics, 2021
- Format: computer
- Format: online resource
- Format: application/pdf
- Format: 1 online resource (xiii, 151 pages)
- Publisher: Nelson Mandela University
- Publisher: Faculty of Science
- Language: English
- Rights: Nelson Mandela University
- Rights: All Rights Reserved
- Rights: Open Access