Multi-Agent Planning and Learning Lab (MAPLE)
In many situations in life, humans must take the knowledge they gain in one environment and use it in another. One goal for artificially intelligent
agents is to have them do the same as they learn about the environments they are placed in.
As a member of the MAPLE Lab at UMBC, I worked on generalizability with artificially intelligent agents.
More specifically, my research involved knowledge transfer across different domains.
An Object-Oriented GridWorld
The classic GridWorld domain is a navigation task laid out as a grid. The agent can move in the north, south, east, and west directions.
Along with another student, I helped design an object-oriented approach to this domain in which every piece of the domain was an object. This serves
as a good training domain for incoming students to learn the basics of domain design and agent learning.
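To make the object-oriented idea concrete, here is a minimal sketch of such a design. The class names (`Cell`, `Agent`, `GridWorld`) and layout are my own illustration under the assumptions described above, not the lab's actual code.

```python
# Minimal object-oriented GridWorld sketch: every piece of the domain
# (cells, walls, the agent) is its own object. Illustrative only.

class Cell:
    """A single grid square; walls are cells the agent cannot enter."""
    def __init__(self, x, y, is_wall=False):
        self.x, self.y = x, y
        self.is_wall = is_wall

class Agent:
    """The agent is itself an object, tracked by its (x, y) position."""
    def __init__(self, x, y):
        self.x, self.y = x, y

class GridWorld:
    # the four cardinal moves available to the agent
    MOVES = {"north": (0, 1), "south": (0, -1), "east": (1, 0), "west": (-1, 0)}

    def __init__(self, width, height, walls=()):
        self.width, self.height = width, height
        self.cells = {(x, y): Cell(x, y, (x, y) in set(walls))
                      for x in range(width) for y in range(height)}
        self.agent = Agent(0, 0)

    def step(self, action):
        """Move the agent one cell; moves into walls or off the grid are blocked."""
        dx, dy = self.MOVES[action]
        nx, ny = self.agent.x + dx, self.agent.y + dy
        target = self.cells.get((nx, ny))
        if target is not None and not target.is_wall:
            self.agent.x, self.agent.y = nx, ny
        return (self.agent.x, self.agent.y)
```

Because each component is an object, students can extend the domain (new cell types, multiple agents) without rewriting the movement logic.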
RockSample: A Partially Observable Domain
Often, little is known about a domain at the start, so the agent must learn about it as it acts within it.
For example, someone in a dark room must find the lights and figure out what they are touching as they navigate.
One of my projects involved trying to develop a partially observable domain with the same concept.
This domain was called RockSample, based on a previously developed domain.
It involved an agent that traversed a field of rocks and had to collect rocks of a certain value.
The qualities of the rocks were not known until the agent made contact with them.
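The partial observability described above can be sketched as follows. This is a simplified illustration of the idea, with hypothetical names, rather than the original RockSample implementation: each rock has a true quality the agent cannot see until it makes contact.

```python
# Simplified RockSample-style sketch: rock qualities are hidden state,
# revealed only on contact. Illustrative only.
import random

class Rock:
    def __init__(self, pos, good):
        self.pos = pos
        self.good = good      # true quality: hidden from the agent
        self.observed = None  # agent's knowledge: None until contact

class RockSample:
    def __init__(self, rock_positions, seed=0):
        rng = random.Random(seed)
        # each rock is randomly good or bad; the agent does not know which
        self.rocks = {pos: Rock(pos, rng.random() < 0.5) for pos in rock_positions}

    def sample(self, agent_pos):
        """Reveal a rock's quality only when the agent stands on its cell."""
        rock = self.rocks.get(agent_pos)
        if rock is None:
            return None
        rock.observed = rock.good
        return rock.observed
```

The gap between `good` (the world's state) and `observed` (the agent's belief) is what makes the domain partially observable.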
Abstract Markov Decision Processes
Abstract Markov Decision Processes, or AMDPs, are Markov Decision Processes with layers of knowledge abstracted out.
More simply, the agent does not consider certain information until it completes some prerequisite task.
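The idea of deferring information until a prerequisite task is complete can be illustrated with a toy two-level planner. The room names and stubbed cell paths here are my own illustration of the abstraction pattern, not the lab's AMDP implementation.

```python
# Toy two-level abstraction sketch: the high level plans over rooms and
# never looks at grid cells; cells are only considered once a single
# room subtask is active. Illustrative only.

# room-to-room connectivity (hypothetical layout)
HIGH_LEVEL = {"hall": ["kitchen"], "kitchen": ["goal_room"]}

def solve_high_level(start, goal):
    """Plan a room path using only the abstract room graph."""
    path, room = [start], start
    while room != goal:
        room = HIGH_LEVEL[room][0]
        path.append(room)
    return path

def solve_low_level(room):
    """Only now consider the cells inside this one room (stubbed path)."""
    return [f"{room}:cell{i}" for i in range(3)]

def solve(start, goal):
    """Expand each abstract step into concrete cells, one room at a time."""
    plan = []
    for room in solve_high_level(start, goal):
        plan.extend(solve_low_level(room))
    return plan
```

The high-level plan is cheap because it ignores cell-level detail; that detail only enters once the corresponding subtask begins, which is the intuition behind the AMDP hierarchy.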
As an independent study project, I compared the learning rates of different reinforcement learning methods, such as Q-Learning,
MAXQ, and AMDP-based methods. I also increased the scale of the domain to determine how the learning rate was affected by those increases.
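As a point of reference for the comparison, the flat baseline works like this. Below is a minimal tabular Q-Learning sketch; the tiny two-state chain environment is my own illustration, not the domain used in the study.

```python
# Minimal tabular Q-Learning sketch with an epsilon-greedy policy.
# The chain environment below is a stand-in for illustration only.
import random
from collections import defaultdict

def q_learning(env_step, actions, episodes=300, alpha=0.1, gamma=0.95, eps=0.1, seed=0):
    rng = random.Random(seed)
    Q = defaultdict(float)  # Q[(state, action)] -> value, default 0
    for _ in range(episodes):
        s = 0
        for _ in range(100):  # cap episode length
            # epsilon-greedy action selection (random tie-breaking)
            if rng.random() < eps:
                a = rng.choice(actions)
            else:
                a = max(actions, key=lambda act: (Q[(s, act)], rng.random()))
            s2, r, done = env_step(s, a)
            # standard Q-Learning update toward the best next-state value
            best_next = max(Q[(s2, a2)] for a2 in actions)
            Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
            if done:
                break
            s = s2
    return Q

def chain_step(s, a):
    """Two-state chain: 'right' from state 1 reaches the goal (reward 1)."""
    if a == "right":
        if s == 1:
            return (2, 1.0, True)
        return (s + 1, 0.0, False)
    return (max(s - 1, 0), 0.0, False)
```

Plotting reward per episode for learners like this, at several domain sizes, is one simple way to compare flat and hierarchical methods' learning rates.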