The official code of the paper Hierarchical Multi-Robot Pursuit with Deep Reinforcement Learning and Navigation Planning (YAC 2024)


Installation

  • Clone this repo to your machine.
  • Install Unity Hub following the instructions at this link.
  • Install Unity 2022.3.4f1 in Unity Hub at this link. Find the correct version in the list and click the blue "Unity Hub" button.
  • Add the path '/Project' as a new project in Unity Hub, then open the project.
  • In the explorer below the Project panel, navigate to "Assets/ML-Agents/Examples/CarCatching/Scenes". Then drag the CarCatching.unity file into the Hierarchy panel at the top left.
  • Click the Play button ("▷") at the top middle. You will then see three agent robots (blue) catching one enemy robot (purple), as the following image shows: An example of trajectory

Training

  • Install the ML-Agents Python package and the ML-Agents Unity extension following the steps at this link.
  • Click "File->Build Settings". In the window that opens, add the scene 'ML-Agents/Examples/CarCatching/Scenes/CarCatching' to "Scenes In Build". Then choose the target platform. Finally, click the Build button at the bottom right.
  • Open a terminal and change directory to the root path of this repo. If you use conda, activate the Python environment with ML-Agents installed.
  • Run this command to start training:
mlagents-learn config/ppo/Catching.yaml --env=./Builds/CarCatching/CarCatching.x86_64 --run-id=v8.0.0 --width=2000 --height=1000

  • If you run the above command multiple times, change '--run-id=v8.0.0' to a different run ID each time.
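The launch steps above can be combined into a small script. This is a minimal sketch, assuming a conda environment named `mlagents` (the environment name is hypothetical; use your own) and the build path from the command above; it derives a fresh run ID from a timestamp so repeated launches never collide with an existing run ID:

```shell
# Activate the Python environment that has ML-Agents installed
# (the environment name "mlagents" is an assumption -- substitute yours).
conda activate mlagents

# Derive a unique run ID from the current Unix timestamp so that
# repeated launches never reuse an existing run ID.
RUN_ID="v8.0.$(date +%s)"

mlagents-learn config/ppo/Catching.yaml \
  --env=./Builds/CarCatching/CarCatching.x86_64 \
  --run-id="$RUN_ID" --width=2000 --height=1000
```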

Note

  • The Python files whose names begin with "tensorboard_" are scripts used to draw the figures in the paper's experiment section.
  • Below the separating line is the original README of the Unity ML-Agents Toolkit.
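To inspect training curves before regenerating the paper figures, you can point TensorBoard at the training results. A minimal sketch, assuming the default ML-Agents layout in which `mlagents-learn` writes summaries under `results/<run-id>`:

```shell
# mlagents-learn writes TensorBoard summaries under results/<run-id> by default.
RUN_ID="v8.0.0"            # substitute the run ID you trained with
LOGDIR="results/$RUN_ID"

# Serve the dashboards on http://localhost:6006
tensorboard --logdir "$LOGDIR" --port 6006
```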

Unity ML-Agents Toolkit


The Unity Machine Learning Agents Toolkit (ML-Agents) is an open-source project that enables games and simulations to serve as environments for training intelligent agents. We provide implementations (based on PyTorch) of state-of-the-art algorithms to enable game developers and hobbyists to easily train intelligent agents for 2D, 3D and VR/AR games. Researchers can also use the provided simple-to-use Python API to train Agents using reinforcement learning, imitation learning, neuroevolution, or any other methods. These trained agents can be used for multiple purposes, including controlling NPC behavior (in a variety of settings such as multi-agent and adversarial), automated testing of game builds and evaluating different game design decisions pre-release. The ML-Agents Toolkit is mutually beneficial for both game developers and AI researchers as it provides a central platform where advances in AI can be evaluated on Unity’s rich environments and then made accessible to the wider research and game developer communities.

Features

  • 17+ example Unity environments
  • Support for multiple environment configurations and training scenarios
  • Flexible Unity SDK that can be integrated into your game or custom Unity scene
  • Support for training single-agent, multi-agent cooperative, and multi-agent competitive scenarios via several Deep Reinforcement Learning algorithms (PPO, SAC, MA-POCA, self-play).
  • Support for learning from demonstrations through two Imitation Learning algorithms (BC and GAIL).
  • Quickly and easily add your own custom training algorithm and/or components.
  • Easily definable Curriculum Learning scenarios for complex tasks
  • Train robust agents using environment randomization
  • Flexible agent control with On Demand Decision Making
  • Train using multiple concurrent Unity environment instances
  • Utilizes the Unity Inference Engine to provide native cross-platform support
  • Unity environment control from Python
  • Wrap Unity learning environments as a gym environment
  • Wrap Unity learning environments as a PettingZoo environment

See our ML-Agents Overview page for detailed descriptions of all these features. Or go straight to our web docs.

Releases & Documentation

Our latest, stable release is Release 21. Click here to get started with the latest release of ML-Agents.

You can also check out our new web docs!

The table below lists all our releases, including our main branch which is under active development and may be unstable. A few helpful guidelines:

  • The Versioning page overviews how we manage our GitHub releases and the versioning process for each of the ML-Agents components.
  • The Releases page contains details of the changes between releases.
  • The Migration page contains details on how to upgrade from earlier releases of the ML-Agents Toolkit.
  • The Documentation links in the table below include installation and usage instructions specific to each release. Remember to always use the documentation that corresponds to the release version you're using.
  • The com.unity.ml-agents package is verified for Unity 2020.1 and later. Verified package releases are numbered 1.0.x.
Version            | Release Date    | Source | Documentation | Download | Python Package | Unity Package
develop (unstable) | --              | source | docs          | download | --             | --
Release 21         | October 9, 2023 | source | docs          | download | 1.0.0          | 3.0.0

If you are a researcher interested in a discussion of Unity as an AI platform, see a pre-print of our reference paper on Unity and the ML-Agents Toolkit.

If you use Unity or the ML-Agents Toolkit to conduct research, we ask that you cite the following paper as a reference:

@article{juliani2020,
  title={Unity: A general platform for intelligent agents},
  author={Juliani, Arthur and Berges, Vincent-Pierre and Teng, Ervin and Cohen, Andrew and Harper, Jonathan and Elion, Chris and Goy, Chris and Gao, Yuan and Henry, Hunter and Mattar, Marwan and Lange, Danny},
  journal={arXiv preprint arXiv:1809.02627},
  url={https://arxiv.org/pdf/1809.02627.pdf},
  year={2020}
}

Additionally, if you use the MA-POCA trainer in your research, we ask that you cite the following paper as a reference:

@article{cohen2022,
  title={On the Use and Misuse of Absorbing States in Multi-agent Reinforcement Learning},
  author={Cohen, Andrew and Teng, Ervin and Berges, Vincent-Pierre and Dong, Ruo-Ping and Henry, Hunter and Mattar, Marwan and Zook, Alexander and Ganguly, Sujoy},
  journal={RL in Games Workshop AAAI 2022},
  url={http://aaai-rlg.mlanctot.info/papers/AAAI22-RLG_paper_32.pdf},
  year={2022}
}

Additional Resources

We have a Unity Learn course, ML-Agents: Hummingbirds, that provides a gentle introduction to Unity and the ML-Agents Toolkit.

We've also partnered with CodeMonkeyUnity to create a series of tutorial videos on how to implement and use the ML-Agents Toolkit.

We have also published a series of blog posts that are relevant to ML-Agents.

Community and Feedback

The ML-Agents Toolkit is an open-source project and we encourage and welcome contributions. If you wish to contribute, be sure to review our contribution guidelines and code of conduct.

For problems with the installation and setup of the ML-Agents Toolkit, or discussions about how best to set up or train your agents, please create a new thread on the Unity ML-Agents forum and include as much detail as possible. If you run into any other problems using the ML-Agents Toolkit or have a specific feature request, please submit a GitHub issue.

Please tell us which samples you would like to see shipped with the ML-Agents Unity package by replying to this forum thread.

Your opinion matters a great deal to us. Only by hearing your thoughts on the Unity ML-Agents Toolkit can we continue to improve and grow. Please take a few minutes to let us know about it.

For any other questions or feedback, connect directly with the ML-Agents team at [email protected].

Privacy

In order to improve the developer experience for Unity ML-Agents Toolkit, we have added in-editor analytics. Please refer to "Information that is passively collected by Unity" in the Unity Privacy Policy.
