DARPA’s New Unmanned Aerial System: The Gremlin

As the technology behind Artificial Intelligence has advanced exponentially, the DoD's advanced research division, DARPA (Defense Advanced Research Projects Agency), has been working diligently on ways to build the growing tech into a swarm of unmanned drones. Working with the defense contractor Dynetics, DARPA revealed the design in early May 2018. The Gremlin drones are meant to work in small teams, or even "swarms," with complex data links that allow them to communicate with each other and with a ground station, or even an F-35, to relay information about potential targets, anti-aircraft weapons, and other hostile forces behind enemy lines.

What sets the Gremlins apart from other Unmanned Aerial Vehicles (UAVs) is their ability to be both deployed and recovered by an aircraft already in flight. The drones are currently set up to be deployed from a C-130 Hercules, up to four at a time. In the future, however, almost any US aircraft could deploy a Gremlin. Because of their small size and large numbers, these drones could either slip past or overwhelm areas protected by advanced air defense systems.

Because the Gremlins are relatively cheap compared to other UAVs, the Air Force can more readily send them into heavily defended areas to gather intelligence and reconnaissance, rather than risking more expensive UAVs or manned aircraft. The Gremlins will undoubtedly become a valuable asset as soon as they are deemed field ready. Dynetics is still finalizing testing of recovery and other aspects of the mission, but it estimates the Gremlins will be ready for flight by late 2019.

Attached is a short YouTube video by Dynetics that gives a great look at the Gremlin.

Business Insider has a great overview of the Gremlin Project. https://www.businessinsider.com/darpa-releases-video-showing-how-its-gremlin-drone-program-will-work-2018-5

5 thoughts on “DARPA’s New Unmanned Aerial System: The Gremlin”

  1. The idea of implementing AI in drones is really interesting, and I believe it is going to be very beneficial for reconnaissance. However, people are also thinking about using this technology for attack and defense missions, and I think they underestimate the possible consequences of using it to kill people. I agree that AI is a very sophisticated and powerful technology, but it is still not reliable enough to be given responsibility for destroying targets. An AI system teaches itself using rules created by humans. In most cases it learns what we expect it to learn, but unfortunately, in real life, there is no guarantee of that. AI technology basically consists of a bunch of 0’s and 1’s, and it tries to evaluate those binary numbers to make one accurate decision. Unfortunately, it is hard to represent a dynamically changing environment with those binary numbers, because in real life “Things are not quite so simple always as black and white” (Doris Lessing). There are also lots of grey areas in our lives, and with current technology we cannot show those grey areas to AI systems. Therefore, we cannot expect these drones to make the best decisions in critical situations without giving them the ability to see the grey areas in life. Until scientists find a more reliable solution to this problem, I think we shouldn’t use AI-equipped drones to destroy targets.

    https://www.tandfonline.com/doi/pdf/10.1080/00963402.2017.1290879

  2. This is a very interesting topic. AI has limitless military applications, but there always seems to be a debate over anything AI-related when it comes to serious topics like the military. I read an article ( https://eandt.theiet.org/content/articles/2018/06/google-announces-end-to-controversial-military-ai-project/ ) about Google having to cancel its “Project Maven” contract with the Pentagon due to controversy over using its AI for military purposes. Basically, Project Maven planned to automate reconnaissance analysis for drones and aerial imagery: the software could automatically point out objects in what the drones and UAVs were looking at, such as things inside a building. The project drew a lot of controversy from Google employees and other people alike. So overall, even a seemingly non-lethal form of military AI use still had serious controversy associated with it.

    Google is now reviewing its guidelines and ethics for these kinds of projects so that it can perhaps work with the Pentagon again in the future. But like I said, there will always be controversy when it comes to combining AI and the military.

  3. Advanced weaponized AI is a growing and innovative industry, and it can be both dangerous and effective. I can understand the concern for innocent lives being in the targeting zone of many of these drone strikes, and it is an issue to work out. I think these new drones will provide a more efficient and effective way of carrying out operations, thanks to their advanced networking system and impressive recovery mechanisms. It’s very clever that the cargo aircraft don’t need to be completely redesigned to accommodate this new technology. The fact that this new technology provides a “robust, responsive and affordable” solution to AI warfare is a huge leap in the right direction. Of course, no military technology comes cheap, but the fact that the Gremlins are significantly cheaper than most military UAVs helps with the spending we put into military operations.
    While this new technology is surely innovative and useful for our military operations, there is the problem of the technology falling into the wrong hands. Even though there are well-planned ways for the Gremlin to carry out operations and return safely, there is always the chance that an operation fails and the technology from our drones falls into the hands of terrorists. While some countries may not have the budget or the means to create these technologies, countries such as Russia and China certainly do. Of course, these drones will not be used over such countries, but there is always the possibility of terrorists selling information. It’s a slight chance, but it’s a good concern to keep in mind as we create more and more unmanned military aircraft and tech.

  4. I can understand your concern for the preservation of lives, especially when it comes to weaponizing AI technologies. If you read into the Gremlin program, it seems (at least to me) that the AI is being used largely to fly the drones with less direct piloting input than earlier generations of UAVs required. While these drones are certainly intimidating, they are currently meant largely for reconnaissance and intelligence gathering rather than for strike missions, much like almost all of the early military drones. The point is to keep our pilots out of regions that are unknown or dangerous.
    Drones, like all military technology, are constantly improving, and with this improvement come more precise strikes. I truly hope that we can move toward more diplomacy and peaceful talks, but as air defense systems and radar rapidly increase in capability (including the ability to detect stealth aircraft), we need to work on ways to counter these systems without putting our pilots and hundred-million-dollar jets in jeopardy.

  5. This is very interesting to me because I always thought it was only a matter of time before countries tried to weaponize AI. While you may find it interesting that the Gremlin and the US military are using newfound technologies to create a new type of global powerhouse weapon, it is also terrifying. I am going to take an ethical look at this technology, because we can never forget ethics when discussing new technologies. Drones in theory are great: no one’s life has to be risked to carry out missions, whether those missions are intelligence gathering or bombings. The problem is that these technologies, while effective, have posed moral problems: innocent bystanders killed because they were in the same building as a target, or even simple misfires. The biggest problem with drones, in my opinion, is that they go into places, carry out their missions, and those missions are never explained. This breeds hatred among the people who have come into contact with them. The idea of flying aircraft carriers, to me, simply means that more drones will be used and more innocent lives taken. I understand that they are now outfitted with artificial intelligence, but something tells me that the people designing this are more interested in the destruction of the bad than the preservation of the good.
    I also fear that technologies like this mean we are moving further and further away from trying to solve problems diplomatically. The approach these new technologies offer looks to me like: go in, gather intelligence on a potential threat, and then blow it up, all before they even know what hit them.
    “We found that one in five of the coalition strikes we identified resulted in civilian death, a rate more than 31 times that acknowledged by the coalition.” This technology seems to have no answer for this problem.

    https://www.nytimes.com/interactive/2017/11/16/magazine/uncounted-civilian-casualties-iraq-airstrikes.html
