Putting the magic in the machine since 1980.

Saturday, December 20, 2008

Multiagent Security

I was recently interviewed by Mark Ingebretsen from IEEE Intelligent Systems about multiagent security, on which I did some work a few years ago. I don't know whether my answers will appear in a future issue or not, so I am posting them here.

Before I answer these questions I want to explain what I mean by a multiagent system and how I envision the future of security using multiagent systems.

Prototype Architecture

When designing a multiagent system, what we actually build is an interaction protocol, which then dictates the agents' strategies. For example, I consider BitTorrent to be the most successful multiagent system currently deployed. BitTorrent consists of an interaction protocol and a number of clients written by anyone who wants to write one. Of course, the agents in BitTorrent (the clients) are human/machine hybrids. The users get to make many decisions, such as which files to upload and download, how long to seed a torrent, how to limit bandwidth, etc. In a pure machine-only multiagent system all those decisions would be made by the software. I don't think we will want many machine-only multiagent systems; most will be hybrids.

In the case of a security multiagent system, I envision an open system where agents act like sensors in a distributed sensor network and publish or relay important information to each other. That is, the first deployment step will be sensing. Once we get that working we can start thinking about having the agents act on that information: shutting down rogue PCs, dropping packets, filtering certain protocols, applying patches, etc. That is, the second deployment step will be acting. Of course, as we give the agents more and more capabilities we need to prepare for the possibility that some agents might start taking actions against the system.
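As a rough sketch of what that first, sensing-only step could look like, here is a toy Python example in which agents record local observations and relay them to known peers. The class names and the relay scheme are my own illustrative assumptions, not a deployed protocol:

```python
# Minimal sketch of the "sensing" deployment step: agents observe local
# events and relay them to known peers. All names here (SecurityAgent,
# Observation) are hypothetical illustrations, not a real protocol.
from dataclasses import dataclass, field
import time

@dataclass
class Observation:
    source: str          # domain of the reporting agent
    kind: str            # e.g. "port-scan", "failed-logins"
    detail: str
    timestamp: float = field(default_factory=time.time)

class SecurityAgent:
    def __init__(self, domain):
        self.domain = domain
        self.peers = []          # other SecurityAgents we relay to
        self.log = []            # observations seen so far

    def sense(self, kind, detail):
        """Publish a local observation and relay it to peers."""
        self.receive(Observation(self.domain, kind, detail))

    def receive(self, obs):
        if obs in self.log:      # drop duplicates so relaying terminates
            return
        self.log.append(obs)
        for peer in self.peers:
            peer.receive(obs)    # step one: relay only; acting comes later

a, b = SecurityAgent("a.example", ), SecurityAgent("b.example")
a.peers.append(b)
a.sense("port-scan", "sequential SYNs from 203.0.113.7")
print(b.log[0].kind)   # the observation reached the remote agent
```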

Are multi-agent systems best used within a single organization's computer network or can they function as effectively if they reside on multiple connected networks? Similarly, should multi-agent systems be allowed to spread freely throughout the Internet (e.g. via voluntary downloads) or is it important that their propagation be strictly controlled?

The best way for multiagent security to be effective is to have one world-wide multiagent security protocol. Organizations would be free to choose whether to participate, and there would be many different levels of participation. For example, at its simplest a company could offer a REST page with data on the security status of its internal network. The data on this page could be more detailed for a local connection than for outside connections, in case the owners are concerned about privacy. The data would be used by agents on each machine, whether local or remote, to detect and stop security threats. The same REST interface might then be extended to allow outside parties to make reports or requests. For example, an outside agent might ask another one to shut down a particular connection coming from its domain because it believes it to be a DoS attack.
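To make the REST page idea concrete, here is a toy server using only the Python standard library. The URL paths, the JSON fields, and the assumption that internal addresses start with "10." are all invented for the example:

```python
# A sketch of the kind of REST status page described above. Nothing here
# is a published protocol; paths and fields are illustrative assumptions.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

LOCAL_PREFIX = "10."   # assumed internal address range

class StatusHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path != "/security/status":
            self.send_error(404)
            return
        summary = {"threat_level": "low"}
        if self.client_address[0].startswith(LOCAL_PREFIX):
            # more detail for local connections than for outside ones
            summary["open_incidents"] = [
                {"kind": "failed-logins", "host": "mail-1", "count": 42}
            ]
        body = json.dumps(summary).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def do_POST(self):
        # Outside agents POST requests here, e.g. "please shut down this
        # connection, I believe it is a DoS attack". We only acknowledge;
        # acting on it would be deployment step two.
        if self.path != "/security/requests":
            self.send_error(404)
            return
        length = int(self.headers.get("Content-Length", 0))
        report = json.loads(self.rfile.read(length) or b"{}")
        print("outside request:", report)
        self.send_response(202)
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("", 8080), StatusHandler).serve_forever()
```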

Thus, I don't see agents “spreading” throughout the Internet. Instead, the protocol will be freely available and organizations will decide which parts of it to implement. Each organization must decide what information to make public, how to use information from others, and how to handle outside requests. The growth of the system is dictated by the individual desires of the participants.

Have multi-agent systems evolved to the extent that they can take collective action to actually halt a network threat? If the answer is yes, what sort of actions do they take? If no, is this a viable goal that we can expect to see implemented at some point?

I am unaware of any deployed systems that take autonomous action based on aggregate data, but there is no technical reason why they cannot exist. One problem is obtaining the system administrators' trust. However, I do expect that as the technology matures and research prototypes demonstrate their capabilities we will see more autonomous security systems.

What are the dangers and possible consequences that might occur if the agents were to misidentify a legitimate communication as a threat? Could the result be a serious slowing of network traffic?

That is exactly the type of problem one must keep in mind when designing an interaction protocol. The simplest way to minimize the threat of error is to minimize the agents' capabilities: if they can't do much, they can't do much damage. As we start giving them more capabilities, such as shutting down computers and networks, the threat of misuse becomes real. At that point we start looking at human management of the multiagent system. That is, the agents should present the system administrator with their case for why they want to perform a given action, but only the administrator's password would allow the system to take that action. Note that this administrator only has control over his own organization's agents.
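A minimal sketch of that human-in-the-loop arrangement might look like the following. The ProposedAction class, the evidence format, and the password check are all stand-ins for whatever a real deployment would use:

```python
# Sketch of human management: the agent builds a case for an action,
# but nothing runs until the administrator approves it. Demo only.
import getpass
import hashlib

ADMIN_HASH = hashlib.sha256(b"correct horse").hexdigest()  # demo password

class ProposedAction:
    def __init__(self, description, evidence, execute):
        self.description = description
        self.evidence = evidence       # the agent's case for acting
        self.execute = execute         # callable, run only on approval

def review(action):
    print("Agent proposes:", action.description)
    for line in action.evidence:
        print("  evidence:", line)
    password = getpass.getpass("Administrator password to approve: ")
    if hashlib.sha256(password.encode()).hexdigest() == ADMIN_HASH:
        action.execute()
    else:
        print("Not approved; no action taken.")

proposal = ProposedAction(
    "Drop all traffic from 203.0.113.7",
    ["3,200 SYN packets/sec for 10 minutes", "no completed handshakes"],
    lambda: print("firewall rule installed"),
)
review(proposal)
```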

Are there provisions built in allowing some sort of universal override of the agents' collective actions? If so, who should have the authority to halt actions by the agents?

A universal override is a bad idea, as it becomes a target for abuse. Notice that there is no universal override for the Internet or the web; I consider this a feature. In open multiagent systems we strive to distribute power, that is, to minimize the power of the most powerful agent in the system. In this way we also minimize the possibility of a catastrophe, whether planned or accidental. A universal override goes against these design guidelines.

Is there any danger that the agents themselves might be co-opted by a clever hacker and used to undermine a network?

Yes, individual agents can always be co-opted; that is the reality for every engineered product. But a correctly designed protocol will have taken into account the possibility of rogue agents, so their impact should be minimal. Also, a good system minimizes the power of the most powerful agent, so a few compromised agents should not present too much of a problem.
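One simple way to bound the power of any single agent is to require a quorum of independent reporters before anything is acted on. The sketch below only illustrates that principle; the threshold and the report format are assumptions of mine:

```python
# Sketch: a suspect only triggers a response when a quorum of distinct
# agents agrees, so a few compromised agents cannot act alone.
from collections import Counter

QUORUM = 3   # a handful of rogue agents cannot reach this by themselves

def decide(reports):
    """reports: list of (agent_domain, suspected_host) pairs."""
    votes, seen = Counter(), set()
    for domain, suspect in reports:
        if (domain, suspect) in seen:   # one vote per agent per suspect
            continue
        seen.add((domain, suspect))
        votes[suspect] += 1
    return [s for s, n in votes.items() if n >= QUORUM]

reports = [
    ("a.example", "203.0.113.7"),
    ("b.example", "203.0.113.7"),
    ("rogue.example", "10.0.0.5"),   # a lone rogue report is ignored
    ("c.example", "203.0.113.7"),
]
print(decide(reports))   # ['203.0.113.7']
```

With a quorum of three, the lone rogue report never triggers a response, which is exactly the property we want: compromising one agent buys the attacker very little.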

Are the agents trained? If so, how? Through simulations of network activity incorporating known past threats? Or is it better to allow the agents to monitor actual network activity?

There is ongoing work on applying machine learning techniques to network activity in order to detect what is normal versus abnormal behavior: like an immune system. I believe that work shows a lot of promise, especially once we let these agents communicate with each other, since they could then share local information in order to get a global view of an ongoing threat. For example, if an agent detects an abnormally large number of packets coming from another domain it could tell an agent on that domain, thereby possibly alerting it to a security problem within its own network.
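As a toy version of that immune-system idea, the following monitor learns a baseline of packets per second and flags samples far outside it. The simple mean/standard-deviation model is a stand-in for real machine learning, and the alert just prints instead of contacting a peer agent:

```python
# Sketch: learn "normal" traffic from observed samples and flag
# departures from it. Numbers and thresholds are illustrative.
import statistics

class TrafficMonitor:
    def __init__(self, threshold=3.0):
        self.history = []            # packets/sec samples seen so far
        self.threshold = threshold   # std-devs from the mean that count as odd

    def observe(self, packets_per_sec):
        if len(self.history) >= 30:  # need a baseline before judging
            mean = statistics.mean(self.history)
            stdev = statistics.stdev(self.history) or 1.0
            if abs(packets_per_sec - mean) / stdev > self.threshold:
                self.alert(packets_per_sec, mean)
        self.history.append(packets_per_sec)

    def alert(self, rate, mean):
        # In a deployed system this would notify the peer agent for the
        # offending source domain; here we just print.
        print(f"anomaly: {rate} pkt/s vs. normal ~{mean:.0f} pkt/s")

mon = TrafficMonitor()
for sample in [100, 105, 98, 102, 97] * 6:   # 30 normal samples
    mon.observe(sample)
mon.observe(2500)    # a burst well outside the learned baseline
```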

What are some of the new developments in this area that you see as particularly important?

The growth of semantic web technologies, such as semantic markup languages, inference engines, and web services, will greatly help speed up the adoption of open multiagent systems, such as the envisioned world-wide multiagent security protocol.

2 comments:

Yudz said...

Hello,
I am an undergraduate university student in my final year. My dissertation involves multi-agent systems; the title is "Timetabling System Using Multi-agent Systems". I have also gone through your PDF ebook on multiagent systems.

I would like to have your opinion on the following:
1. Why would I use agents, and not code directly? (I suppose because of complexity, or the complex algorithms? I'm interested in a complete answer that will satisfy the external examiner.)
2. What are the services required in agents? (Is it the negotiations, auctions, etc. you mentioned in the ebook?)
3. What are the limits of multi-agent systems? (What can be done and what cannot be done?)

I would be grateful if you could guide me through this.

Thanks in advance,
Yudish

jmvidal said...

Ok, I'll give it a shot:

1- Agents are code. I think what you mean to ask is: why add the extra abstraction layer? The answer is simple: add it only if it will make your program easier to build. If all you need to do is return the current weather conditions then you don't need agents, but if you need to implement a negotiation algorithm then you are already implementing a multiagent algorithm.
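For instance, even a toy negotiation, two agents making alternating offers with monotonic concession, is already a multiagent algorithm; all the numbers below are made up for the illustration:

```python
# Toy alternating-offers negotiation with monotonic concession: two
# autonomous decision makers converging (or not) on a price.
def negotiate(buyer_limit, seller_limit, step=5):
    offer, ask = 0, 100          # opening positions
    while offer < ask:
        offer = min(offer + step, buyer_limit)   # buyer concedes upward
        ask = max(ask - step, seller_limit)      # seller concedes downward
        if offer >= ask:
            return (offer + ask) / 2             # deal in the overlap
        if offer == buyer_limit and ask == seller_limit:
            return None                          # no zone of agreement
    return (offer + ask) / 2

print(negotiate(buyer_limit=60, seller_limit=40))   # agreement at 50.0
print(negotiate(buyer_limit=30, seller_limit=70))   # None, no deal
```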

2- The word "agent" means different things to different people.

3- Multiagent systems are just distributed algorithms implemented by autonomous programs. So what you are really asking is: what can and cannot be done by computers? We know a lot about what computers can do; what they cannot do, on the other hand, is a much trickier question.