Putting the magic in the machine since 1980.


Tuesday, November 17, 2015

Periop Mobile Learning System

We have a first version of the Periop Mobile Learning System website, which features some screenshots of the Periop-MLS Android app we have built and will be testing at the Greenville Health System hospital. Periop-MLS communicates with our own backend server or with a commercial backend.

We also have a second Android app that is currently being tested at GHS; more on that later.

Thursday, October 9, 2014

Automatic Generation of Agent Behaviors

This past summer at AAMAS my PhD student Bridgette Parsons and I published a paper with an algorithm that takes raw observational data and builds an agent behavioral model. Basically, we have lots of timestamped data of what nurses do in a hospital and used it to build an MDP-like model of the "average" nurse. It's not really an MDP because we have loops, and it's not really "average" in the mathematical sense. You can get the paper.
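
To make this concrete, here is a hedged sketch of the core estimation step (the task names and traces are made up; our actual pipeline involves more cleaning and state abstraction):

```python
from collections import defaultdict

def build_behavior_model(traces):
    """Estimate an MDP-like transition model from observed event traces.

    traces: list of event sequences, one per observed shift, where each
    event is a (task_name, duration_minutes) pair.
    Returns {task: {next_task: probability}} plus mean task durations.
    """
    counts = defaultdict(lambda: defaultdict(int))
    durations = defaultdict(list)
    for trace in traces:
        for (task, dur), (next_task, _) in zip(trace, trace[1:]):
            counts[task][next_task] += 1
            durations[task].append(dur)
        if trace:  # record the duration of the final task too
            durations[trace[-1][0]].append(trace[-1][1])
    model = {}
    for task, nexts in counts.items():
        total = sum(nexts.values())
        model[task] = {nt: c / total for nt, c in nexts.items()}
    mean_dur = {t: sum(ds) / len(ds) for t, ds in durations.items()}
    return model, mean_dur

# Two short example traces; loops (chart -> meds -> chart) stay as-is.
traces = [[("chart", 5), ("meds", 10), ("chart", 3)],
          [("chart", 4), ("vitals", 7), ("meds", 9)]]
model, durs = build_behavior_model(traces)
print(model["chart"])  # {'meds': 0.5, 'vitals': 0.5}
```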

Bridgette is now building a first-person shooter game which she will use to gather data on gamers' play. We will then use this data to build a player model. That is, we plan to automatically build NPCs that behave like real players.

Wednesday, November 6, 2013

iMedTracker: Offline Webapp for Tablets

The screenshot on the right is of a webapp I developed to help track the movement of nurses in a hospital. The webapp is to be used by someone tracking a nurse and is optimized for iPads. The user taps on each event as it happens and the app records the time and duration of all events.

The app uses jQuery Mobile and localStorage, so all the data is stored on the iPad, since there is no wifi on the hospital floor (that we can use). At the end of the day the user can upload all the data to the server with the click of a button.
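
Since the app is offline all day, the upload is a single batch POST. As a rough sketch of what the receiving end might look like (the route name and event fields here are hypothetical; our real server differs):

```python
# A minimal Flask sketch of a batch-upload endpoint. The /upload route
# and the event fields are illustrative assumptions only.
from flask import Flask, request, jsonify

app = Flask(__name__)
events = []  # stand-in for a real database

@app.route("/upload", methods=["POST"])
def upload():
    # The iPad app drains its localStorage and POSTs one JSON array of
    # events, each with a name, a start timestamp, and a duration.
    batch = request.get_json()
    for ev in batch:
        events.append((ev["name"], ev["start"], ev["duration"]))
    return jsonify({"saved": len(batch)})

if __name__ == "__main__":
    app.run()
```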

We then use this data to feed our 2D NetLogo models and 3D Unity models, which you can learn more about by reading some of our recent papers.

This work is part of research we are doing with the School of Nursing into building agent-based models of workflows inside a hospital, and into how these might be used to help reduce errors and optimize flow.

Android and Speech Recognition Hacking

As part of this project, I developed an Android app that uses the built-in voice recognition to let nurses check off items for a patient as he is getting ready for surgery.

Voice recognition was found to be not good enough for some words. We are examining possible ways of post-processing the results that come back from Google in order to improve the accuracy for our domain. For this I am starting with simple Bayesian learning techniques, but we might try other techniques in the near future.
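
To give the flavor of the Bayesian idea (a toy sketch; the vocabulary, priors, and similarity function are illustrative stand-ins, not our trained model): score each checklist term by a prior times a crude "acoustic" likelihood, and pick the best.

```python
import difflib

# Hypothetical checklist vocabulary with priors estimated from how
# often each item is actually spoken in our domain.
PRIORS = {"allergies": 0.4, "consent": 0.3, "npo status": 0.2, "labs": 0.1}

def likelihood(heard, word):
    # Crude stand-in for P(heard | word): plain string similarity.
    return difflib.SequenceMatcher(None, heard, word).ratio()

def correct(heard):
    """Return the vocabulary word maximizing P(word) * P(heard | word)."""
    return max(PRIORS, key=lambda w: PRIORS[w] * likelihood(heard, w))

print(correct("aller jeez"))  # -> "allergies"
```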

The app you see on the right was just for testing. I have another app which implements the actual checklists. We plan to tie it to the hospital's Electronic Medical Records systems using their REST API. Fun stuff!

As always, if you are a USC student (grad or undergrad) interested in hacking on this kind of stuff, just email me. We are also building 3D simulations in Unity.

Thursday, June 14, 2012

Constructive Network Formation Models

Gary Fredericks recently presented a poster at AAMAS on our work, An Analysis of Constructive Network Formation Models.

This is really interesting theoretical work where we look at the possible topologies that can emerge given initial payment rules. For example, take a bunch of disconnected nodes (think ISPs before buying access to any other ISP, or businesses before establishing relationships with any other, etc.) and a rule that says that any link between two nodes has a fixed cost which must be paid by the two nodes in question, but where everyone benefits from being closer (fewer edges between them) to everyone else. What kind of topologies should we expect to emerge if we let all agents build edges as they see fit?

We examine this problem in part by building graphs of graphs. That is, we build directed graphs, such as the one you see here, that show the possible evolution of these networks over time. From these we can then determine which topologies, or which features (small-world, small diameter, etc.), are more likely given different initial rules of the game (equal payment for an edge, one node must pay, auctions, etc.).
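
A hedged sketch of the graph-of-graphs construction (not Gary's actual code; a real analysis would also filter moves by whether the paying agents benefit under the chosen payment rule):

```python
from itertools import combinations

def meta_graph(n):
    """Build a 'graph of graphs' over n nodes: each state is a network
    (a frozenset of undirected edges) and each meta-edge is the
    addition of one new link to that network."""
    all_edges = list(combinations(range(n), 2))
    start = frozenset()
    states, moves = {start}, []
    frontier = [start]
    while frontier:
        g = frontier.pop()
        for e in all_edges:
            if e not in g:
                g2 = g | {e}
                moves.append((g, g2))
                if g2 not in states:
                    states.add(g2)
                    frontier.append(g2)
    return states, moves

states, moves = meta_graph(3)
print(len(states), len(moves))  # 8 states, 12 meta-edges for n = 3
```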

Saturday, March 17, 2012

Web Applications Class, for Graduate Students

I will be teaching CSCE 242: Web Applications again in the Fall. Go sign up, please!

Also, if you are a graduate student and want to take it, come talk to me; we can probably arrange for you to get CSCE 798 (directed study) credit for it. And, if you are a PhD student interested in doing a thesis related to web applications, online communities, or agent-based simulations (even better, all of these together!) come talk to me!

Friday, June 18, 2010

Resiliency in Supply Chain Networks

Andrew Smith and I have been looking at the problem of resiliency in supply chain networks. That is, given the fact that supply chains are formed by selfish agents who create links with others based on their own local interests, we cannot expect the resulting network to always have the optimal resiliency characteristics. It is quite possible that, because everyone is acting myopically—agents care only about their immediate neighbors—the resulting network will have some bad global properties. The global property Andrew studies first is resiliency, which we define as the ability of the network to withstand the loss of one node.

Andrew presented his paper, titled A Practical Multiagent Model for Resilience in Commercial Supply Networks, at the latest Agent-Mediated Electronic Commerce workshop. He implemented a model that generates supply chain networks based on a well-known model from management theory, so the networks' topology should match real-world networks, and then examined the resiliency of the resulting networks. The results are not surprising in the general sense (more connections mean more resiliency) but they do give us a quantitative measure of network resiliency.

Below are the slides from his presentation:

Friday, May 7, 2010

SeaPort Container Terminal Simulation

Port of Long Beach

Next week I will be presenting our paper (with Nathan Huynh) on agent-based seaport container terminal simulation at the Workshop on Agents in Traffic and Transportation.

Basically, we used NetLogo to build a simulation of a container seaport terminal, like the one you see in the photo. We focused only on the decision-making process of the cranes. Those large cranes have to pick up the containers and place them on the trucks as they arrive. Since the cranes are slow and there are many trucks, the crane operators have to trade off several competing interests when making their decisions: trucks want to get out of there as quickly as possible, it takes a crane a long time to travel the length of the seaport, and the port manager wants to maximize the number of trucks served without making any one truck wait too long.

In this paper we tried the more obvious utility functions to see how they performed in a seaport with 2 or 3 cranes. The utility functions used are (sketched in code after the list):

  1. distance-based: roughly, go to the nearest truck, but avoid getting in the way of the other cranes,
  2. time-based: roughly, serve the trucks on a first-come, first-served basis, while trying to stay out of the way of other cranes,
  3. a mixture of the above two.
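
Roughly, in code, the three strategies look like this (a toy sketch, not the NetLogo model; the positions, weights, and crowding penalty are illustrative):

```python
def distance_score(crane, truck, other_cranes):
    """Strategy 1: go to the nearest truck, but stay out of the way."""
    score = -abs(crane["pos"] - truck["pos"])
    if any(abs(truck["pos"] - c["pos"]) < 2 for c in other_cranes):
        score -= 100  # illustrative penalty for crowding another crane
    return score

def time_score(truck, other_cranes, now):
    """Strategy 2: first-come first-served, with the same penalty."""
    score = now - truck["arrived"]
    if any(abs(truck["pos"] - c["pos"]) < 2 for c in other_cranes):
        score -= 100
    return score

def pick_truck(crane, trucks, other_cranes, now, w=0.5):
    """Strategy 3: a weighted mixture of the two scores."""
    return max(trucks, key=lambda t:
               w * distance_score(crane, t, other_cranes) +
               (1 - w) * time_score(t, other_cranes, now))

trucks = [{"pos": 3, "arrived": 0}, {"pos": 10, "arrived": 5}]
print(pick_truck({"pos": 4}, trucks, [{"pos": 9}], now=8))
# -> {'pos': 3, 'arrived': 0}: nearby, waited longest, clear of the other crane
```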

Test results showed that the distance-based function won, by a lot. It won not only on throughput, as it obviously would, but also on the fairness measures, which is not so obvious.

I'll be putting up the NetLogo model on my MAS NetLogo models page soon. The slides from the talk I'll deliver on Tuesday are below:

We are currently working on expanding this model, both on the programming side, by simulating more parts of the supply chain, and on the analysis side, by putting bounds on the solution quality that can be achieved assuming local control and local information.

Monday, April 5, 2010

Multiagent Systems Introduction

Last week it was my pleasure to give an invited talk at Temple University as part of their Robert M. and Mary Haythornthwaite Foundation Distinguished Lecture Series. I tried to provide a high-level overview of what multiagent systems are all about, which is always a huge challenge as the field covers a lot of disparate areas. After some thought, I decided to pick three applications and
  1. describe all the issues relevant to the problem,
  2. show how we use theory (either algorithms, game theory, or Economics) to get a better understanding of the underlying issues,
  3. explain how there is still much engineering that needs to be done, because even the best theory or algorithm still leaves millions of details unresolved, and software is just a collection of details.

For posterity's sake, below are the slides I used. The slides contain as little text as possible, which I think generally makes for a better talk.

Wednesday, February 17, 2010

Current Research Talk

This Friday at 2:30pm in 300 S. Main B102 I will be giving an informal talk about our current research projects for our CSCE 791 class. It is open to anyone who is interested. It will be a long, rambling, and highly disorganized version of my previous 7-minute talk.

Friday, November 13, 2009

Ongoing Research Projects

Every year our department has 7-minute madness presentations in which we faculty present our ongoing research. It is a great opportunity to hear about all the innovative research going on in our department (especially in the area of multiagent systems, of course :-). The slides from my 7-minute talk are below.

For those of you who could not make it, and since I ran out of time after just a couple of slides, I write a bit about each project below.

We have a lot of papers published on distributed combinatorial auctions, especially on bidding algorithms for the PAUSE auction. This work was done with Benito Mendoza who received his PhD in May and is now working at Exxon on multiagent simulations. I continue to work on this topic but with a slightly different focus: viewing these distributed auctions as negotiation networks.

The iCoach project is with prof. Sara Wilcox from the Department of Exercise Science. Chuck Burris, an undergraduate, has been developing a webapp on the Google App Engine that will send customized SMS messages to users by first gathering information from the user's phone, pedometer, and online sources (where he is, via GPS; how much he has moved, via pedometer or accelerometer; what his plans are, via his Google calendar; etc.). There is a lot of information about us on the net, and the new smartphones will give us even more. However, aside from collecting this information and displaying it back to the user in a pretty graph, there has been very little research into how to use this information to improve our lives. That is what we study. Our initial system is being designed to monitor, educate, and coach first-time pregnant women.

Wikitheoria is another Google App Engine webapp we are building, and by "we" I mean Jon Mayhak and Jason Rikard. The project is with prof. Barry Markovsky from the Sociology department and can be summarized as "Wikipedia meets Stack Overflow for sociologists, with added semantic structure". Our goal is to build a site that will enable and encourage sociologists to post their models using a common ontology (a set of terms with agreed-upon definitions). The common ontology will also be developed on the site. The site is currently being tested by forcing a graduate class of sociologists to use it.

The port simulation project is very new and is joint work with prof. Nathan Huynh from the Department of Civil Engineering. Nathan is an expert in port and trucking problems. In our initial collaboration we are looking at the problem of how the crane operators in a port should act or cooperate. The job of a crane operator is to pick up those big containers, one at a time, and put them on the trucks as the trucks arrive. The containers can be stacked up, so sometimes the crane operators have to do some re-stacking of the containers, which wastes a lot of time. We are building an agent-based simulation of this problem and trying to find the best strategies for the operators.

Andrew Smith is a new PhD student who has already written a paper on supply chain resiliency. Using standard models of supply-chain formation we generated sample chains and then tested these topologies for resiliency to single-point attacks. That is, if one node goes down, how does that affect the network as a whole? In the paper (not yet published) we present a numerical measure of resiliency, and our test results show how it varies with the number of relationship resources (a measure of sociability) and network size.

Andrew is also continuing his work with Karim Mahrous from Sandia National Labs on media dispersion and influence as part of his PhD thesis. This is another agent-based modeling project, albeit a much larger one for which we only have to develop small parts of the code, which aims to model how news travels in a social network. The project will cover everything from broadcast media, to multicast media (Twitter), to one-on-one exchanges, whether electronic (SMS) or good ol' fashioned conversations over lunch.

So, if you are a graduate student and find these ideas interesting, you can sign up for my Multiagent Systems class in the Spring, which will cover the basic background knowledge needed to build, and understand, multiagent systems. If you are from a funding agency or a company with a few bucks to spare for research, I would love to hear from you!

Friday, July 3, 2009

Best Way to Bid in a Distributed Combinatorial Auction?

Over the last year, Benito Mendoza examined how different bidding strategies (really, algorithms) fare under the PAUSE auction. I would like to go over these results, but first, some background.

A combinatorial auction is one where there are many items for sale at the same time and bidders can place bids on multiple items. For example, a bid might be "I'll pay $10 for items A, B, and X." These bids are then sent to the auctioneer, which must find the set of winning bids that maximizes revenue, a problem that is NP-complete. Check out this visualization of a combinatorial auction that I developed for class.
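
To make "NP-complete" concrete, here is a hedged brute-force sketch of winner determination with OR bids, fine for a handful of bids and hopeless at scale:

```python
from itertools import combinations

bids = [({"A", "B", "X"}, 10), ({"A"}, 4), ({"B"}, 5), ({"X", "B"}, 7)]

def best_allocation(bids):
    """Try every subset of bids; keep the highest-revenue subset in
    which no item is sold twice. Exponential in the number of bids."""
    best, best_rev = (), 0
    for r in range(1, len(bids) + 1):
        for combo in combinations(bids, r):
            items = [i for s, _ in combo for i in s]
            if len(items) == len(set(items)):  # bids must be disjoint
                rev = sum(p for _, p in combo)
                if rev > best_rev:
                    best, best_rev = combo, rev
    return best, best_rev

print(best_allocation(bids))  # best revenue is 11: {A} for 4 plus {X,B} for 7
```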

In the PAUSE auction there is no centralized auctioneer. The auction instead proceeds in steps. At the first step we hold parallel English auctions for each item; that is, everyone bids on every item individually. Thus, at the end we will have a (temporary) winning bid for each item. At the second step there is one auction, but now all bidders must place bid-sets that cover all items in the auction, though they can use bids that the other agents placed before. Also, they can now place bids that cover two items. For example, an agent might place a bid-set that includes a bid from itself for items A and B, as well as bids, from other agents, for items C, D, and F. By restricting the agents to placing complete bid-sets we are, in effect, forcing the bidding agents to solve the winner-determination problem using the existing public bids.
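
The key constraint is easy to state in code. A simplified sketch of the validity check a bid-set must pass (the real rules also constrain prices and, at stage k, the size of a bidder's own bundles):

```python
def valid_bid_set(bid_set, all_items):
    """A PAUSE bid-set must cover every item exactly once, mixing the
    bidder's own bids with currently-standing bids from other agents."""
    covered = [item for bid in bid_set for item in bid["items"]]
    return sorted(covered) == sorted(all_items)

# My own bid on A and B, combined with others' standing bids on C, D, F:
bid_set = [{"bidder": "me",     "items": ["A", "B"], "price": 10},
           {"bidder": "agent2", "items": ["C", "D"], "price": 6},
           {"bidder": "agent3", "items": ["F"],      "price": 3}]
print(valid_bid_set(bid_set, ["A", "B", "C", "D", "F"]))  # True
```

In effect, every valid bid-set is a proposed solution to the winner-determination problem.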

Well, sort of. Because bidders are selfish, they will want to place the bid that wins them the most items for the least money (the bid-set that maximizes their utility). Two years ago we developed the PAUSEBID algorithm, which a bidder can use to find the myopically-optimal bid-set: the bid-set that, if the auction were to end at that moment, maximizes the bidder's utility. The problem is that, since this is an NP-complete problem, PAUSEBID cannot be used in auctions with more than about a dozen items. Thus, the next year we developed approximate bidding algorithms which have linear running time (way faster). This was good.
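
To give the flavor of the linear-time approach (a simplified greedy sketch, not the published GREEDYPAUSEBID): scan your valued bundles once and grab the most profitable one that beats the standing prices, instead of searching the exponential space of bid-set combinations.

```python
def greedy_bid(my_values, standing):
    """Return the (bundle, price) that maximizes my value minus the
    price needed to outbid the standing winners, or None if no
    profitable bid exists. One pass, no combinatorial search."""
    epsilon = 1  # minimum bid increment, illustrative
    best, best_util = None, 0
    for bundle, value in my_values.items():
        price = sum(standing[item] for item in bundle) + epsilon
        if value - price > best_util:
            best, best_util = (bundle, price), value - price
    return best

standing = {"A": 3, "B": 4, "X": 2}       # current winning prices
my_values = {("A", "B"): 12, ("X",): 2}   # my private valuations
print(greedy_bid(my_values, standing))    # (('A', 'B'), 8)
```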

But it raised another question: if the other bidders are using PAUSEBID, should I also use PAUSEBID and spend all that time computing, or should I use GREEDYPAUSEBID and save all that computational time at the risk of getting a lower utility? That is, should I think a lot about my next bid or just kinda wing it? The surprising results, which have yet to be published outside of Benito's thesis, are that it is better for all bidders to use GREEDYPAUSEBID. That is, if all the other agents are using PAUSEBID, thus picking their bids optimally, then I can actually expect slightly higher utility by bidding with a simple greedy algorithm (GREEDYPAUSEBID). Also, in the world where all the agents are using GREEDYPAUSEBID, the bidders get higher utility than in the world where they are all using PAUSEBID. What happens is that prices drop a bit and thus all the bidders (buyers) win a bit; of course, the sellers lose.

Even more interesting, we also found that in the world where all the other bidders are using GREEDYPAUSEBID, my best strategy is to also use GREEDYPAUSEBID. Again, I would actually lose utility by trying to myopically optimize my bidding strategy. Putting all these together, we find that the population where all bidders use GREEDYPAUSEBID is stable (it is a Nash equilibrium).

These results go against our initial intuitions. But we must remember that an auction with lazy bidders will result in a lower sale price, which benefits, on average, all bidders. Also, a myopically-optimal bidder will reveal utility gains to the other, lazy bidders, which they will then exploit, thus raising the final price.

Tuesday, June 16, 2009

Mendoza PhD in Bidding Algorithms

Last month Benito Mendoza successfully defended his thesis. His research looks at the problems of establishing a distributed combinatorial auction, without the use of a central auctioneer, in which the bidders have an incentive to participate and help find the best bid-set.

I believe this work lays the foundation for future distributed resource allocation protocols. That is, currently companies and consumers engage in rather simple one-to-one sequential negotiations with each other: you negotiate with one car dealer, then another, then make a decision. These lead to sub-optimal allocations (translation: there was another solution where everyone in the world is happier than in the solution we ended up at). Now that all business is done on the Internet, we can build sophisticated negotiation agents that will arrive at better global solutions by using the distributed resource allocation protocols that we are working on. But first, we must ensure that these protocols do find better solutions, are incentive-compatible and fair, and that people want to use them (people are not always rational).

Read his thesis for more details:

Benito has accepted a position at Exxon doing agent-based modeling.

Saturday, December 20, 2008

Multiagent Security

I was recently interviewed by Mark Ingebretsen from IEEE Intelligent Systems about multiagent security, on which I did some work a few years ago. I don't know if my answers will appear in a future issue or not, so I am posting them here.

Before I answer these questions I want to explain what I mean by a multiagent system and how I envision the future of security using multiagent systems.

Prototype Architecture

When designing a multiagent system, what we actually build is an interaction protocol, which then dictates the agents' strategies. For example, I consider BitTorrent to be the most successful multiagent system currently deployed. BitTorrent consists of an interaction protocol and a number of clients written by anyone who wants to. Of course, the agents in BitTorrent (the clients) are human/machine hybrids. The users get to make many decisions, such as which files to upload and download, how long to seed a connection, how to limit bandwidth, etc. In a pure machine-only multiagent system all those decisions would be made by the software. I don't think we will want many machine-only multiagent systems; most will be hybrids.

In the case of a security multiagent system, I envision an open system where agents act like sensors in a distributed sensor network and publish or relay important information to each other. That is, the first deployment step will be sensing. Once we get that working we can start thinking about having the agents act on that information: shutting down rogue PCs, dropping packets, filtering certain protocols, applying patches, etc. That is, the second deployment step will be acting. Of course, as we give the agents more and more capabilities we need to prepare for the possibility that some agents might start taking actions against the system.

Are multi-agent systems best used within a single organization's computer network or can they function as effectively if they reside on multiple connected networks? Similarly, should multi-agent systems be allowed to spread freely throughout the Internet (e.g. via voluntary downloads) or is it important that their propagation be strictly controlled?

The best way for multiagent security to be effective is by having one world-wide multiagent security protocol. Organizations would be free to choose to participate or not and there would be many different levels of participation. For example, at its simplest a company could offer a REST page with data on the security status of its internal network. The data on this page could be more detailed for a local connection than for outside connections, in case the owners are concerned about privacy. The data would be used by agents on each machine, whether local or remote, to detect and stop security threats. The same REST interface might then be extended to allow outside parties to make reports or requests. For example, an outside agent might ask another one to shut down a particular connection coming from its domain because it believes it to be a DoS attack.

Thus, I don't see agents “spreading” throughout the Internet. Instead, the protocol will be freely available and organizations will decide which parts of it to implement. Each organization must decide what information to make public, how to use information from others, and how to handle outside requests. The growth of the system is dictated by the individual desires of the participants.

Have multi-agent systems evolved to the extent that they can take collective action to actually halt a network threat? If the answer is yes, what sort of actions do they take? If no, is this a viable goal that we can expect to see implemented at some point?

I am unaware of any deployed systems that take autonomous action based on aggregate data, but there is no technical reason why these cannot exist. One problem is obtaining the system administrators' trust. However, I do expect that as the technology matures and research prototypes demonstrate their capabilities, we will see more autonomous security systems.

What are the dangers and possible consequences that might occur if the agents were to misidentify a legitimate communication as a threat? Could the result be a serious slowing of network traffic?

That is exactly the type of problem one must keep in mind when designing an interaction protocol. The simplest way to minimize the threat of error is to minimize the agents' capabilities: if they can't do much, then they can't do much damage. As we start giving them more capabilities, such as shutting down computers and networks, the threat of misuse becomes real. At this point we start looking at human management of the multiagent system. That is, the agents should present the system administrator with their case for why they want to perform a given action, but only the administrator's password would allow the system to take the action. Note that this administrator only has control over his organization's agents.

Are there provisions built in allowing some sort of universal override of the agents' collective actions? If so, who should have the authority to halt actions by the agents?

A universal override is a bad idea as it becomes a target for abuse. Notice that there is no universal override for the Internet or the web; I consider this a feature. In open multiagent systems we strive to distribute power, that is, to minimize the power of the most powerful agent in the system. In this way we also minimize the possibility of a catastrophe, either planned or accidental. A universal override goes against these design guidelines.

Is there any danger that the agents themselves might be co-opted by a clever hacker and used to undermine a network?

Yes, individual agents can always be co-opted; that is the reality for every engineered product. But a correctly designed protocol will have taken into account the possibility of rogue agents, so their impact should be minimal. Also, a good system minimizes the power of the most powerful agent, so a few compromised agents should not present too much of a problem.

Are the agents trained? If so, how? Through simulations of network activity incorporating known past threats? Or is it better to allow the agents to monitor actual network activity?

There is ongoing work on applying machine learning techniques to network activity in order to detect what is normal versus abnormal behavior: like an immune system. I believe that work shows a lot of promise, especially once we let these agents communicate with each other, since they could then share local information in order to get a global view of an ongoing threat. For example, if an agent detects an abnormally high number of packets coming from another domain, it could tell an agent on that domain, thereby possibly alerting it to a security problem within its network.

What are some of the new developments in this area that you see as particularly important?

The growth of semantic web technologies—semantic markup languages, inference engines, and web services—will greatly help speed up the adoption of open multiagent systems, such as the envisioned world-wide multiagent security protocol.

Saturday, August 9, 2008

NSF Award for Negotiation Networks Research

I am honored to have just received NSF award 0829580 to continue my research into negotiation networks. The total award is $234K over the next 3 years. This research falls under the Theoretical Foundations program, within the Scientific Foundations of the Internet's Next Generation (SING) sub-topic.

We hope that our research will lay the scientific foundations for the Internet's next generation of protocols and deployment strategies. The project summary of the proposal explains our goals:

We propose to develop automated negotiation protocols for autonomous agents in negotiation networks, which we define as a type of bargaining problem where every agent must negotiate with a fixed set of other agents in order to arrive at a unique deal. Negotiation networks are, in effect, a natural re-formulation of the winner-determination problem in combinatorial auctions as a negotiation problem. They thereby distribute the costly winner-determination computation among the agents and avoid the need for unnecessary exchange of currency or trusting of third parties.

Intellectual merit: Since the problem of negotiation networks combines features of characteristic form games and bargaining games from game theory, combinatorial auctions and distributed mechanism design from Economics, network exchange theory from Sociology, and distributed algorithm design from computer science, its solution should be a significant contribution to all these fields. While several parts of the proposed problem have been studied in the various disciplines, we will bring these disparate parts within one framework and provide solutions for this very important problem. Our approach is fresh in that we focus on algorithmic solutions for the dynamic, distributed problem of resource allocation among self-interested parties, in contrast with game theory and Economics' solutions which are typically steady-state axiomatic solutions. Success in this research is potentially transformative as it will provide the foundation for the engineering of protocols and efficient algorithms for distributed resource allocation, which is a pervasive problem in our ever-growing highly-interconnected society.

Broader impacts: The negotiation networks problem is not only significant from a research perspective, it also has some very immediate applications. One such application is the multiagent enactment of workflows for the growing SOAP and REST-based (Web Services) service Internet, where complex tasks are dynamically and automatically bid for and allocated among an arbitrarily large number of agents. Such systems would revolutionize just-in-time manufacturing and purchasing practices by automating not only the paperwork but also the allocation of resources and contract negotiation. Another near-term application lies in the development of incentive-compatible routing mechanisms for the new Internet, which would provide the proper incentives for companies to deploy more bandwidth and to make their existing bandwidth available to others without having to worry about freeloaders. Further down the road, negotiation networks could even eliminate the need for predefined workflows and lead to much more efficient and dynamic allocation of resources in our economy. These adaptive supply chains would be resilient to local failures and attacks as they dynamically re-negotiate deals in response to environmental changes. In fact, with slight modifications these distributed resource allocation algorithms could be used to coordinate first-responders in an emergency situation, to program distributed environmental sensor networks, and to instruct automated robotic swarms.

Wednesday, April 30, 2008

Marriage Problem

In the marriage problem we have an equal number of men and women whom we want to pair up (it is presumed they are all heterosexual). Each one has an ordered list of their preferred mates. The goal is to find the set of marriages such that no two people would rather get divorced and marry each other; that is, no two people prefer each other over their assigned mates.

This problem can be solved with a simple algorithm where all the men propose to the women. Then, each woman with more than one proposal rejects all but her most preferred suitor, whom she temporarily accepts. The bachelors then repeat the process by proposing to their most preferred woman who has not yet rejected them. The process continues until all women have a partner. I have a NetLogo implementation of the marriage problem that illustrates this process.
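
In plain code, the proposal rounds look like this (a compact sketch of the same deferred-acceptance idea the NetLogo model animates):

```python
def stable_marriage(men_prefs, women_prefs):
    """Men propose in preference order; each woman keeps her best
    suitor so far and rejects the rest."""
    free = list(men_prefs)                   # the bachelors
    next_choice = {m: 0 for m in men_prefs}  # next woman to propose to
    engaged = {}                             # woman -> man
    rank = {w: {m: i for i, m in enumerate(ps)}
            for w, ps in women_prefs.items()}
    while free:
        m = free.pop(0)
        w = men_prefs[m][next_choice[m]]
        next_choice[m] += 1
        if w not in engaged:
            engaged[w] = m
        elif rank[w][m] < rank[w][engaged[w]]:  # she prefers m
            free.append(engaged[w])
            engaged[w] = m
        else:
            free.append(m)                      # rejected; tries again
    return engaged

men = {"al": ["ann", "bea"], "bob": ["ann", "bea"]}
women = {"ann": ["bob", "al"], "bea": ["al", "bob"]}
print(stable_marriage(men, women))  # {'ann': 'bob', 'bea': 'al'}
```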

It is interesting to notice that while the women are the ones that get to turn down the men, the men do overwhelmingly better than the women. I mentioned this to my wife and she said that is why she proposed to me. Of course, in the end it is always the women who propose to us.

I am interested in finding distributed algorithms that arrive at stable solutions, that is, those where no two people would rather be married to each other than to their partner. Here is a link to a set of slides that goes into more detail on the marriage problem, in case you want to learn a bit more.

Monday, April 28, 2008

Trading Houses

Imagine that all students are assigned dorm rooms in some ad-hoc way. After each one has a room, two students might find that they both prefer the room the other one has; thus they could switch rooms and both be happier. Similarly, three students (A, B, and C) might find that A prefers B's room, B prefers C's room, and C prefers A's room. These three students might also trade rooms in a cycle. This can go on for even larger cycles. But how do we find these cycles?

A very simple algorithm, proposed back in the seventies, is to have each agent point to its most preferred choice. If the resulting graph has any cycles (it will), then all the agents in the cycles exchange rooms and drop out of the game. The remaining agents then point to their most preferred remaining room and we repeat the process until there are no agents left.
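
Here is a hedged, centralized sketch of that idea (made-up agents and rooms; the NetLogo demo below does the cycle detection with a distributed protocol instead):

```python
def top_trading_cycles(prefs, owns):
    """prefs: agent -> ordered room preferences; owns: agent -> room.
    Each round, every remaining agent points at the owner of its
    favorite remaining room; agents on a cycle trade and drop out."""
    owner = {room: a for a, room in owns.items()}

    def favorite(a, remaining):
        return next(r for r in prefs[a] if owner[r] in remaining)

    assignment, remaining = {}, set(prefs)
    while remaining:
        points_to = {a: owner[favorite(a, remaining)] for a in remaining}
        a, seen = next(iter(remaining)), []
        while a not in seen:          # follow pointers until a repeat
            seen.append(a)
            a = points_to[a]
        cycle = seen[seen.index(a):]  # everyone from the repeat onward
        for agent in cycle:
            assignment[agent] = favorite(agent, remaining)
        remaining -= set(cycle)
    return assignment

prefs = {"A": ["rB", "rA", "rC"], "B": ["rC", "rB", "rA"],
         "C": ["rA", "rB", "rC"]}
owns = {"A": "rA", "B": "rB", "C": "rC"}
print(top_trading_cycles(prefs, owns))  # all three trade in one cycle
```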

I have implemented a simple demonstration of this top trading cycle algorithm using a simple distributed cycle detection protocol. It's fun to watch!

Saturday, February 9, 2008

Fast Bidding in a Distributed Combinatorial Auction

Walmart needs stuff moved from point A to point B, for many A's and B's. Also, these deliveries have other possible requirements: one might need a refrigerated truck, one might be a night drop-off, one might be a rush order, etc. Similarly, trucking companies have complex requirements about where and when they can deliver. Put these millions of orders together and you have a complex resource allocation problem. If you are willing to have everyone send their requirements to a centralized auctioneer then, maybe, you can solve this problem.

However, what if you don't want to trust, or can't afford to pay, a centralized auctioneer? We are studying automated incentive-compatible negotiation protocols for distributed resource allocation. I know this is a mouthful, but all it is saying is that we are developing algorithms that automated agents can use to negotiate with each other and solve these types of problems. Think of it as moving all buy/sell transactions to the web (we are nearly there) and then using software to decide whom to buy what from and at what price. We can show that this will lead to more efficient solutions; that is, everybody wins.

Anyway, our latest paper detailing our efforts is
  • Benito Mendoza and José M. Vidal. Approximate Bidding Algorithms for a Distributed Combinatorial Auction (Short Paper). In Padgham, Parkes, Müller, and Parsons, editors, Proceedings of the 7th International Conference on Autonomous Agents and Multiagent Systems, May 2008.
    Distributed allocation and multiagent coordination problems can be solved through combinatorial auctions (CAs). However, most of the existing winner-determination algorithms (WDAs) for CAs are centralized. The PAUSE auction is one of a few efforts to release the auctioneer from having to do all the work. The PAUSEBID bidding algorithm generates myopically-optimal bids for agents in a PAUSE auction but its running time is exponential in the number of bids. We present new approximate bidding algorithms that not only run in linear time but also increase the utility of the bidders as a result of a small decrement in revenue.

Check it out: approximation mechanisms let us achieve 1000-fold speedups at only a small cost in utility.

Tuesday, January 8, 2008

A NetLogo Introduction

I will be speaking tomorrow, Thursday, January 10, at 6:30pm at the Columbia Linux User's Group about NetLogo. It will be an informal introduction. I will try to show why it is a great language both for learning to program and for building simulations that can be used to engage students in active learning of other subjects, such as physics, chemistry, economics, and sociology. The meeting is open to all.

Tuesday, October 23, 2007

Combinatorial Auction Model

One can visualize a combinatorial auction (with OR bids only) as a graph with two types of nodes: bids and items. Each bid node is labeled with the price of the bid and connected to all the items it covers. This is the visualization I have implemented in my combinatorial auction NetLogo model. It implements the branch-and-bound algorithm on the branch-on-bids tree as found in my textbook.
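
The search itself is compact. A stripped-down sketch of branch-on-bids with a simple optimistic bound (the model adds the graph visualization on top of this):

```python
def winner(bids):
    """Branch-on-bids: at each node either include the next bid (if it
    conflicts with nothing taken so far) or skip it. Prune when even
    winning every remaining bid could not beat the best found so far."""
    best = {"rev": 0, "alloc": []}

    def search(i, used, rev, alloc):
        if rev > best["rev"]:
            best["rev"], best["alloc"] = rev, list(alloc)
        if i == len(bids):
            return
        if rev + sum(p for _, p in bids[i:]) <= best["rev"]:
            return  # optimistic upper bound can't help: prune
        items, price = bids[i]
        if not items & used:  # branch 1: include bid i
            search(i + 1, used | items, rev + price, alloc + [bids[i]])
        search(i + 1, used, rev, alloc)  # branch 2: skip bid i

    search(0, frozenset(), 0, [])
    return best

bids = [(frozenset("ABX"), 10), (frozenset("A"), 4),
        (frozenset("B"), 5), (frozenset("X"), 3)]
print(winner(bids))  # revenue 12: the three singleton bids beat the bundle
```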