Tuesday, November 20, 2007

Twitter

Imagine a blog on which people are constantly updating their current activities. Twitter is a small application that allows a person to post what he or she is doing or thinking right now; it is a strange hybrid between a blog and an instant message. It has become most popular as a widget that people add to their Facebook or MySpace pages. Since many people use these pages as their web home page and as the basis for communications with friends and associates, they find tools like this useful for sharing real-time information. The application does not make much sense to a traditional worker who does not use Facebook or have time to constantly update a personal status. But one interesting application might be to create remote sensors that constantly post their data through Twitter. Anyone interested in the information from a sensor would subscribe to its “tweets” and see what is happening. A user would probably subscribe to multiple feeds and look for an aggregating tool that could pull all of them into a unified picture. In this case, the interesting part of Twitter is its publish/subscribe feature over the internet.
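As a rough sketch of that idea, imagine a generic broker that relays short status posts from sensors to any subscriber. This is not Twitter’s actual API; the classes and feed names below are invented purely for illustration.

```python
# Minimal publish/subscribe sketch of the "sensor feed" idea. The broker is a
# stand-in for any service (Twitter or otherwise) that relays short status
# posts to subscribers; none of this uses Twitter's real API.

from collections import defaultdict
from datetime import datetime


class StatusBroker:
    """Relays short status posts from publishers to subscribers."""

    def __init__(self):
        self._subscribers = defaultdict(list)  # feed name -> list of callbacks

    def subscribe(self, feed, callback):
        self._subscribers[feed].append(callback)

    def publish(self, feed, message):
        stamped = f"[{datetime.utcnow():%H:%M:%S}] {feed}: {message}"
        for callback in self._subscribers[feed]:
            callback(stamped)


class Aggregator:
    """Pulls several sensor feeds into one unified picture."""

    def __init__(self):
        self.picture = []

    def receive(self, update):
        self.picture.append(update)


broker = StatusBroker()
aggregator = Aggregator()

# Subscribe the aggregator to two hypothetical sensor feeds.
for feed in ("river_gauge_7", "perimeter_cam_2"):
    broker.subscribe(feed, aggregator.receive)

broker.publish("river_gauge_7", "water level 4.2 ft and rising")
broker.publish("perimeter_cam_2", "no movement detected")
print("\n".join(aggregator.picture))
```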


The Curse of Knowledge

The Curse of Knowledge is described by Chip Heath, a professor at Stanford’s Graduate School of Business, in the book Made to Stick. He explains why the more people learn, the less capable they become of communicating what they know.

“People tend to think that having a great idea is enough, and they think the communication part will come naturally. We are in deep denial about the difficulty of getting a thought out of our own heads and into the heads of others. It’s just not true that, ‘If you think it, it will stick.’

And that brings us to the villain of our book: The Curse of Knowledge. Lots of research in economics and psychology shows that when we know something, it becomes hard for us to imagine not knowing it. As a result, we become lousy communicators. Think of a lawyer who can’t give you a straight, comprehensible answer to a legal question. His vast knowledge and experience renders him unable to fathom how little you know. So when he talks to you, he talks in abstractions that you can’t follow. And we’re all like the lawyer in our own domain of expertise.

Here’s the great cruelty of the Curse of Knowledge: The better we get at generating great ideas—new insights and novel solutions—in our field of expertise, the more unnatural it becomes for us to communicate those ideas clearly. That’s why knowledge is a curse. But notice we said “unnatural,” not “impossible.” Experts just need to devote a little time to applying the basic principles of stickiness.

JFK dodged the Curse with ‘put a man on the moon in a decade’. If he’d been a modern-day politician or CEO, he’d probably have said, ‘Our mission is to become the international leader in the space industry, using our capacity for technological innovation to build a bridge towards humanity’s future.’ That might have set a moon walk back fifteen years.”


Thursday, November 8, 2007

Machinima and Training

I previously described the basics of Machinima. Since this art form uses the same tools that computer games are built on, it is possible to create digital movies that exactly match the look, feel, and capabilities of a game. This would be particularly useful for building a pre-exercise movie that explains the situation and the mission to be conducted in a game/simulation. The same tools could be used to record the execution of the in-game mission and then use those movies in the after-action review (AAR). A 3D window into the exercise from multiple perspectives would be a much more powerful learning tool than an outbrief based on bullet points and a 2D schematic of unit movement.

From an entertainment perspective, Machinima may be a better medium for creating Internet-based “television” programming. The form falls somewhere between filming live actors and traditional animation. One set of authors offers a list of the Top 10 Machinima films that have been created. Watch them at home.


Microsoft XNA for Serious Games

Microsoft recognized that it was difficult for game studios to create both a PC and an Xbox version of every game, so they created the XNA Framework, which allows a team to create a single code base that can be compiled for either the PC or the Xbox without changes to the code. The framework provides a great deal of the functionality needed for a game (similar to a game engine, but without the same breadth of capabilities). Microsoft has released all of this code to the public so that it can be downloaded and used by anyone to create a game (specifically first-person shooters and real-time strategy games). Games developed by amateur users can be compiled to run on either the PC or the Xbox, which is an effective way of turning every aspiring game programmer into an Xbox developer, similar to the approach Microsoft took in promoting DirectX over OpenGL ten years ago. Potentially, a serious game developer for the Army could use the XNA Framework to create a military game that is ready for either the PC or the Xbox. We have not seen any defense contractors working with XNA yet. However, in order for a game to run on the Xbox, the developer must get a licensing code from Microsoft. Currently, Microsoft has made it clear that they intend to give such licenses to games that fit well into their Xbox Live (online) family of games. They are not interested in seeing XNA used to create serious games, though that might change in the future.
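XNA itself is a C# framework, so the sketch below only illustrates the single-code-base idea in Python: the game logic is written once against a thin platform layer, and only the platform object changes between targets. None of these class names come from the XNA Framework.

```python
# Illustration of the single-code-base idea (not XNA, which is C#): the game
# loop is written once against a thin platform interface, and the same logic
# runs on different hardware by swapping the platform object at startup.

class Platform:
    def present_frame(self, frame):
        raise NotImplementedError


class PCPlatform(Platform):
    def present_frame(self, frame):
        print(f"PC render: {frame}")


class ConsolePlatform(Platform):
    def present_frame(self, frame):
        print(f"Console render: {frame}")


def run_game(platform, frames=3):
    """The game logic is identical; only the platform object differs."""
    for tick in range(frames):
        platform.present_frame(f"frame {tick}")


run_game(PCPlatform())
run_game(ConsolePlatform())
```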


Desktop Game Client

Delivering simulations and games to the desktops of every soldier in the Army poses a number of challenges, one of which is whether the soldier on the receiving end can install a new application on his G-6-provided computer. Most users receive locked-down machines that do not allow any additional installation, but a library of games would require the user to have some ability to install a module or application unique to his needs. This is similar to the delivery of Flash-based or Java applet content in a web browser: if the Flash player is installed in the browser, then all Flash content that follows can be loaded and run without system admin privileges. To deliver game content to a controlled user, we need a trusted client application playing a role similar to the browser, with all games handled like Flash content. If every game-based training application were built on a single game engine (such as RealWorld), then the specific content could be delivered as data (like a new level in a game). However, physics additions to a game engine would still require new code (as in a dynamically linked library), and it is unlikely that all applications will come from a single game engine.

Universal distribution of simulation software in the military is going to require either installation by the system admin or the creation of a new kind of game client manager that can handle game applications much as a browser handles a Flash file or a Java applet.
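A rough sketch of what such a client manager might look like, assuming a single trusted application installed once by the admin that loads each training title purely as a data package (the package format and directory layout below are hypothetical):

```python
# Hypothetical "game client manager": one trusted application, installed once
# by the system admin, that loads training titles as data packages (analogous
# to a browser loading Flash content) rather than as new executables.

import json
import tempfile
from pathlib import Path


class GameClientManager:
    def __init__(self, content_dir):
        self.content_dir = Path(content_dir)

    def list_titles(self):
        """Discover available training packages (pure data, no new code)."""
        return sorted(p.stem for p in self.content_dir.glob("*.json"))

    def launch(self, title):
        """Load a package description and hand it to the built-in engine."""
        package = json.loads((self.content_dir / f"{title}.json").read_text())
        print(f"Loading terrain '{package['terrain']}' with "
              f"{len(package['entities'])} entities on the built-in engine")


# Usage: the manager is the only installed executable; new training content
# arrives as data files dropped into its content directory.
content_dir = Path(tempfile.mkdtemp())
(content_dir / "convoy_ambush.json").write_text(json.dumps(
    {"terrain": "desert_town", "entities": ["truck", "truck", "sniper"]}))

manager = GameClientManager(content_dir)
for title in manager.list_titles():
    manager.launch(title)
```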


Humans as Computing Devices

Luis von Ahn is a professor at Carnegie Mellon University. He has been studying the use of games to motivate people to do useful work. He refers to his tools as “Games with a Purpose”, a concept very similar to “Serious Games”. However, his programs focus on image recognition and categorization. They create a playful environment in which two players compete to identify what is shown in a picture. The descriptions they type into the game are captured on a server and become the text descriptions of the images. Each image is used in a number of game rounds to validate that the descriptors applied are agreed upon by multiple players. In his experiments he found that people played this game for many hours straight, some for as many as 12 hours a day. My own experiments with my children showed the same engaging behavior. Based on his game and the number of players it has attracted, he estimates that he could create tags for all of the images in Google Images in just five weeks.
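The core mechanic is simple enough to sketch. In this simplified reconstruction (not von Ahn’s actual code), only labels that both players typed are counted, and a label becomes a trusted tag once several independent pairs have agreed on it.

```python
# Simplified reconstruction of the labeling-game mechanic: two players
# describe the same image, only matching labels count, and a label becomes a
# trusted tag once enough independent pairs of players have agreed on it.

from collections import Counter, defaultdict

AGREEMENT_THRESHOLD = 3  # independent rounds of agreement before a tag is trusted
tag_votes = defaultdict(Counter)  # image -> {label: number of agreeing rounds}


def play_round(image, labels_a, labels_b):
    """Record every label that both players typed for the image."""
    for label in set(labels_a) & set(labels_b):
        tag_votes[image][label] += 1


def trusted_tags(image):
    return [label for label, votes in tag_votes[image].items()
            if votes >= AGREEMENT_THRESHOLD]


# Three player pairs see the same image and independently agree on "tank".
play_round("img_0042.jpg", {"tank", "desert", "dust"}, {"tank", "sand"})
play_round("img_0042.jpg", {"tank", "soldier"}, {"tank", "vehicle"})
play_round("img_0042.jpg", {"armor", "tank"}, {"tank"})
print(trusted_tags("img_0042.jpg"))  # ['tank']
```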

This idea is huge. It uses a gaming environment to motivate people to do valuable work – for free. In the military it might be used to categorize all of the intelligence/reconnaissance imagery on file. It could also be used to train people to identify what is in the images.

I highly recommend watching his lecture (51 minutes) and playing the two games he has designed.


Monday, November 5, 2007

Deploying HPC for Interactive Simulation


Panelists:
Roger Smith, CTO, U.S. Army Simulation and Training
Brian Goldiez, Deputy Director, UCF Institute for Simulation and Training
Dave Pratt, Chief Scientist, SAIC Simulation
Robert Lucas, Division Director, USC Information Sciences Institute
Eng Lim Goh, CTO, SGI

Introduction

The community of academia, industry, and government offices leading the development of new interactive simulations for training and analysis is reaching a point at which traditional networks of computing assets can no longer support simulation scenarios of sufficient scope, breadth, and fidelity. Several organizations are turning to high performance computing in the form of clusters and shared-memory machines to create a flexible computing platform that is powerful enough to run realistic models of military activities and very large scenarios as a matter of course. This BOF will discuss the problem space and the experiments that have been conducted in applying HPCs to this domain.

HPC Application to Interactive Simulation

Army Interactive Simulation (Roger Smith)

The Army training and simulation community, which includes TRADOC and PEO STRI, is currently limited in its ability to provide systems and opportunities for units to train from locations that are remote from existing training facilities. These limitations are driven by historical limits on technology and on our ability to design systems and training events that can be initiated at the request of a unit that wants to be trained. However, advances in networking, computing, and distribution services have created an opportunity to design systems that can be hosted at a powerful central facility but accessed, configured, and operated by the remote units that need to be trained.

We are exploring the ability to configure an HPC as a central server for simulation-based training. A OneSAF On-Demand HPC Training Center will be an “always on”, network-accessible, remotely configurable service for training. It will provide access to training in a manner similar to that delivered by vendors like Sun Microsystems’ Grid Compute Utility or Amazon.com’s Elastic Compute Cloud, both of which make hardware available on demand as a service to business customers. Portions of an HPC will be configured to make hardware available to as many as 200 simultaneous training units, but with the OneSAF software and scenario databases already installed. The graphic display of the activities will run as clients at the customer’s location. Configuring such a system will require tackling several issues: provisioning machines to specific customers, loading or modifying scenarios for each customer, and providing interactive stimulation of a large number of external client machines. The training organizations listed above have been exploring these issues on a smaller scale for a number of years. The availability of dedicated HPC hardware coincides with the FY07 release of OneSAF 1.0 and the maturing of a number of smaller projects, allowing us to take the next step in HPC-enabled, always-on, remotely accessible training.
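A very rough sketch of the provisioning piece, assuming the HPC is simply carved into node partitions that are checked out to training units on request and released when an exercise ends. The capacities and names are illustrative, not the planned OneSAF configuration.

```python
# Illustrative sketch of "always on" provisioning: HPC nodes are carved into
# partitions, each checked out to a training unit on request and released when
# the exercise ends. All capacity figures here are made up.

class TrainingCenter:
    def __init__(self, total_nodes, nodes_per_exercise):
        self.free_nodes = total_nodes
        self.nodes_per_exercise = nodes_per_exercise
        self.active = {}  # unit name -> node count checked out

    def request_exercise(self, unit, scenario):
        if self.free_nodes < self.nodes_per_exercise:
            return f"{unit}: no capacity available, request queued"
        self.free_nodes -= self.nodes_per_exercise
        self.active[unit] = self.nodes_per_exercise
        return (f"{unit}: {self.nodes_per_exercise} nodes loaded "
                f"with scenario '{scenario}'")

    def end_exercise(self, unit):
        self.free_nodes += self.active.pop(unit)


center = TrainingCenter(total_nodes=1000, nodes_per_exercise=5)
print(center.request_exercise("1-64 Armor", "urban_patrol"))
print(center.request_exercise("2-7 Infantry", "convoy_escort"))
center.end_exercise("1-64 Armor")
```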

Physics-Based Environment for Urban Operations (Dave Pratt)

You are invited to attend a capability demonstration. The description that follows is SAIC material and is not endorsed or validated by JFCOM. Please feel free to forward this invitation to anyone you feel should attend. Unfortunately, due to the venue, only those with US SECRET clearances will be able to attend. I also have a PPT with additional information, but I will not attach it here because it is 9 MB. Please contact me by email if you would like me to send it to you.

As part of the DoD High Performance Computing Modernization Program's (HPCMP) efforts to support the warfighter and demonstrate the effectiveness of HPC-class resources, a Mini-Portfolio has been established to demonstrate the applicability of physics-based modeling to realistic mission planning and scenario analysis. The Mini-Portfolio sets a new direction for DoD M&S by integrating traditional high-fidelity, computationally intensive models into operationally relevant scenarios. In doing so, we aim to advance the science by combining the physical, logical, and behavioral models that enable us to better understand military-relevant operations and their consequences in context (e.g., IEDs, urban combat, smoke, loss of signal) at high resolution and fidelity. By showing the effects of realistic enhancements to operationally relevant urban environments, made possible by introducing first-order physics models into the simulation, we increase both the believability and the usefulness of the models and simulations. Improved simulation accuracy is achieved by extending the existing simulation architecture to support selected traditional HPC-level models, and we have demonstrated the relevance and effect of these scientifically valid models within the warfighter context. To date, we have integrated the C4I (Scalable Urban Network Simulation, SUNS) and aerosol particulate transport (CT-Analyst) high-fidelity models into the OneSAF simulation context. The end result of this portfolio will be a system in which additional HPC researchers can demonstrate the effects of their computational advances in a warfighter-relevant environment.

Critical Questions (Brian Goldiez)

High performance computing, characterized by 64-bit word lengths, MPI, high-speed interconnects, large amounts of local and spinning memory, and appropriate operating system features (e.g., load balancing), has typically been reserved for batch processing. HPC machines are also typically procured for a specific class of problem that might need large amounts of cache, other memory, CPU cycles, or inter-processor communication. Interactive computing brings a new challenge with respect to end-to-end system latency and runtime parameter changes, where “end-to-end” implies input from a user and output back to that user in real time (say 30 Hz), and “parameter change” implies changing input variables during runtime.

It is not clear what type of architecture best supports interactivity while allowing accurate physical and behavioral representations of an ever-growing number of interacting entities in a virtual environment. More specific issues that need to be addressed include:

1. How can inter-core and inter-node communications be mapped to the various types of interactive simulation needs?
2. What strategies exist, or should be created, to partition interactive simulations such as terrain models or computer-controlled avatars that require interactions with human users?
3. Are special I/O devices and interconnects needed to distribute user inputs and integrate system outputs in order to facilitate interactivity?
4. Are the operating systems currently used in HPCs appropriate for interactivity, especially where fixed update rates may be needed?
5. How will interactive applications scale with various HPC architectures and operating systems?
6. Most batch-processing users of HPC build from existing, commercially available HPC applications (e.g., MATLAB for parallel machines). Are groups working on porting existing interactive applications to HPC platforms? If so, what techniques are being used? If not, how should the process be catalyzed?

Addressing these issues is relevant to the military, homeland security, events where large crowds of people are expected to interact, massively multiplayer game environments, and similar domains.
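One way to picture the latency and fixed-update-rate concerns above is as a frame budget: roughly every 33 ms the simulation must absorb user inputs, advance its models, and publish outputs, however the work is spread across processors. A toy single-process sketch of that constraint (not an HPC code) follows.

```python
# Toy illustration of the fixed-update-rate constraint (single process, not an
# HPC code): each ~33 ms frame must absorb inputs, advance the models, and
# publish outputs, or the human user experiences the added latency.

import time

FRAME_SECONDS = 1 / 30  # roughly 30 Hz end-to-end


def step_models(state, inputs):
    """Stand-in for entity, terrain, and behavior model updates."""
    state["tick"] += 1
    state["pending_inputs"] = len(inputs)
    return state


state = {"tick": 0, "pending_inputs": 0}
for frame in range(5):
    frame_start = time.perf_counter()
    inputs = []                      # would be gathered from user clients here
    state = step_models(state, inputs)
    # outputs would be published back to the user clients here
    elapsed = time.perf_counter() - frame_start
    if elapsed > FRAME_SECONDS:
        print(f"frame {frame}: overran the budget by {elapsed - FRAME_SECONDS:.4f} s")
    else:
        time.sleep(FRAME_SECONDS - elapsed)  # hold the fixed update rate
```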

Opportunities in HPC (Robert Lucas)

There is a continued call for better training, evaluation, and analysis, where “better” means faster, cheaper, more available, and of improved realism and validity. Those of us in the high performance computing research community see disparate groups working on issues of common concern and universal utility. Experiments such as Urban Resolve have successfully used Linux clusters to host large ensembles of agents interacting in real time with human users. Now we seek to use forces-modeling technology and HPC techniques both to enhance real-time analysis of intelligence information and to apply the lessons learned from our simulations to the real world. All of these uses will only be truly effective if interactive HPC becomes a readily available tool. Such use will require a number of adjustments and concessions to implement an interactive environment in a “batch processing” world. An outline of the challenges facing, as well as the opportunities afforded to, interactive HPC will be presented.

HPC Architectures (Eng Lim Goh)

“Today's methods of scientific and engineering investigation range from theoretical and experimental to computational science. In computational science, the classical approach has been modeling and simulation. The concern here is the growing gap between actual applications and peak compute performance. We believe one major solution to this growing performance gap is the new multi-paradigm computing architecture. It tightly integrates what were previously disparate computing architectures into a highly scalable single system and thus allows them to cooperate on the same data residing in scalable, globally addressable memory, enabling scientists to focus on science, not computer science.

“Additionally, with globally addressable memory growing to terascale sizes, a plethora of new, huge-memory applications that profoundly improve scientific and engineering productivity will come online. From these may emerge a new branch of computational science called data-intensive methods, which ranges from the traditional method of query to the more abstract methods of inference and even interactive data exploration. The availability of such a powerful range of interactive methods, operating on terascale data sets that all reside in monolithic, globally addressable memory, is a novel combination that will not only facilitate intended discoveries but may also give rise to a new complement which I will call ‘planned serendipity’. The latter will be of growing significance in intelligence, science, and engineering. And as the amount of data generated by faster and more productive systems grows, visualization will increasingly become an essential tool. Recent advances in display and related technologies could pave the way for revolutionary new ways of visual, interactive, and collaborative communication.”

“SGI’s focus on memory management stems from seeing a rising concern among our government customers about the deluge of data they are receiving. For various reasons, they are not able to exploit that data effectively. So we started looking at how we could leverage our current architecture to accelerate knowledge discovery.

“What we did was tinker with the idea of putting an entire database in memory. NUMAlink allows multiple nodes to be tied tightly together, so that all the memory pieces are seen as one. Once the processors can see all the memory across all nodes as a single memory, then they can load a large database entirely into that memory. So a complex query that would normally take seconds to return a response—because the disk query takes some time to scan the database—could be returned in under a second. When we went out with the idea, we got enthusiastic responses. We heard how it could fundamentally change the discovery process. When you ask questions with complex queries, you sit and wait for a response. It breaks the thinking process, because you might want to converge on an idea by quickly firing off questions and getting quick responses. You want to have a conversation with the data.”
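The “conversation with the data” point is easy to illustrate at a toy scale. The sketch below is plain Python, not SGI’s NUMAlink architecture: once the whole table is held in memory, each follow-up query is just another scan of memory-resident rows rather than a trip back to disk.

```python
# Trivial illustration of the in-memory query idea (plain Python, not SGI's
# NUMAlink shared-memory architecture): load the whole table once, then answer
# repeated ad hoc queries without going back to disk.

import csv
import io

# Stand-in for a large table that would otherwise be scanned from disk per query.
raw = io.StringIO("id,region,value\n1,east,10\n2,west,42\n3,east,7\n")
table = list(csv.DictReader(raw))  # the "load everything into memory" step


def query(predicate):
    """Each follow-up question is just another scan of memory-resident rows."""
    return [row for row in table if predicate(row)]


print(query(lambda r: r["region"] == "east"))
print(query(lambda r: int(r["value"]) > 20))
```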

SC07 Web Site Link: http://sc07.supercomputing.org/schedule/event_detail.php?evid=11317
