
Oklahoma Supercomputing Symposium 2008



Other speakers to be announced


KEYNOTE SPEAKER

José Muñoz
Deputy Office Director/Senior Scientific Advisor
Office of Cyberinfrastructure
National Science Foundation

Topic: "High Performance Computing and Cyberinfrastructure Activities at the National Science Foundation"

Slides: available after the Symposium

Talk Abstract

The National Science Foundation (NSF) has a long history of supporting High Performance Computing (HPC) and making the technology available to the open science and engineering communities. The NSF Cyberinfrastructure Vision document presents other CI components that are meant to complement the HPC investments and create an environment consistent with the needs of the 21st century. This presentation will discuss where we are in the HPC area as well as in the other CI Vision areas, in particular the new activities in Data, and where more work is required to achieve the CI Vision.

Biography

José Muñoz is Deputy Director of the Office of Cyberinfrastructure (OCI) at the National Science Foundation. Prior to coming to NSF in February 2004, Dr. Muñoz was Director of Simulation and Computer Science for the Advanced Simulation and Computing program at the US Department of Energy (DOE)/National Nuclear Security Administration (NNSA), and was at the Defense Advanced Research Projects Agency (DARPA) prior to DOE. Dr. Muñoz received his PhD in Computer Science from the University of Connecticut in 1984, and his BSc in Mechanical Engineering from New York University in 1967.


PLENARY SPEAKERS

Henry Neeman

Director
OU Supercomputing Center for Education & Research (OSCER)
University of Oklahoma

Topic: "OSCER State of the Center Address"

Slides:   PowerPoint   PDF

Talk Abstract

The OU Supercomputing Center for Education & Research (OSCER) celebrated its 7th anniversary on August 31, 2008. In this report, we examine what OSCER is, what OSCER does, and where OSCER is going.

Biography

Dr. Henry Neeman is the Director of the OU Supercomputing Center for Education & Research and an adjunct assistant professor in the School of Computer Science at the University of Oklahoma. He received his BS in computer science and his BA in statistics with a minor in mathematics from the State University of New York at Buffalo in 1987, his MS in CS from the University of Illinois at Urbana-Champaign in 1990 and his PhD in CS from UIUC in 1996. Prior to coming to OU, Dr. Neeman was a postdoctoral research associate at the National Center for Supercomputing Applications at UIUC, and before that served as a graduate research assistant both at NCSA and at the Center for Supercomputing Research & Development.

In addition to his own teaching and research, Dr. Neeman collaborates with dozens of research groups, applying High Performance Computing techniques in fields such as numerical weather prediction, bioinformatics and genomics, data mining, high energy physics, astronomy, nanotechnology, petroleum reservoir management, river basin modeling and engineering optimization. He serves as an ad hoc advisor to student researchers in many of these fields.

Dr. Neeman's research interests include high performance computing, scientific computing, parallel and distributed computing, structured adaptive mesh refinement and scientific visualization.

Michael Mascagni

Professor
Department of Computer Science
Florida State University

Plenary Topic: "Random Number Generation: A Practitioner's Overview"

Slides:   PDF

Plenary Talk Abstract

We will look at random number generation from the point of view of Monte Carlo computations. Thus, we will examine several serial methods of pseudorandom number generation and two different parallelization techniques. Among the techniques discussed will be "parameterization," which forms the basis for the Scalable Parallel Random Number Generators (SPRNG) library. SPRNG was developed several years ago by the author and has become widely used within the international Monte Carlo community. SPRNG is briefly described, and the lecture ends with a short review of quasirandom number generation. Quasirandom numbers offer many Monte Carlo applications the advantage of superior convergence rates.
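
For readers unfamiliar with parameterization: instead of splitting one stream across processes, each process receives its own independently parameterized generator. The sketch below is not SPRNG itself; it uses NumPy's seed-spawning facility to achieve the same goal of statistically independent per-worker streams, with an illustrative Monte Carlo estimate of pi.

    import numpy as np

    # Each worker gets an independent stream derived from one master seed,
    # the same goal SPRNG's parameterized generators serve (NumPy shown
    # here for illustration; this is not SPRNG).
    master = np.random.SeedSequence(20081007)
    streams = [np.random.default_rng(s) for s in master.spawn(4)]

    def partial_pi(rng, n=250_000):
        """One worker's Monte Carlo estimate of pi by dart-throwing."""
        x, y = rng.random(n), rng.random(n)
        return 4.0 * np.count_nonzero(x * x + y * y < 1.0) / n

    print(np.mean([partial_pi(rng) for rng in streams]))   # ~3.1416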

Breakout Topic: "Novel Stochastic Methods in Biochemical Electrostatics"

Slides:   PDF

Breakout Talk Abstract

We will present a Monte Carlo method for solving boundary value problems (BVPs) involving the Poisson-Boltzmann equation (PBE). Such BVPs arise in many situations that require the calculation of electrostatic properties of large solvated molecules. The PBE is one of the implicit solvent models, and it has accurately modeled electrostatics over a wide range of ionic solvent concentrations. We compare the algorithmic and computational properties of this new method to those of more commonly used deterministic techniques, and we present some computational results. This work is part of an ongoing collaboration with several Florida State University faculty members, students, and collaborators at the Russian Academy of Sciences and at the University of Toulon and Var.
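
To give a flavor of such stochastic solvers, the sketch below implements walk-on-spheres for the Laplace equation, the PBE's simplest relative; the talk's method for the full Poisson-Boltzmann equation is considerably more involved. The geometry, tolerance, and walk count here are illustrative choices, not the authors'.

    import numpy as np

    def walk_on_spheres(x0, dist, g, eps=1e-3, walks=20000, seed=0):
        """Estimate u(x0) for Laplace's equation with boundary data g.
        dist(x) must return the distance from x to the domain boundary.
        Each walk jumps to a uniform point on the largest circle centered
        at x that fits in the domain, until it lands within eps of the
        boundary; the boundary values are then averaged."""
        rng = np.random.default_rng(seed)
        total = 0.0
        for _ in range(walks):
            x = np.array(x0, dtype=float)
            while (d := dist(x)) > eps:
                theta = rng.uniform(0.0, 2.0 * np.pi)
                x += d * np.array([np.cos(theta), np.sin(theta)])
            total += g(x)
        return total / walks

    # Unit disk with boundary data g(x, y) = x: the harmonic solution is
    # u = x, so the estimate at (0.3, 0.2) should be close to 0.3.
    print(walk_on_spheres([0.3, 0.2],
                          dist=lambda x: 1.0 - np.linalg.norm(x),
                          g=lambda x: x[0]))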

Biography: coming soon

Stephen Wheat

Senior Director, High Performance Computing
Intel

Topic: "Insatiable versus Possible: Challenges on the Road to ExaScale"

Slides:   PDF

Talk Abstract

With the recent establishment of computational science as an equal third peer to theory and experimentation in the advancement of science and engineering, the appetite for computational capability has become insatiable. There seems to be no end to the rate of growth for the computational capability of the leading systems deployed annually around the world. Indeed, maintaining parity with the competition, let alone establishing and retaining leadership, has institutions continually on the move regarding their own computational capacity. In this talk, we will address how Moore's law plays into this phenomenon, what other observations we can make about the growth rate, and what barriers have been identified regarding continued growth over the next ten years. We'll address the implications of resolving these barriers, and we'll close by making the challenges and their resolutions personal to the individuals in the audience.

Biography

Dr. Stephen Wheat is the Senior Director of Intel's High Performance Computing Platform Organization. He is responsible for the development of Intel's HPC strategy and the pursuit of that strategy through platform architecture, software, tools, sales and marketing, and eco-system development and collaborations.

Dr. Wheat has a wide breadth of experience that gives him a unique perspective in understanding large scale HPC deployments. He was the Advanced Development manager for the Storage Components Division, the manager of the RAID Products Development group, the manager of the Workstation Products Group software and validation groups, and manager of the systems software group within the Supercomputing Systems Division (SSD). At SSD, he was a Product Line Architect and was the systems software architect for the ASCI Red system. Before joining Intel in 1995, Dr. Wheat worked at Sandia National Laboratories, performing leading research in distributed systems software, where he created and led the SUNMOS and PUMA/Cougar programs. Dr. Wheat is a Gordon Bell Prize winner and has been awarded Intel's prestigious Achievement Award. He holds a patent on dynamic load balancing in HPC systems.

Dr. Wheat holds a Ph.D. in Computer Science and has several publications on the subjects of load balancing, inter-process communication, and parallel I/O in large-scale HPC systems. Outside of Intel, he is a commercial multi-engine pilot and a certified multi-engine flight instructor.


BREAKOUT SPEAKERS

Joshua Alexander

HPC Application Software Specialist
OU Information Technology
University of Oklahoma

Topic: "Implementing Linux-enabled Condor in Multiple Windows PC Labs"
(with Horst Severini)

Slides:   PowerPoint   PDF

Talk Abstract

At the University of Oklahoma (OU), Information Technology is completing a rollout of Condor, a free opportunistic grid middleware system, across 775 desktop PCs in IT labs all over campus. OU's approach, developed in cooperation with the Research Computing Facility at the University of Nebraska-Lincoln, provides the full suite of Condor features, including automatic checkpointing, suspension, and migration, as well as I/O over the network to disk on the originating machine. These features are normally limited to Unix/Linux installations, but OU's approach allows them on PCs running Windows as the native operating system, by leveraging coLinux as a mechanism for providing Linux as a virtualized background service. With these desktop PCs otherwise idle approximately 80% of the time, the Condor deployment is allowing OU to get five times as much value out of its desktop hardware.

Biography

Joshua Alexander is a Computer Engineering undergraduate at the University of Oklahoma. He currently works with the Customer Services division of OU Information Technology, and also serves as an undergraduate researcher for the OU Supercomputing Center for Education & Research (OSCER). His current project for OSCER involves both the OU IT Condor pool and development of software tools for deploying Condor at other institutions.

John Antonio

Professor
School of Computer Science
University of Oklahoma

Topic: "Reconfigurable Versus Fixed Versus Hybrid Architectures"

Slides:   PowerPoint   PDF

Talk Abstract

Until recently, the use of reconfigurable computing has been limited primarily to embedded High Performance Computing (HPC) applications, for example, signal processing applications having extremely high-throughput data streams and intensive computational requirements. However, reconfigurable computing technology is now finding its way into systems used to support broader applications in the realm of HPC. This talk will include a brief overview of reconfigurable computing and also provide rationale for why reconfigurable computing is becoming more universally viable. An overview of the tools and techniques required to harness the full potential of reconfigurable computing resources will be provided. The talk will then focus on architectures that make use of both fixed and reconfigurable computational resources. An example hybrid multi-core architecture will be described that can be configured to optimally support a wide variety of computational requirements, ranging from independent threads to massively parallel applications requiring intensive inter-processor communications.

Biography

John K. Antonio is Professor of Computer Science at the University of Oklahoma. He received his BS, MS, and PhD degrees in Electrical Engineering from Texas A&M University. From 1999 to 2006, he served as Director and Professor of Computer Science at OU, and from 2006 to 2008 he was Director of the Institute for Oklahoma Technology Applications (IOTA) at OU. Before joining OU, he was on the faculty of Electrical and Computer Engineering at Purdue University, and he was also on the faculty of Computer Science at Texas Tech University. Dr. Antonio is a senior member of the Institute of Electrical and Electronics Engineers (IEEE), a member of the Association for Computing Machinery, and an elected member of the European Academy of Sciences. He is an Associate Editor of the journal IEEE Transactions on Computers. His academic research interests include: embedded high performance computing; low-power and power-aware computing; reconfigurable computing; parallel and distributed computing; and cluster computing. Dr. Antonio has co-authored over 90 publications and reports in these and related areas. Numerous agencies and companies have supported his research over the years; he has been PI or Co-PI on more than twenty sponsored research projects totaling more than $2M. In his role as Director of IOTA, he managed and helped grow a diverse research portfolio with over $7M in expenditures and over $20M in force in annual funding.

Keith Brewster

Senior Research Scientist
Center for Analysis & Prediction of Storms
University of Oklahoma

Topic: "Using the LEAD Portal for Customized Weather Forecasts on the TeraGrid"

Slides:   PowerPoint   PDF

Talk Abstract

The Linked Environments for Atmospheric Discovery (LEAD) portal is a web tool for accessing and visualizing weather data, providing instruction in meteorology, and conducting numerical weather forecasting experiments. As part of the Storm Prediction Center's annual Spring Program, the Center for Analysis and Prediction of Storms (CAPS) used the LEAD portal to generate customized high resolution numerical weather forecasts focused on the region under threat for that day. The forecasts ran on the Big Red TeraGrid machine at Indiana University, and the interaction with the TeraGrid resources was handled seamlessly by the LEAD portal. Post-processing was done on Topdawg at the OU Supercomputing Center for Education & Research (OSCER), and results were available in real time for the experiment forecasters to discuss in the daily weather briefing.

Biography

Keith Brewster is a Senior Research Scientist at the Center for Analysis and Prediction of Storms at the University of Oklahoma and an Adjunct Associate Professor in the OU School of Meteorology. His research involves data assimilation of advanced observing systems for high resolution numerical weather analysis and prediction, including data from Doppler radars, satellites, wind profilers, aircraft and surface mesonet systems. He earned an M.S. and Ph.D. in Meteorology from the University of Oklahoma and a B.S. from the University of Utah.

Dana Brunson

Senior Systems Engineer
High Performance Computing Center
Oklahoma State University

Topic: "Birds of a Feather Session: So You Want to Deploy a Production Cluster"
(with Jeff Pummill)

Slides:   PDF

BoF Abstract:

This BoF is intended as an introduction to the many components that make up a contemporary cluster environment. The presentation and accompanying discussion will address topics such as: how to choose hardware type(s), the various software stacks available, pros and cons of the various applications that are used on clusters, administrative tips and tricks, user support advice, and hopefully a lively debate at the end. This BoF is not intended to define what should and should not be deployed; rather, we will present the many factors and considerations involved in deploying a successful cluster, and we will outline the various rewards and pitfalls along the way.

Biography

Dana Brunson oversees the High Performance Computing Center at Oklahoma State University. Before transitioning to High Performance Computing in the fall of 2007, she taught mathematics and served as systems administrator for the Mathematics Department at OSU. She earned her Ph.D. in Numerical Analysis at the University of Texas at Austin in 2005 and her M.S. and B.S. in Mathematics from Oklahoma State University.

Karen Camarda

Associate Professor
Department of Physics & Astronomy
Washburn University

Topic: "Supercomputing at a Non-PhD Granting Institution"

Slides: PDF

Talk Abstract:

With funds from an internal "Innovation Grant," faculty members at Washburn University are developing a High Performance Academic Computing Environment (HiPACE). HiPACE was proposed to enrich and enliven the science and technology education of all Washburn students, to support the scholarly activities of Washburn faculty, students, and staff, and to open outreach opportunities by making advanced technology training available to K-12 educators. In this talk, I will discuss the progress being made in fulfilling the goals of HiPACE, as well as some difficulties that have arisen.

Biography

Dr. Karen Camarda is an associate professor in the Department of Physics & Astronomy at Washburn University. She received her BS in physics from the University of California San Diego in 1991, her MS in physics from the University of Illinois at Urbana-Champaign in 1992, and her PhD in physics from UIUC in 1998. Dr. Camarda serves as the chair of the steering committee for Washburn University's High Performance Academic Computing Environment (HiPACE). Her research interests lie in the field of numerical relativity.

Wesley Emeneker

Graduate Research Assistant
Department of Computer Science & Computer Engineering
University of Arkansas

Topic: "Cluster Scheduling: Making Everybody Happy All the Time (Yeah Right!)"

Slides: available after the Symposium

Talk Abstract

Scheduling cluster jobs is more art than science. Policies designed to run as many jobs as possible negatively impact large jobs. Policies designed to accommodate large jobs work at the expense of decreased utilization. Furthermore, users figure out how to work the system to make their jobs run as soon as possible. In this talk, we look at a few scheduling policies, and see how they affect both users and jobs. Additionally, we look at some research being done at the University of Arkansas to predict future workloads based on current data.
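
To make the trade-off concrete, the toy below contrasts plain FCFS with an EASY-style backfill pass: a later small job may start early only if it will finish before the blocked head job's reserved start time. This is a deliberately simplified sketch (all jobs queued at time zero, made-up job sizes), not any production scheduler's algorithm.

    import heapq

    def simulate(jobs, total_cores, backfill=True):
        """Toy scheduler: jobs is a list of (cores, runtime) tuples in
        arrival order, all queued at time zero, and every job is assumed
        to fit the machine. Returns the makespan."""
        queue = list(jobs)
        running = []                     # min-heap of (finish_time, cores)
        free, now = total_cores, 0.0
        while queue or running:
            # Pure FCFS: start head-of-queue jobs while they fit.
            while queue and queue[0][0] <= free:
                cores, runtime = queue.pop(0)
                free -= cores
                heapq.heappush(running, (now + runtime, cores))
            if backfill and queue:
                # "Shadow" time: when enough cores free up for the head job.
                need, avail, shadow = queue[0][0], free, now
                for finish, cores in sorted(running):
                    avail += cores
                    shadow = finish
                    if avail >= need:
                        break
                # Backfill one later job that fits now and will finish
                # before the shadow, so the head job is never delayed.
                for i, (cores, runtime) in enumerate(list(queue)[1:], start=1):
                    if cores <= free and now + runtime <= shadow:
                        queue.pop(i)
                        free -= cores
                        heapq.heappush(running, (now + runtime, cores))
                        break
            # Advance time to the next completion.
            if running:
                finish, cores = heapq.heappop(running)
                now, free = finish, free + cores
        return now

    # Three cores; the 1-core job backfills into the gap created while the
    # 3-core job waits, cutting the makespan from 30 to 20 time units.
    jobs = [(2, 10.0), (3, 10.0), (1, 10.0)]
    print(simulate(jobs, 3, backfill=False), simulate(jobs, 3, backfill=True))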

Biography

Wesley Emeneker is a Ph.D. student at the University of Arkansas in Fayetteville. He is the principal architect of Dynamic Virtual Clustering, a system that uses virtual machines to run cluster jobs. His current research aims to describe exactly how virtual machines impact application performance, and how that knowledge can be applied to batch scheduling to improve the cluster "experience" for users and jobs.

Jeni Fan

Graduate Research Assistant
Department of Psychology
University of Oklahoma

Topic: "Integrating Bayesian Inference with Semantics"

Slides: available after the Symposium

Talk Abstract

HyGene (Thomas, Dougherty, Sprenger, & Harbison, 2008) is a theory of hypothesis generation, evaluation, and testing that bridges traditional theory and research in judgment and decision making with research in ecological and cognitive psychology. This presentation will briefly review HyGene, but will primarily focus on future research directions that require supercomputing. Specifically, we will discuss a project aimed at developing a model of hypothesis generation and judgment that merges our Bayesian-like hypothesis generation model (HyGene) with state-of-the-art high-dimensional models of semantic knowledge (e.g., Landauer & Dumais, 1997). Our goal is to develop a model of hypothesis generation and judgment that capitalizes on the power of Bayesian reasoning while exploiting the semantics inherent in natural language.
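
As a cartoon of the proposed merger (illustrative only, not the HyGene implementation), the sketch below derives hypothesis priors from semantic similarity in a random vector space and then updates them by Bayes' rule; all vectors and likelihoods are made up.

    import numpy as np

    rng = np.random.default_rng(1)
    context = rng.normal(size=50)           # vector for the observed context
    hypotheses = rng.normal(size=(4, 50))   # one vector per candidate hypothesis

    def cosine(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

    # Semantic prior: hypotheses nearer the context in the vector space
    # are generated (weighted) more readily.
    sims = np.array([cosine(context, h) for h in hypotheses])
    prior = np.exp(sims) / np.exp(sims).sum()

    # Bayesian update with made-up likelihoods P(evidence | hypothesis).
    likelihood = np.array([0.9, 0.3, 0.2, 0.1])
    posterior = prior * likelihood / (prior * likelihood).sum()
    print(posterior.round(3))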

Biography

Born in Beijing, China, Jeni Fan moved to the Washington DC metro area at age 9. She earned her BS in Psychology at the University of Maryland in 2005 and her MS in Cognitive Psychology at the University of Oklahoma in 2008. Between these degrees, she worked for Bloomberg LP in New York City. Her areas of interest are Decision Theory and Behavioral Economics, focusing on (1) investigating the effects of context on decision making and option evaluation; (2) assessing the role of utility in hypothesis generation, evaluation, and search, as well as choice behavior; (3) working with the Department of Economics (Game Theory specifically) to establish an integration between classic economic theories and more behavioral decision making phenomena.

Robert Ferdinand

Associate Professor
Department of Mathematics
East Central University

Topic: "Finite Element Solution of a Groundwater Contaminant Model"

Slides:   PDF

Talk Abstract

The model presented takes the form of a coupled system of two nonlinear partial differential equations describing the dynamics of a contaminant in groundwater flowing through fissures (cracks) in a rock matrix: the contaminant travels and diffuses along the length of the fissure and also into the surrounding rock matrix. The diffused contaminant can reach a water body that serves, for example, as a drinking water source for local residents and/or livestock, thereby posing a health hazard. A new feature of this model is an added dimension in the rock matrix diffusion term. A Galerkin finite element method on a triangulation is used to approximate the model solution in the L2 norm. In the future, it is hoped that this scheme can be used to estimate model parameters via an inverse method procedure. Both the solution and the parameter approximation will require large amounts of computation.
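
The abstract does not reproduce the model equations, so purely for orientation, here is the generic shape of a Galerkin finite element discretization for a single diffusion-reaction equation of the kind the coupled model comprises:

    Seek u_h(x,t) = \sum_j c_j(t) \phi_j(x), with \phi_j the finite element
    basis on the triangulation, such that for every test function \phi_i:

        \int_\Omega \frac{\partial u_h}{\partial t}\, \phi_i \, dx
          + \int_\Omega D \, \nabla u_h \cdot \nabla \phi_i \, dx
          = \int_\Omega f(u_h)\, \phi_i \, dx .

This reduces the PDE to a system of ordinary differential equations for the coefficients c_j(t); the L2-norm convergence mentioned above measures the distance between the true solution and this u_h.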

Biography

Robert Ferdinand obtained his PhD in Applied Mathematics from the University of Louisiana in 1999. His areas of interest include mathematical modeling of physical and biological processes, in which numerical schemes are used to computationally approximate model solutions: for example, the inverse method is applied to numerically estimate model parameters, which involves substantial computing. His theoretical work involves perturbation techniques to investigate long-term behavior of model solutions.

Larry Fisher

Owner
Creative Consultants

Topic: "Careers in a Creative Destruction World"

Slides:   PowerPoint   PDF

Talk Abstract

Creative destruction occurs when a product of lower quality and lower cost replaces a product of higher quality because of its convenience and/or attractiveness. Existing companies fall victim to creative destruction because they are too busy maintaining the status quo; they never see it coming. Several examples and stories of the impact of creative destruction will be provided, including potential creative destruction products such as buckyballs, carbon nanofibers, HHO gas, and UAVs. You will be encouraged to identify creative destruction events in the computer and software industry.

Biography

Larry Fisher is a retired state employee with over 30 years of management development and training experience. He has designed and taught courses nationally for the US Air Force; the US Postal Service; the Domestic Policy Association; the University of Oklahoma; Oklahoma State University; Wichita State University; Oklahoma Gas and Electric, the Municipal Electric Systems of Oklahoma; the states of Oklahoma, Kansas, South Dakota, Texas, and Ohio; and the Kiowa, Seminole, and Cheyenne/Arapaho Indian Nations. He administered statewide management training for the state of Oklahoma. For 16 years, he worked in a variety of professional positions for the University of Oklahoma. He is known nationally through memberships and as an officer in the American Society for Training and Development, the National Association for Government Training and Development, and the International Association for Continuing Education and Training. He currently teaches management and related topics to a variety of clients including the Keller Graduate School of Management at DeVry University, Municipal Electric Systems of Oklahoma, Tinker Air Force Base, Rose State College, and many others. He has a BS degree in Chemistry from Oklahoma State University and a Masters in Public Administration from the University of Oklahoma.

Dan Fraser

Senior Fellow
Computation Institute
University of Chicago

Topic: "What Happens When Cloud Computing Meets High Performance Computing"

Slides:   PowerPoint   PDF

Talk Abstract

The computing community is still struggling to comprehend not only how to fully realize the promise of cloud computing, but what that promise actually is. Even less understood is how cloud technology might work together with high performance computing (HPC). To explore these issues, the relationship between HPC and cloud computing is considered, along with the value propositions provided by each. Insights from this analysis are then used to examine several possible HPC-cloud scenarios. The Globus Toolkit is relevant to these discussions and will be introduced both as an enabler of Grid technology and as a provider of an open-source capability similar to Amazon's Elastic Compute Cloud.

Biography

Dan Fraser is a Senior Fellow at the Computation Institute at the University of Chicago. Currently he is PI of the "Real Time Analysis of Advanced Photon Source Data" project and is also Director of the Community Driven Improvement of Globus Software program for the National Science Foundation. Formerly he was the Senior Architect for Grid Middleware at Sun Microsystems and the creator of Sun's Technical Computing Portal. He has a PhD in Physics from Utah State University and over a decade of experience working with high performance science and commercial applications.

Roger Goff

HPC Architect
Sun Microsystems

Topic: "Managing Mountains of Data in Large Scale HPC Systems"

Slides: PDF

Talk Abstract

One of the biggest challenges facing designers and users of HPC solutions today is managing the flow of the ever-increasing amount of data being processed. While parallel filesystems are maturing and are being used more broadly, they only solve part of the problem. A more holistic approach is required, one that encompasses not only the need for fast scratch space but also the requirements for archival, visualization, and end users. A proven solution that fits today's environments will be presented, along with a look at future directions for HPC data management technologies.

Biography

Roger Goff (roger.goff@sun.com) is an HPC Architect in the Systems Practice at Sun Microsystems. Roger has been in the high performance technical computing business since 1987. His current interests include high performance computing clusters and storage solutions for large-scale HPC systems. Roger has designed numerous systems that debuted in the top 100 of the Top500 list of the fastest supercomputers, including four that debuted in the top twelve and two in the top five. Roger has presented multiple times at the Linux World Conference and Expo, HP Users Group meetings, Dell Technology Summits, and Sun Customer Engineer Conferences. Roger has M.S. and B.S. degrees in Computer Science from Virginia Tech.

Paul Gray

Associate Professor
Department of Computer Science
University of Northern Iowa

Topic: "High Performance Computing in the Core Computer Science Curriculum"

Slides:   PowerPoint   PDF

Talk Abstract

Hardware chipset vendors aren't making single-core chips for end-user systems any more, so why are we still focusing on single-core architectures in our computer science curriculum? High Performance Computing (HPC) is moving from multi-core to many-core, and from traditional architectures to specialized processors such as GPUs.

This talk will discuss active efforts to infuse the undergraduate curriculum with HPC in a way that is commensurate with the momentum of the HPC community. We will also take a brief tour of the tools and software needed to prepare the next generation of researchers.
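
As one data point for what parallelism in the core curriculum can look like, a first exercise might be as small as the sketch below (an illustrative example, not material from the talk): an embarrassingly parallel count of primes spread across all available cores.

    from multiprocessing import Pool

    def count_primes(bounds):
        """Count primes in [lo, hi) by trial division."""
        lo, hi = bounds
        return sum(all(n % d for d in range(2, int(n ** 0.5) + 1))
                   for n in range(max(lo, 2), hi))

    if __name__ == "__main__":
        chunks = [(i, i + 25_000) for i in range(0, 100_000, 25_000)]
        with Pool() as pool:                    # one worker per core
            print(sum(pool.map(count_primes, chunks)))   # 9592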

Biography

Paul Gray is an Associate Professor of Computer Science at the University of Northern Iowa. He is the chair of the SC (SuperComputing) Conference Education Program and instructs summer workshops on parallel computing education as part of the Supercomputing Education Program efforts. His current efforts combine the Open Science Grid and TeraGrid with educational endeavors that revolve around LittleFe, bringing aspects of grid computing into the high school and undergraduate curriculum.

Tim Handy

Undergraduate Student
Department of Engineering & Physics
University of Central Oklahoma

Topic: "Computational Aspects of Modeling Fluid Flow in Micro-junctions"

Slides:   PDF

Talk Abstract

This talk will focus on the computational efforts that have been undertaken by a research group at the University of Central Oklahoma, along with collaborators at the University of Oklahoma, to model and simulate laminar flow in microtubes and junctions. In particular, the set of programs and scripts developed to automate the various processes involved in the large number of computational fluid dynamics (CFD) runs required by this research project will be described. This discussion will offer other researchers facing similar problems some solution techniques for automating geometry generation and other CFD-related tasks.
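
The flavor of such automation is sketched below with made-up file names and a placeholder solver command ("cfdsolve" and "mesh_template.geo" are hypothetical, not the group's actual tools): template a geometry file per parameter combination, launch the solver, and keep the log for post-processing.

    import itertools, pathlib, subprocess

    # Hypothetical sweep: the template and solver binary are placeholders.
    TEMPLATE = pathlib.Path("mesh_template.geo").read_text()

    for diameter, angle in itertools.product([10, 20, 50], [30, 60, 90]):
        run_dir = pathlib.Path(f"run_d{diameter}_a{angle}")
        run_dir.mkdir(exist_ok=True)
        # 1. Generate the junction geometry for this case.
        (run_dir / "mesh.geo").write_text(
            TEMPLATE.format(diameter_um=diameter, branch_angle_deg=angle))
        # 2. Launch the solver, keeping its log for post-processing.
        with open(run_dir / "solver.log", "w") as log:
            subprocess.run(["cfdsolve", "mesh.geo"], cwd=run_dir,
                           stdout=log, stderr=subprocess.STDOUT, check=True)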

Biography

Tim Handy is a senior engineering physics major at the University of Central Oklahoma. His current research interests include loss coefficients for laminar flow in microbifurcations and flow through porous media. His current plan is to attend graduate school in mechanical engineering or computational mathematics.

Takumi Hawa

Assistant Professor
School of Aerospace & Mechanical Engineering
University of Oklahoma

Topic: "Nanoparticle Synthesis and Assembly from Atomistic Simulation Studies"

Slides:   PDF

Talk Abstract

Some 75% of chemical manufacturing processes involve fine particles at some point. Proper design and handling of these fine particles often makes the difference between success and failure. Careful attention to particle characteristics during the design and operation of a facility can significantly improve environmental performance and increase profitability by improving product yield and reducing waste. Fabricating particles of the desired size and structure, with a narrow size distribution, is seen as one of the major challenges in turning nanoscience into a robust nanotechnology. The two most obvious ways to control the size of primary particles grown from the vapor are either to change the characteristic collision time by dilution or to change the sintering time by changing particle temperature.

In this talk, sintering of nanoparticle aggregates with fractal dimensions of 1 (wire), 1.9 (complex), and 3 (compact) is investigated using molecular dynamics simulations. The sintering times, normalized by the primary particle diameter, show a universal relationship that depends only on the number of particles in an aggregate and its fractal dimension. This result is found to be consistent with a continuum viscous flow mathematical model that we developed. The results for the sintering of arbitrary fractal aggregates can be approximated with a power law modification of the Frenkel viscous flow equation that includes a dependence on the number of particles in a fractal aggregate and on the fractal dimension.

The role of surface passivation on the rate of nanoparticle sintering is also considered. The presence of hydrogen on the surface of a particle significantly reduces surface tension. In general, the entire sintering time of coated particles is about 3 to 5 times that of bare particles, and the viscous flow model describes the dynamics of sintering of coated particles. Hydrogen coating is also applied to control the shape of the nanoparticle.

Finally, electrostatically directed nanoparticle assembly on a field-generating substrate is studied. Brownian motion and fluid convection of nanoparticles, as well as the interactions between the charged nanoparticles and the patterned substrate, including electrostatic force, image force, and van der Waals force, are accounted for in the simulation. Coverage selectivity is most sensitive to the electric field, which is controlled by the applied reverse bias voltage across the p-n junction.
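
For orientation, Frenkel's classical viscous flow argument gives a characteristic coalescence time on the order of t_f ~ \eta d_p / \gamma (melt viscosity times primary particle diameter over surface tension). The power law modification described above can therefore be written schematically as

    t_s \;\approx\; \frac{\eta\, d_p}{\gamma}\; N^{\alpha(D_f)} ,

where N is the number of primary particles in the aggregate and the exponent \alpha depends on the fractal dimension D_f. The exact exponent and prefactor are results of the authors' model and are not reproduced here.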

Biography

Takumi Hawa received his B.S., M.S., and Ph.D. in Aeronautical Engineering in 1994, 1997, and 1999 from Rensselaer Polytechnic Institute. During a postdoctoral fellowship at the Institute for Mathematics and Its Applications at the University of Minnesota between 1999 and 2001, he studied microfluidics and detonation. In 2001, he joined the Center for NanoEnergetics Research, also at the University of Minnesota, where he used molecular dynamics simulation to study hydrogen surface-passivated silicon nanoparticles as a means of controlling nanoparticle size. In 2003, he moved to the University of Maryland and the National Institute of Standards & Technology as a guest researcher and has been studying various passivation surfaces for nanoparticles, such as solid coatings and SAMs (polymer). In fall 2008, he became an Assistant Professor in the School of Aerospace & Mechanical Engineering at the University of Oklahoma, and he currently studies energetic materials, AFM tip interaction with a substrate, and the phase stability of hydrogen-coated silicon nanoparticles as a function of size and shape.

Scott Lathrop

Blue Waters Technical Program Manager for Education
Area Director for Education, Outreach & Training
TeraGrid

Topic: "HPC University"

Slides:   PowerPoint   PDF

Talk Abstract

HPC University (HPCU) is a virtual organization focused on high-quality, high-performance computing (HPC) learning and workforce development activities and resources. HPC University is designed to address the needs of a large and diverse community that includes: K-20 educators and students; undergraduate faculty and students; graduate, post-doc and senior researchers; administrators; and practitioners in all fields of study related to HPC. The content ranges from introductory computational science tools and resources, to petascale level performance of scientific research codes.

During the Symposium, there will be a discussion of the HPCU requirements analysis, implementation, and dissemination plans. There will be a question and answer period to solicit additional community input and foster increased collaboration and participation within the community. We invite all interested organizations to join in developing effective strategies for expanding and scaling-up the opportunities to best serve the computational science and HPC needs of research and education communities.

Biography

Scott Lathrop splits his time between being the TeraGrid Director of Education, Outreach & Training (EOT) at the University of Chicago/Argonne National Laboratory, and being the Blue Waters Technical Program Manager for Education for the National Center for Supercomputing Applications (NCSA). Lathrop has been involved in high performance computing and communications activities since 1986. Lathrop coordinates education, outreach and training activities among the eleven Resource Providers involved in the TeraGrid project. He coordinates undergraduate and graduate education activities for the Blue Waters project. Lathrop is Co-PI on the National Science Foundation (NSF) funded Computational Science Education Reference Desk (CSERD), a Pathways project of the National Science Digital Library (NSDL) program. Lathrop coordinated the creation of the SC07-10 Education Program through the SC Conference (Supercomputing 20XX) to assist undergraduate faculty and high school teachers with integrating computational science resources, tools, and methods into the curriculum.

Evan Lemley

Professor
Department of Engineering & Physics
University of Central Oklahoma

Topic: "Computational Aspects of Modeling Fluid Flow in Micro-junctions"

Slides:   PDF

Talk Abstract

This talk will focus on the computational efforts that have been undertaken by a research group at the University of Central Oklahoma, along with collaborators at the University of Oklahoma, to model and simulate laminar flow in microtubes and junctions. In particular, the set of programs and scripts developed to automate the various processes involved in the large number of computational fluid dynamics (CFD) runs required by this research project will be described. This discussion will offer other researchers facing similar problems some solution techniques for automating geometry generation and other CFD-related tasks.

Biography

Evan Lemley received his BA in Physics from Hendrix College and his MS and Ph.D. in Engineering (Mechanical) from the University of Arkansas. His thesis work focused on the modeling and simulation of various neutron detectors. After graduation, Evan worked for the engineering consulting firm Black & Veatch in a group responsible for modeling coal power plants with custom-written software.

In August 1998, Evan became an Assistant Professor in the Department of Engineering and Physics (formerly Physics) at the University of Central Oklahoma, and has been there since, teaching mechanical engineering, physics, and engineering computation courses. Early research at UCO was focused on neutron transport in materials. More recently, Evan has been involved in simulation of flow in microtubes and microjunctions and simulation of flow in porous networks.

William Lu

Director, Industry Marketing
Platform Computing

Topic: "Using and Managing HPC Systems"

Slides:   PDF

Talk Abstract
Commodity hardware has become the affordable building block for HPC systems. Meanwhile, scalable systems are complex to use and to manage. Platform Computing leverages its 16 years of expertise to deliver an end-to-end, affordable HPC management solution to truly unleash the power of HPC.


Biography
As marketing director at Platform Computing, William Lu is focused on government, research, education, and the electronics industry. Due to his deep technical background and strong HPC experience, he also leads a team of architects at Platform to develop technical solutions. During his 13-year tenure at Platform Computing, William has worked in product development, professional services, systems engineering, and marketing. William also has four years of HPC experience at CERN and the University of Texas. William has a Ph.D. in high energy physics.

Kyran (Kim) Mish

Director
Fears Structural Engineering Laboratory
Presidential Professor of Structural Engineering
School of Civil Engineering & Environmental Science
University of Oklahoma

Topic: "Virtual Canaries in Virtual Coalmines: Detecting Infrastructure Damage via Computational Engineering"

Slides: available after the Symposium

Talk Abstract

Monitoring the health of our shared transportation infrastructure is one of the most important research topics in civil engineering today, but the problem is complicated by the fact that real-world damage can quickly lead to collapse of infrastructure, with attendant loss of life and property. The Infrastructure Institute at the University of Oklahoma is currently using advanced simulation techniques from the field of computational engineering to design and construct custom sensor technologies specifically tailored to detect various forms of structural damage before they can cause serious problems. Example applications using fatigue and other damage models will be presented, along with a general overview of the underlying technical problem and the attendant high-performance computing technologies utilized to solve it.

Biography: coming soon

Greg Monaco

Executive Director
Great Plains Network

Topic: "Roundtable: Great Plains Network Regional CI Planning (Initial Meeting)"

Slides: available after the Symposium

Roundtable Abstract

The Great Plains Network membership started with universities in 7 midwestern states, and has expanded to include universities in Iowa and Minnesota, as well as the state research and education network of Wisconsin. There is great potential to share resources and to collaborate across this wide area, and this roundtable will be used to develop preliminary recommendations for the region: to identify potential new areas for future regional collaboration, and to find ways to pool and share resources.

Biography

Dr. Greg Monaco has held several positions with the Great Plains Network since August 2000, when he joined GPN. He began as Research Collaboration Coordinator, and then was promoted to Director for Research and Education. Greg is currently the Executive Director of GPN.

Jeff Pummill

Senior HPC Administrator
Arkansas High Performance Computing Center
University of Arkansas

Topic: "Birds of a Feather Session: So You Want to Deploy a Production Cluster"
(with Dana Brunson)

Slides: available after the Symposium

BoF Abstract:

This BoF is intended as an introduction to the many components that make up a contemporary cluster environment. The presentation and accompanying discussion will address topics such as: how to choose hardware type(s), the various software stacks available, pros and cons of the various applications that are used on clusters, administrative tips and tricks, user support advice, and hopefully a lively debate at the end. This BoF is not intended to define what should and should not be deployed; rather, we will present the many factors and considerations involved in deploying a successful cluster, and we will outline the various rewards and pitfalls along the way.

Biography
Jeff Pummill is the Senior HPC Administrator for the Arkansas High Performance Computing Center at the University of Arkansas. Prior to his position at the UofA, he spent 13 years in the fields of mechanical design and structural analysis, while also maintaining a large number of Unix workstations and a small Linux cluster used for Finite Element Analysis. His current areas of interest include hardware architectures, resource managers, compilers, and benchmarking tools. He is also the TeraGrid Campus Champion for the University of Arkansas.

Jeff Rufinus

Associate Professor
Sciences Division
Widener University

Topic: "ABINIT: An Open Source Code for Materials Scientists, Computational Chemists, and Solid State Physicists"

Slides: available after the Symposium

Talk Abstract

ABINIT is an open-source code that can be used to perform ab initio calculations on materials ranging from atoms to clusters to periodic structures. In this talk, the ABINIT code will be introduced, and examples will be given.

Biography

Jeff Rufinus obtained his Ph.D. in physics from the University of Wisconsin-Madison. Jeff has been teaching computer science and computational science at Widener University for the past 8 years. His research interests include magnetic properties of diluted semiconductors, spintronics and nanotechnology.

Susan J. Schroeder

Assistant Professor
Department of Chemistry & Biochemistry
University of Oklahoma

Topic: "Progress Towards Predicting Viral RNA Structure from Sequence: How Parallel Computing can Help Solve the RNA Folding Problem"

Slides:   PowerPoint2007   PowerPoint2003   PDF

Talk Abstract

As genome sequencing projects produce increasingly vast amounts of data, the need for tools to interpret genomic sequence information at a structural level becomes increasingly urgent. Ribonucleic acid (RNA) plays important roles in the processing, regulation, and transformation of genetic information in cells. RNA folds into three-dimensional structures and thus achieves specificity in molecular recognition and enzymatic activity. Many viruses have RNA genomes, such as HIV, flu, and hepatitis. Satellite tobacco mosaic virus is a small plant virus and a good starting point for improving predictions of viral RNA. Viral RNA changes conformation during replication, translation, and encapsidation. A single minimum free energy structure is inadequate to describe dynamic viral RNA structures. Incorporating experimental restraints from crystallography and chemical modification can improve prediction of different functional viral RNA structures. The Wuchty algorithm uses free energy minimization to calculate all possible structures for a given RNA sequence within a narrow energy increment. Modifications to the Wuchty algorithm include nucleotide-specific restraints from chemical modification experiments, global restraints from crystallographic data, and parallelization of the computation. These modifications will enable wider exploration of the RNA folding landscape. More information about viral RNA structures will enable rational design of small molecules or RNA interference strategies to inhibit viral genome encapsidation and propagation.
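
For readers new to RNA secondary structure prediction, the dynamic programming skeleton that algorithms in this family share can be seen in the toy Nussinov base-pair-maximization recursion below. This sketch is pedagogical, not the talk's code: the Wuchty algorithm instead minimizes nearest-neighbor free energies and enumerates all structures within an energy increment, which is what makes the parallelization described above worthwhile.

    def nussinov(seq, min_loop=3):
        """Toy RNA folding: maximize base pairs (Nussinov recursion).
        A pedagogical stand-in only; the Wuchty algorithm minimizes
        free energy and keeps all structures within an energy increment
        rather than one optimum."""
        pairs = {("A", "U"), ("U", "A"), ("G", "C"),
                 ("C", "G"), ("G", "U"), ("U", "G")}
        n = len(seq)
        dp = [[0] * n for _ in range(n)]
        for span in range(min_loop + 1, n):          # span = j - i
            for i in range(n - span):
                j = i + span
                best = dp[i][j - 1]                  # j left unpaired
                for k in range(i, j - min_loop):     # j pairs with k
                    if (seq[k], seq[j]) in pairs:
                        left = dp[i][k - 1] if k > i else 0
                        best = max(best, left + 1 + dp[k + 1][j - 1])
                dp[i][j] = best
        return dp[0][n - 1] if n else 0

    print(nussinov("GGGAAAUCC"))   # 3 nested pairs: G-C, G-C, G-U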

Biography

Susan Schroeder began exploring RNA energetics, structure, and function as an undergraduate in Dr. Douglas Turner's lab at the University of Rochester in Rochester, NY. As a graduate student in Dr. Turner's lab, she continued to study the thermodynamic stability of RNA internal loops and the structure of these loops by nuclear magnetic resonance spectroscopy (NMR). As an NIH postdoctoral fellow in Dr. Peter Moore's lab at Yale University, she learned x-ray crystallography and molecular biology techniques while probing the structures of antibiotic resistance mutations in ribosomes and discovering new RNA binding sites of novel drugs that target ribosomes. As an assistant professor in the Department of Chemistry and Biochemistry at the University of Oklahoma, she is now applying her diverse skills to the study of viral RNA structure, function, and energetics. She gratefully acknowledges research financial support from the following agencies: the Pharmaceutical Research and Manufacturers of America Foundation, the Oklahoma Center for the Advancement of Science and Technology Plant Science Research Program, an Oklahoma University Health Science Center Institutional Research Grant from the American Cancer Society, and the Department of Chemistry and Biochemistry at the University of Oklahoma.

Horst Severini

Research Scientist
Department of Physics & Astronomy
University of Oklahoma

Topic: "Implementing Linux-enabled Condor in Multiple Windows PC Labs"
(with Joshua Alexander)

Slides:   PowerPoint   PDF

Talk Abstract

At the University of Oklahoma (OU), Information Technology is completing a rollout of Condor, a free opportunistic grid middleware system, across 775 desktop PCs in IT labs all over campus. OU's approach, developed in cooperation with the Research Computing Facility at the University of Nebraska-Lincoln, provides the full suite of Condor features, including automatic checkpointing, suspension, and migration, as well as I/O over the network to disk on the originating machine. These features are normally limited to Unix/Linux installations, but OU's approach allows them on PCs running Windows as the native operating system, by leveraging coLinux as a mechanism for providing Linux as a virtualized background service. With these desktop PCs otherwise idle approximately 80% of the time, the Condor deployment is allowing OU to get five times as much value out of its desktop hardware.

Biography

Horst Severini got his Vordiplom (BS equivalent) in Physics at the University of Wuerzburg in Germany in 1988, then went on to earn a Master of Science in Physics in 1990 and a Ph.D. in Particle Physics in 1997, both at the State University of New York at Albany.

He is currently a Research Scientist in the High Energy Physics group at the University of Oklahoma, and also the Grid Computing Coordinator at the Oklahoma Center for High Energy Physics (OCHEP), and the Associate Director for Remote and Heterogeneous Computing at OU Supercomputing Center for Education & Research (OSCER).

Dan Stanzione

Director
High Performance Computing Initiative
Arizona State University

Topic: "A Scalable Framework for Offline Parallel Debugging"

Slides:   PDF

Talk Abstract

As clusters get larger, we have increasingly easy access to running jobs on thousands, tens of thousands, or even hundreds of thousands of cores. However, our ability to debug these jobs at scale has not kept up with the growth in hardware. In this talk, the GDBase framework for offline debugging will be presented. GDBase solves three problems in large scale debugging: (1) it integrates with batch systems to allow debugging jobs to be run without the need to interrupt production operation; (2) it moves debugging from online to offline, to reduce the amount of system time consumed; and (3) it stores results in a database to allow automated analysis of the vast quantities of debugging data that large jobs can produce. GDBase has been used to date to debug runs of more than 8,000 MPI tasks.
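
GDBase is the authors' framework; the sketch below merely illustrates the offline pattern it embodies: harvest debugger output non-interactively and store it in a database for later automated analysis. The gdb invocation and SQLite schema here are illustrative assumptions, not GDBase internals.

    import sqlite3, subprocess

    def snapshot_backtraces(pids, db_path="debug_run.sqlite"):
        """Attach gdb in batch (non-interactive) mode to each process and
        file its backtraces in a database for later offline analysis.
        Attaching requires ptrace permission on the node."""
        db = sqlite3.connect(db_path)
        db.execute("CREATE TABLE IF NOT EXISTS bt (pid INTEGER, trace TEXT)")
        for pid in pids:
            out = subprocess.run(
                ["gdb", "--batch", "-p", str(pid),
                 "-ex", "thread apply all bt"],
                capture_output=True, text=True).stdout
            db.execute("INSERT INTO bt VALUES (?, ?)", (pid, out))
        db.commit()

    # Later, automated analysis runs as plain SQL, e.g. finding the
    # stack shared by the most ranks:
    #   SELECT trace, COUNT(*) FROM bt GROUP BY trace ORDER BY 2 DESC;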

Biography

Dr. Dan Stanzione, Director of the High Performance Computing Initiative (HPCI) at Arizona State University, joined the Ira A. Fulton School of Engineering in 2004. Prior to ASU, he served as an AAAS Science Policy Fellow in the Division of Graduate Education at the National Science Foundation. Stanzione began his career at Clemson University, where he earned his doctoral and master's degrees in computer engineering as well as his bachelor of science in electrical engineering. He then directed the supercomputing laboratory at Clemson and also served as an assistant research professor of electrical and computer engineering.

Dr. Stanzione's research focuses on parallel programming, scientific computing, Beowulf clusters, scheduling in computational grids, alternative architectures for computational grids, reconfigurable/adaptive computing, and algorithms for high performance bioinformatics. Also an advocate of engineering education, he facilitates student research through the HPCI and teaches specialized computation engineering courses.

Bradley C. Wallet

Research Scientist
ConocoPhillips School of Geology & Geophysics
University of Oklahoma

Topic: "GEON2 and the OpenEarth Framework (EOF)"

Slides:   PowerPoint   PDF

Talk Abstract: coming soon

Biography: coming soon

Dan Weber

Computer Scientist
Software Group (76 SMXG)
Tinker Air Force Base

Topic: "Towards a Computationally Bound Numerical Weather Prediction Model"

Slides:   PowerPoint   PDF

Talk Abstract

Over the past 15 years, the vector processor based supercomputer has become virtually extinct in mainstream High Performance Computing (HPC). A few hardware vendors continue to supply these types of systems to niche users, but the larger HPC community, or at least the set of systems that most users have access to, is married to price/performance-minded scalar technology. During this HPC revolution, the weather community has watched the efficiency of its weather codes decrease, in terms of single processor performance, from nearly 90% of peak floating point performance (computationally bound) to 5% (memory bound). The primary reason is that the time required to access Commodity Off-The-Shelf (COTS) main memory components has been outpaced by gains in CPU clock speed, core counts, and vector lengths, and therefore in floating point processing speed. Future multi-core and many-core technology developments will exacerbate the memory boundedness of existing weather forecast models on large multi-core HPC solutions. As a result, and in order to fully utilize upgrades in computing power, the meteorological community has had to consider the costly exercise of developing new methods for computing solutions efficiently on emerging hardware. If weather forecast modelers could recoup some of the lost efficiency, they could improve the weather forecast by providing a higher resolution forecast in the same amount of wall clock time.

Our work focuses on what programmers need to consider in order to achieve the best possible performance for their application — in other words, to achieve CPU-bound status. We will review current solution techniques used to solve the meteorological equation set, stressing the importance of reducing the memory footprint of the data and of increasing the ratio of computations to memory accesses. We will show results from a new model that uses these methods, with the goal of sustaining a threefold increase in efficiency on large multi-core processors.
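
A quick roofline-style estimate shows why such codes sit far below peak; every number below is an illustrative assumption, not a measurement of the model discussed.

    # Back-of-the-envelope roofline estimate; all numbers are assumptions.
    flops_per_point = 50          # stencil work per grid-point update
    bytes_per_point = 15 * 8      # doubles read + written per update
    intensity = flops_per_point / bytes_per_point        # flops per byte

    peak_flops = 10e9             # assumed 10 GFLOP/s core peak
    peak_bw = 5e9                 # assumed 5 GB/s sustained memory bandwidth
    attainable = min(peak_flops, peak_bw * intensity)

    print(f"arithmetic intensity: {intensity:.2f} flop/byte")
    print(f"attainable: {attainable / 1e9:.2f} GFLOP/s "
          f"({100 * attainable / peak_flops:.0f}% of peak)")

Raising the flop-to-byte ratio, via the footprint reduction and fused computation the talk describes, moves the attainable performance up the bandwidth roofline toward compute-bound status.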

Biography

Dr. Dan Weber has 22 years of experience in numerical weather prediction and forecasting. His passion has been to build and optimize numerical weather prediction models for use in weather forecasting. In addition to performing research and writing numerous papers on thunderstorms and computer software optimization techniques targeted at massively parallel computers, he has taught courses in weather forecasting techniques and severe and unusual weather, and has held positions with the National Weather Service, at the University of Oklahoma (OU) and in private industry. Dr. Weber is currently employed at Tinker Air Force Base and is working to incorporate complex weather data into simulation systems using supercomputers.

Dr. Weber graduated with undergraduate and graduate degrees in Meteorology and Geology from the University of Utah and a doctoral degree in Meteorology from OU. His current research interests include high-resolution modeling of thunderstorms, aircraft turbulence, and urban weather, including airflow around urban structures. Dr. Weber has participated in several forensic weather projects and has supported the Korean Meteorological Administration's real-time weather forecasting efforts via the installation and optimization of a state-of-the-art weather prediction system that he helped develop at OU.

Kenji Yoshigoe

Assistant Professor
Department of Computer Science
University of Arkansas at Little Rock

Topic: "Mining for Science and Engineering"

Slides:   PowerPoint   PDF

Talk Abstract

Current models of social networks fail to capture many of the intricacies of the rich, complex structures that real-world networks exhibit. These include issues such as community hierarchies and overlaps; horizontals across an organization with heterogeneous business units; and node and edge attributes, types, multiplicity, location, and capacity. Most previous work on the analysis of networks has focused on a set of important but relatively simple measures of network structure. Through this work, we propose to focus on the increasing role of mining and using the voluminous data being generated across several applications, in pursuit of a holistic framework of methods, tools/toolkits, and methodologies (from general-purpose to domain- and application-specific customizations) to deal with the ever-increasing amount of data that drives these applications.
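
As a concrete illustration of that gap (made-up data, with networkx chosen purely for convenience): classic measures see only bare structure, while the attributes and edge types that real organizational networks carry are invisible to them.

    import networkx as nx
    from networkx.algorithms.community import greedy_modularity_communities

    # Made-up miniature organization; the attributes carry the semantics.
    G = nx.Graph()
    G.add_node("alice", unit="engineering")
    G.add_node("bob", unit="sales")
    G.add_node("carol", unit="engineering")
    G.add_edge("alice", "carol", kind="reports_to")
    G.add_edge("alice", "bob", kind="collaborates")

    print(nx.degree_centrality(G))                 # classic structural measure
    print(list(greedy_modularity_communities(G)))  # flat, non-overlapping
    # Neither call can see the 'unit' or 'kind' attributes, let alone
    # hierarchy or overlap -- the modeling gap the framework targets.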

Biography

Kenji Yoshigoe is an assistant professor in the Department of Computer Science at the University of Arkansas at Little Rock (UALR). He received his Ph.D. in Computer Science and Engineering from the University of South Florida. He serves as lab manager of UALR's High Performance Computing facility, which is currently under development. His research interest is in the performance evaluation of computer networks, ranging from high-speed routers to wireless sensor networks.

