Sunday, November 12, 2017
9:00 am - 5:30 pm
Room 710, Colorado Convention Center
700 14th Street
Denver, CO 80202
United States


Diversifying the HPC Community

The underrepresentation of women is a challenge that the entire supercomputing industry faces. Research shows that diverse teams increase productivity, so addressing the lack of gender diversity is as important to the community as reaching exascale. The HPC community is currently not even measuring how ‘leaky’ our pipeline is, but attrition rates likely align with figures for the general tech community: 41% of women working in tech eventually leave the field, compared to just 17% of men. This workshop is a key step in addressing one aspect of the underrepresentation of women: helping to retain the women who are already in the field and providing them with the tools to prosper and excel.

The workshop will provide activities of interest to two particular groups:

  • Early- and mid-career women working in HPC who wish to improve their career opportunities.
  • Those responsible for hiring and recruiting staff who are interested in increasing the diversity and retention of underrepresented groups in their organisation.

The workshop is open to everyone, not just women! As with all of our events we encourage participation from all in the community.


Workshop Speakers, Panelists and Chairs

Workshop Chair & Speaker: Toni Collis

Founder and Director, Women in HPC, EPCC at the University of Edinburgh

Toni Collis is an Applications Consultant in HPC Research and Industry, providing consultancy and project management on a range of academic and commercial projects at EPCC, the University of Edinburgh Supercomputing Centre.

Toni has a wide-ranging interest in the use of HPC to improve the productivity of scientific research, in particular developing new HPC tools, improving the scalability of software and introducing new programming models to existing software. Toni is also a consultant for the Software Sustainability Institute and a member of the ARCHER team, providing ARCHER users with support and resources for using the UK national supercomputing service as effectively as possible. In 2013 Toni co-founded Women in HPC (WHPC) as part of her work with ARCHER. WHPC has now become an internationally recognized initiative, addressing the under-representation of women working in high performance computing.

Toni is SC17 Inclusivity Chair and a member of the Executive committee for the conference. Toni is also a member of the XSEDE Advisory Board and has contributed to the organization and program of a number of conferences and workshops over the last five years including as an Executive Committee member of the EuroMPI 2016 conference and leading over ten WHPC workshops around the world.


HPC Security Expert, Department of Defense, USA

Before returning to her software roots as the HPC Software and Test Technical Director, Lee spent many of her 30 years within the Department of Defense as a technical expert in the Information Assurance field. In her current role, she has combined both areas of expertise to address the need for stronger yet performance friendly HPC security solutions.


Program Manager, Purdue University’s Rosen Center for Advanced Computing (RCAC) & Coordinator, XSEDE Campus Champions

Marisa Brazil is an outreach and broader engagement professional with over 10 years of experience in the higher education research community that utilizes HPC resources. Marisa has a dual role as Program Manager at Purdue University’s Rosen Center for Advanced Computing (RCAC) and Coordinator of the NSF-funded XSEDE Campus Champions program. As Program Manager at Purdue’s RCAC, Marisa manages the Purdue Women in HPC program as well as various social media, marketing, and outreach projects for the department. As Campus Champions Coordinator, she oversees a community-based program that connects and supports more than 400 cyberinfrastructure professionals who help their researchers identify and use the computational resources that best fit their needs. Marisa is part of the leadership team that coordinates the Champions’ broader engagement, sustainability, and strategic planning efforts, and manages the marketing and communications for the program.

Marisa received her B.A. in International Relations from the American University and her M.A. in Nonprofit Leadership and Management from Arizona State University. Prior to joining Purdue and the Campus Champions, Marisa worked at a variety of higher education and nonprofit institutions including Arizona State University, George Washington University, the National Education Association, IEEE Computer Society, and the American Heart Association.

Marisa possesses a long-held interest in championing and bringing awareness to diversity and inclusion and has spent the latter part of her career supporting this effort in the HPC and technology sectors.


Applications Consultant at EPCC, University of Edinburgh

Nick Brown is an applications consultant at Edinburgh Parallel Computing Centre (EPCC) at the University of Edinburgh, specializing in developing large-scale HPC codes, parallel programming language design, and compilers. Nick is a STEM ambassador who is heavily involved with EPCC’s outreach program and has experience in engagement targeted toward school children (K-12). He has also developed a number of successful public engagement activities, from dinosaur racing, where the public design their own dinosaurs which are then simulated and raced against each other to see who can create the fastest dinosaur, to the “design your own supercomputer” web app, where the public build a supercomputer within a certain monetary and power budget to see who can execute the most operations within a certain timeframe. Nick is also heavily involved with teaching, developing courses, and supervising students, both on EPCC’s Masters program and on external training courses.


Southern California Earthquake Center (SCEC)

Scott Callaghan is a software developer at the Southern California Earthquake Center (SCEC) at the University of Southern California (USC). His research focuses on large-scale probabilistic seismic hazard analysis and the use of workflow technology in scientific applications. Scott received his master’s degree in High Performance Computing from USC in 2007.

Scott has a strong interest in encouraging awareness of and retention in HPC through education, outreach and mentoring. He explains HPC concepts through plastic balls in public outreach events in his hometown of St. Louis, and has served on the board of the SIGHPC Education chapter. Since 2011 he has been a staff member of the International HPC Summer School, and has served on the planning committee and as chair of the mentoring committee since 2014, supervising a formal mentoring program. For over ten years he has mentored undergraduate interns as part of the SCEC Undergraduate Summer Experience in Information Technology (UseIT), helping them to develop visualization software for seismic datasets and introducing a diverse group of students to HPC concepts.

Outside of work, Scott likes to play with his two-year-old son, curl, and knit.


Vice President and General Manager of the Technical Computing Initiative (TCI) in Intel’s Data Center Group

Trish Damkroger is Vice President and General Manager of the Technical Computing Initiative (TCI) in Intel’s Data Center Group. She leads Intel’s global Technical Computing business and is responsible for developing and executing Intel’s strategy, building customer relationships, and defining a leading product portfolio for Technical Computing workloads, including emerging areas such as high performance analytics and artificial intelligence. This spans traditional HPC, workstations, processors and co-processors, and all aspects of solutions, including industry-leading compute, storage, network, and software products. Ms. Damkroger has held technical and managerial roles for more than 27 years, both in the private sector and within the United States Department of Energy, most recently as the Associate Director of Computation (Acting) at Lawrence Livermore National Laboratory, leading a 1,000-person organization that is home to some of the world’s leading supercomputing and scientific computing experts. Since 2006, Ms. Damkroger has been a leader of the annual Supercomputing Conference series, the premier international meeting for high performance computing. She was the SC14 General Chair in New Orleans and has held many other committee positions. She was named one of HPCwire’s People to Watch in 2014. Ms. Damkroger has a master’s degree in electrical engineering from Stanford University.


Oak Ridge National Laboratory

Speaker and Session Chair: KELLY GAITHER

Director of Visualization & Senior Research Scientist

Kelly Gaither is the Director of Visualization, a Senior Research Scientist, and the interim Director of Education and Outreach at the Texas Advanced Computing Center (TACC) at The University of Texas at Austin. Dr. Gaither leads the visualization activities while conducting research in scientific visualization, visual analytics, and augmented/virtual reality. She received her doctoral degree in Computational Engineering from Mississippi State University in May 2000, and received her master’s and bachelor’s degrees in Computer Science from Texas A&M University in 1992 and 1988, respectively. Gaither has over thirty refereed publications in fields ranging from computational mechanics to supercomputing applications to scientific visualization. She is currently a co-PI and the director of Community Engagement and Enrichment for the Extreme Science and Engineering Discovery Environment (XSEDE2), a $120M project funded by the National Science Foundation. She has given a number of invited talks and keynotes. Over the past ten years, she has actively participated in conferences related to her field, serving as general chair of IEEE Visualization 2004 and general chair of XSEDE16.


Chief Analyst and Senior Director of Market Intelligence, Intel Data Center Group

Debra is the Chief Analyst and Senior Director of Market Intelligence for Intel’s Data Center Group. Her nearly 30-year career in high performance computing started at IDC, where she spent 17 years leading the company’s presence in high-end computing in government, academia, and industry. She was instrumental in driving science and technology policy initiatives in the US and abroad, highlighting the impact and importance of HPC as a fundamental economic and innovation driver. She led IDG’s initiatives in healthcare and life sciences, launching Bio-IT World magazine and Expo, the Life Sciences Venture Fund, and a research advisory and consultancy.

Following IDC, Debra was VP of Strategy for IBM’s Deep Computing organization, leading initiatives such as IBM’s Blue Gene program and the early hosted HPC solutions of Deep Computing on Demand. She ran strategy and market intelligence for IBM’s Systems and Technology Group business unit, driving critical BI and analytics programs and new modeling methodologies.

Debra was CEO of Tabor Communications, expanding the scope of the company beyond HPCWire, launching a Research organization, and building out a digital delivery platform. Following Tabor, Debra was at Microsoft leading strategy and evangelism, launching its Technical Computing Executive Advisory Council and driving several high profile collaborations with key partners including the Gates Foundation, UN, DoD, and NetHope.

Since joining Intel, Debra has led the company’s efforts on multiple fronts, including expanding the use and adoption of high performance computing into new markets and communities and driving strategy and pathfinding for Intel’s Technical Computing Group. In her current role as Chief Analyst and Sr. Director of MI, she is responsible for leading the vision and technical infrastructure around developing a world class MI organization for DCG.

She is actively involved in STEM policy initiatives and in working with global organizations to advance economic development through access to and use of leading-edge technologies. Outside of Intel, Debra is active in the community, sitting on several boards. She is a devoted mom and an avid skier.

Speaker and Mentor Chair: ELSA GONSIOROWSKI

Lawrence Livermore National Laboratory

Elsa is an application I/O specialist and systems software developer within the Livermore Computing supercomputing center at Lawrence Livermore National Laboratory. She received a Ph.D. in Computer Science from Rensselaer Polytechnic Institute, Troy, NY. Her research interests include software tools for understanding application performance throughout the I/O stack, application checkpointing, and parallel discrete-event simulation.


Acting Lead of the User Engagement Group

Rebecca Hartman-Baker is the acting leader of the User Engagement Group at the National Energy Research Scientific Computing Center (NERSC) at Lawrence Berkeley National Laboratory. She is a computational scientist with expertise in the development of scalable parallel algorithms for the petascale and beyond. Her career has taken her to Oak Ridge National Laboratory, where she worked on the R&D 100 Award-winning team developing MADNESS and served as a scientific computing liaison in the Oak Ridge leadership computing facility; to the Pawsey Supercomputing Centre in Australia, where she coached two teams for the Student Cluster Competition at the annual Supercomputing conference and led the decision-making process for determining the architecture of the petascale supercomputer; and to NERSC, where she is responsible for NERSC’s engagement with the user community to increase user productivity via advocacy, support, training, and the provisioning of usable computing environments. Rebecca earned a PhD in Computer Science, with a certificate in Computational Science and Engineering, from the University of Illinois at Urbana-Champaign.

Poster Chair and Speaker: MISBAH MUBARAK

Argonne National Laboratory

Misbah Mubarak is a postdoctoral researcher at Argonne National Laboratory. At Argonne, she is part of the data-intensive science group, which is working to enable researchers to turn their big data into new scientific advances. She is the recipient of a U.S. Fulbright scholarship and the ACM SIGSIM PADS PhD colloquium award, and was a finalist for the Google Anita Borg scholarship. Misbah received her PhD and master’s degrees in computer science from Rensselaer Polytechnic Institute (RPI) in 2015 and 2011, respectively. She also has experience working at CERN in Switzerland and at Teradata Corporation.

Misbah has authored or co-authored over 25 papers in the area of performance modeling and analysis for high-performance computing. She has served as a peer reviewer for a number of journals, including ACM Transactions on Modeling and Computer Simulation and ACM Transactions on Parallel Computing, as well as IEEE journals. She has served as a technical program committee member for the ACM SIGSIM PADS conference in 2016 and 2017, as publicity chair for ACM SIGSIM PADS in 2017, and as an organizing committee member of the Women in HPC workshop at Supercomputing (SC) 2016.


Kelly Nolan is a highly respected bilingual communications, marketing, and public affairs executive with more than 12 years of experience in the education, health, and economic development sectors, including working closely with boards of directors, executives, government officials, and elected officials, directing teams, and managing division and organizational budgets. Kelly has directed, developed, and executed bold and innovative public relations, marketing, and communications strategies, including large-scale social media and internet marketing initiatives and membership and sponsorship programs for all knowledge economy sectors. She is an accomplished professional with an exceptional track record of building positive relationships with senior government officials, external partners, executives, media, clients, external agencies, and internal team members.


Research Faculty, Center for Education Integrating Science, Mathematics and Computing (CEISMC) at the Georgia Institute of Technology

Lorna Rivera serves as a Research Scientist II in Program Evaluation at the Georgia Tech Center for Education Integrating Science, Mathematics and Computing (CEISMC). Her work focuses on the intersection of scientific content, pedagogy, and equity with the goal of being both methodologically innovative and socially responsible. Rivera has conducted evaluations primarily funded by the National Science Foundation’s Division of Advanced Cyberinfrastructure. This has led her to work with over 18 universities as well as multiple international high performance computing centers and organizations such as Compute Canada, EPCC, NCSA, PRACE, RIKEN, and XSEDE. Rivera received both her Bachelor of Science in Health Education and her Master of Science in Health Education and Behavior from the University of Florida. Prior to joining Georgia Tech, Rivera worked with various institutions, including the University of Illinois at Urbana-Champaign, March of Dimes, Shands HealthCare, and the University of Florida College of Medicine. Her research interests include the evaluation of innovative programs and their sustainability.


The WHPC workshop at SC17 would not be possible without a dedicated team of volunteers.

Steering and Organisation Committee Members

  • CHAIR: Toni Collis, EPCC, UK and Women in HPC Network, UK
  • Julia Andrys, Murdoch University, Western Australia
  • Sunita Chandrasekaran, University of Delaware, USA
  • Trish Damkroger, Lawrence Livermore National Laboratory, USA
  • Rebecca Hartman-Baker, NERSC, USA
  • Daniel Holmes, EPCC, UK
  • Adrian Jackson, EPCC, UK
  • Alison Kennedy, STFC Hartree Centre, UK
  • Kimberly McMahon, McMahon Consulting, USA
  • Misbah Mubarak, Argonne National Laboratory, USA
  • Lorna Rivera, University of Illinois, USA
  • Lorna Smith, EPCC, UK
  • Jesmin Jahan Tithi, Stony Brook University, USA

Programme Committee

  • Julia Andrys, Murdoch University, Western Australia
  • Toni Collis, EPCC, UK
  • Sunita Chandrasekaran, University of Delaware, USA
  • Rebecca Hartman-Baker, NERSC, USA
  • Daniel Holmes, EPCC, UK
  • Adrian Jackson, EPCC, UK
  • Alison Kennedy, STFC Hartree Centre, UK
  • Dounia Khaldi, Stony Brook University, USA
  • Misbah Mubarak, Argonne National Laboratory, USA
  • Lorna Rivera, University of Illinois, USA
  • Lorna Smith, EPCC, UK
  • Jesmin Jahan Tithi, Intel, USA


Women in HPC provides mentors for our early-career poster presenters. Our dedicated volunteers include:

Call for Posters: Closed

Call for virtual posters: extended to 27th August 2017 (anywhere on Earth)

As part of the workshop, we will invite women in industry and academia to present their work as a virtual poster in a supportive environment that promotes the engagement of women in HPC research and applications, providing opportunities for peer-to-peer networking and the chance to interact with female role models and employers.

Submissions are invited on all topics relating to HPC from users and developers. All abstracts should emphasise the computational aspects of the work, such as the facilities used, the challenges that HPC can help address, and any remaining challenges.

As a poster author you will have the opportunity to share your work with the workshop audience in a brief ‘elevator pitch’ talk, followed by a networking session where attendees can view your poster and discuss your work.

Instructions for authors

Submission is done through the EasyChair submission system.

Please use this WHPC-workshop-submission-template-2017 to prepare the following as a single PDF to be uploaded with your submission:

  • Title of your talk/virtual poster
  • Full name of main (presenting) author and affiliation
  • Names of any other authors and affiliations
  • Abstract (up to 250 words)
  • Objectives, impact and accomplishments of the work to be presented
  • Short biography of main (presenting) author (150 words)
  • Photograph of the main (presenting) author

If you have questions please contact info@womeninhpc.org.

Poster Presenters

Dana Akhmetova, KTH Royal Institute of Technology

Investigating and improving the scalability of task-based programming models

As classical technology scaling has ended (processor clock rates are no longer growing), performance increases have to be gained from explicit parallelism. Modern computing is cheap and massively parallel, while energy and performance costs are dominated by data movement: a chip in a flagship supercomputer is soon expected to have thousands of cores on it, and the energy cost of computation is decreasing much faster than the energy cost of moving data on a chip, so the latter is becoming a top priority for computing efficiency. Current HPC applications have to be ready for the Exascale era; current parallel programming models do not fully support constructs that describe data locality and affinity, but future Exascale programming models will have to.

This work focuses on investigating the data-locality sensitivity of different scientific programs written in three task-based programming models: Cilk, OpenMP, and Qthreads. Data locality refers to the probability of a memory reference being “local” to prior memory accesses. Among parallel programming models for shared-memory machines, task-based models are becoming more and more common. The task-based approach refers to designing a program in terms of “tasks”: logically discrete sections of work to be done.

The goals of this work were to port these task-based codes to locality-oblivious and locality-aware task-based programming models, to compare performance between such models, and to analyze task-based parallelism and data-locality sensitivity. This study is based on measurements from different hardware performance counters during local and remote memory accesses within one shared-memory node.
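As background (not part of the poster itself), the “task” concept above can be sketched with Python’s standard library; the models actually studied are Cilk, OpenMP, and Qthreads, so this is purely illustrative:

```python
# Illustrative sketch of task-based parallelism: work is expressed as
# independent tasks and a runtime scheduler maps them onto workers,
# much as a task-based model maps tasks onto cores.
from concurrent.futures import ThreadPoolExecutor

def task(chunk):
    # A "task": a logically discrete section of work, here a partial sum.
    return sum(chunk)

data = list(range(1_000))
chunks = [data[i:i + 100] for i in range(0, len(data), 100)]

with ThreadPoolExecutor(max_workers=4) as pool:
    partials = list(pool.map(task, chunks))

total = sum(partials)
print(total)  # 499500
```

In a locality-aware model the scheduler would additionally try to run each task near the memory its chunk lives in, which is exactly the sensitivity the poster measures.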

Poster Download


Author Biography

Dana Akhmetova gained her first degree at the Department of Computer Science at Lomonosov Moscow State University in 2007. She started postgraduate studies at the PDC supercomputing center in Stockholm and is now a fourth year Ph.D. student at the Department of Computational Science and Technology at the KTH Royal Institute of Technology under the supervision of Professor Örjan Ekeberg. During 2015 Dana was a Ph.D. intern at the Pacific Northwest National Laboratory in the United States, where she was focusing on task-based programming models and analysing the connection between the granularity of application tasks and the scheduling overhead in many-core shared-memory systems. Currently Dana is working on two large Exascale EC-funded projects called INTERTWinE and AllScale. The INTERTWinE project addresses the problem of programming model design and implementation for Exascale computing. The AllScale project is proposing an environment for the effective development of highly scalable, resilient and performance-portable parallel applications for future Exascale systems.

Gladys K. Andino, Purdue University

Promoting Diversity in HPC at Purdue University

This abstract highlights Purdue University’s Research Computing broader engagement initiative to promote and advance diversity in High Performance Computing (HPC), with emphasis on gender as well as the variety of skills, experience, and culture of the women on our team. We will describe the expertise that our female team members provide to Purdue’s HPC users and present an update on our “Women in HPC” initiative. Our work is highlighted below:

  1. Increasing Workforce Diversity: Purdue University recognizes that supporting increased diversity in a specialty field like HPC will improve the candidate pool for HPC centers and is committed to advancing the representation of women in HPC. Since 2011, Research Computing has doubled the number of women hired to the team, which now includes 11 women.
  2. Promote Participation in HPC: An initiative to promote diversity in HPC was developed in the Fall of 2016 in the form of the Women in HPC (WHPC) networking group. Currently, we have 112 registered undergraduate, graduate, staff and faculty members. The WHPC program goal is to encourage women to pursue research and careers in HPC and technology fields. Activities include regular meetings presenting technical HPC-related topics, the promotion of technical conferences (e.g. PEARC, Grace Hopper and SC) by providing partial funding for women to attend, and the development of a mentorship program to support the growth of a vibrant HPC community and broaden access to HPC for women in sciences.

Poster Download

Author Biography

Gladys is currently a Senior Scientific Applications Analyst in Research Computing at Purdue University. While providing computing expertise to students, staff, and faculty with her advanced knowledge of bioinformatics software, analysis, and pipelines, she also provides instruction related to Unix and HPC, both in standalone workshops and as a lecturer in academic courses. With other female members of the Research Computing team, she founded Purdue’s first Women in HPC program. Gladys actively participates in student-mentor programs at Purdue as well as in academic and professional conferences. In addition, she serves as editor for a peer-reviewed journal in her research field.

An entomologist by training, her doctoral work integrated computational bioinformatics and high-performance computing techniques from the ground up. With a passion for both learning and teaching, Gladys helps guide the next generation of researchers as a computational life sciences specialist for Purdue’s Research Computing.

Marta Čudová, Brno University of Technology

Framework for Planning, Executing and Monitoring Cooperating Computations

Computational simulations are considered to be the third pillar of science. To study complex phenomena such as forest fires, weather forecasting, or fluid dynamics, we need to precisely describe all the processes and the communication under the hood.

Computational simulations are usually very demanding in terms of computational performance and storage. Depending on the computational requirements of each process, the whole computation may be distributed over diverse computational facilities.

Big supercomputing centres offer both sufficient computational power and disk space; however, computing infrastructures are growing in parallelism and becoming more diverse. This calls for very sophisticated computational techniques to take full advantage of the machine’s power. Furthermore, advanced knowledge of a supercomputer’s architecture and submission systems is required of users. This human effort can become a bottleneck, because a non-negligible number of person-hours has to be invested daily, especially if the user is not an IT specialist. My research focuses on the development of a dispatch server: a tool providing automated planning, execution, and monitoring of cooperating computations. The dispatch server is inspired by the HPC-as-a-service paradigm. Its modular design enables extensibility and unifies access to different HPC systems through a simple client-server interface and standard web services. The dispatch server also detects the possibility of concurrent execution and offers a level of fault tolerance.
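To make the client-server idea concrete, the following is a hypothetical sketch of the kind of JSON request a client might send to such a dispatch server over its web-service interface; the endpoint shape, stage names, and field names are illustrative assumptions, not taken from the poster:

```python
# Hypothetical job-submission payload for a dispatch server that plans
# a multi-stage computation across different computing facilities.
import json

job_request = {
    "pipeline": "ultrasound-simulation",
    "stages": [
        {"name": "preprocess",  "cluster": "local",      "cores": 8},
        {"name": "solve",       "cluster": "remote-hpc", "nodes": 16},
        {"name": "postprocess", "cluster": "local",      "cores": 4},
    ],
    # Stages without data dependencies could be flagged for the
    # server's concurrent-execution detection.
    "allow_concurrent": True,
}

# Serialize for transport over a standard web service, then verify
# the round trip preserves the request.
payload = json.dumps(job_request)
decoded = json.loads(payload)
print(decoded["stages"][1]["nodes"])  # 16
```

The point of such an interface is that the user describes *what* should run where, and the server handles scheduling, monitoring, and submission-system details on each target machine.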

Poster download

Author Biography

Marta Čudová is a PhD student at the Faculty of Information Technology, Brno University of Technology. She received her MSc in Computer Science from the Brno University of Technology in 2016. In 2016, she attended PRACE Summer of HPC and spent two months at Edinburgh Parallel Computing Centre in the UK where she worked under Dr. Neelofer Banglawala. She also received the HPC Ambassador Award within PRACE Summer of HPC. Now, she is a member of the Supercomputing Technologies Research Group where she focuses on cluster management systems and multiphysics model coupling. This group closely collaborates with the Biomedical Ultrasound Group at University College London.

Jieyu Gao, Purdue University

HPC Job Performance at Purdue University

This poster will present the current state of job and system metrics from analysis of jobs running on the high-performance resources at Purdue University.

Because of the increasing demand for research computing resources, it is important for IT staff and users to understand their jobs’ performance characteristics. The goal of this work is to analyze the data generated by TACC Stats and Open XDMoD, and apply that data to guide decision-making around Purdue’s HPC systems.

TACC Stats and Open XDMoD are tools that provide metrics about the jobs running on HPC resources. TACC Stats provides job-level system data, and Open XDMoD presents a summary of the system’s usage statistics, as well as easy-to-use interfaces to further break down that data. Currently, TACC Stats at Purdue University records various statistics, including memory bandwidth, memory usage, and CPU user fraction on a job-level basis. Using Jupyter Notebooks and Python scripts, we have developed tools to analyze system usage and job-level data, and we use that data to make data-driven decisions around system design and procurement.
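A minimal sketch of this kind of job-level analysis is shown below; the job records and threshold are made up for illustration, whereas real data would come from TACC Stats and Open XDMoD:

```python
# Summarize per-job metrics (memory usage, CPU user fraction) and flag
# jobs whose low CPU user fraction hints they may be I/O- or
# memory-bound rather than compute-bound. Records here are invented.
from statistics import mean

jobs = [
    {"job_id": 101, "mem_gb": 12.0, "cpu_user_frac": 0.91},
    {"job_id": 102, "mem_gb": 48.0, "cpu_user_frac": 0.35},
    {"job_id": 103, "mem_gb": 8.0,  "cpu_user_frac": 0.88},
]

avg_cpu = mean(j["cpu_user_frac"] for j in jobs)

# An assumed 0.5 cutoff for "underutilized"; a real study would pick
# thresholds from the observed distribution.
underutilized = [j["job_id"] for j in jobs if j["cpu_user_frac"] < 0.5]

print(round(avg_cpu, 2), underutilized)
```

Aggregates like these, broken down by system or user group, are the sort of inputs that can inform procurement decisions (e.g., how much memory per node the next cluster actually needs).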

Poster download

Author Biography

Jieyu Gao joined the Emerging IT Leaders program upon her graduation from Purdue’s Applied Statistics and Economics (Honors) program in 2016. She works with research computing users to solve data analysis problems. She is responsible for data analysis using TACC Stats data and for installing Open XDMoD at Purdue University.

Jieyu is one of the Gender-Diversity Award winners at Internet2 Global Summit 2017.

Violeta Holmes, University of Huddersfield

Addressing the skills shortages in HPC: reporting on experience of running college and university courses using sustainable HPC resources

HPC technologies and the national e-infrastructure are vital for advancements in science, business and industry. However, there is a shortage of HPC skilled staff: HPC architects, administrators, researchers and research software engineers. Currently, HPC skills are acquired mainly by students and staff taking part in HPC-related research projects.

To address the issue of skills shortages in HPC, it is essential to provide teaching and training as part of both postgraduate and undergraduate courses, and to engage with young children and college-level students through inspirational outreach events.

Higher and Further Education (H/FE) institutions have a fundamental role in the development of all the people who participate in the national e-infrastructure. The design and development of such courses is challenging, since the technologies and software in the field of large-scale distributed systems such as cluster, cloud, and grid computing are undergoing continuous change.

Current solutions from large universities and well-funded research organizations are not easily applied to Higher and Further Education. In this presentation we report on 10 years’ experience in developing HPC-related courses and providing affordable, sustainable resource solutions for teaching and research at small to medium-sized universities. We utilize resources already available at the institutions and open-source solutions wherever possible. Using COTS hardware and free open-source software to teach HPC-related topics demonstrates that H/FE institutions do not require expensive national and international supercomputer resources to deliver HPC teaching and training.

Poster download

Author Biography

Dr Violeta Holmes leads the High Performance Computing (HPC) Research Group at the University of Huddersfield. Her research interests and expertise are in the areas of HPC Systems Infrastructure, Energy Efficient Computing, Cloud Computing, Big Data, and Internet of Things. Her career in Higher Education as a researcher and lecturer in computing and engineering spans over 25 years.

In her role as ARCHER champion she supports activities to broaden the UK HPC user base to new disciplines and communities, and promotes the links between HPC users, developers and researchers across various research groups and institutes at the University of Huddersfield.

Dr. Holmes worked with the 3M Buckley Innovation Centre at the University of Huddersfield on deploying HPC research and development facilities for SMEs and industry, and was the HPC academic lead in the Innovate UK (TSB)-funded Energy Efficient Computing project at the University of Huddersfield.

As a Chartered Engineer and a member of the IET and BCS, she supports the advancement and promotion of the careers of women in Science, Technology, Engineering and Mathematics (STEM), Higher Education and research.

Amanda Howard, Brown University

Implementation of a meshless MLS scheme for simulations of suspension flows

This poster will focus on a meshfree method for simulations of neutrally buoyant, non-Brownian particles in Stokes flow. We will demonstrate a meshless scheme using Generalized Moving Least Squares (GMLS) polynomial reconstructions to provide a computationally efficient method with higher-order accuracy for use with general boundary conditions and arbitrary polynomial shapes while maintaining stability. The emphasis will be on applications to dense suspensions of colloids, especially colloids with polydispersed sizes and non-spherical shapes. GMLS for Stokes flow has been implemented in serial in two dimensions for several colloids; however, the computational demands in three dimensions require high performance computing and efficient preconditioners to handle more than one colloid. This work is in collaboration with Sandia National Laboratories, and is implemented using the Trilinos scientific computing packages Tpetra, Kokkos, and Belos to allow large-scale simulations.


Author Biography

Amanda Howard is a Ph.D. candidate at Brown University in the Division of Applied Mathematics. Her research focuses on scientific computing and numerical methods in computational fluid dynamics, particularly applied to suspension flows of non-Brownian particles, as well as efficient implementation of higher order meshless methods. She is a recipient of a 2014 National Science Foundation Graduate Research Fellowship. At Brown, she founded the Brown University student chapter of the Association for Women in Mathematics and runs the Applied Mathematics undergraduate-graduate mentoring program.

Gokcen Kestor, Oak Ridge National Laboratory

Localized Fault Recovery for Nested Fork-Join Programs

Nested fork-join programs scheduled using work stealing can automatically balance load and adapt to changes in the execution environment. In this work, we design an approach to efficiently recover from faults encountered by these programs. Specifically, we focus on localized recovery of the task space in the presence of fail-stop failures. We present an approach to efficiently track, under work stealing, the relationships between the work executed by various threads. This information is used to identify and schedule the tasks to be re-executed without interfering with normal task execution. The algorithm precisely computes the work lost, incurs minimal re-execution overhead, and can recover from an arbitrary number of failures. Experimental evaluation demonstrates low overheads in the absence of failures, recovery overheads on the same order as the lost work, and much lower recovery costs than alternative strategies.


Author Biography

Dr. Gokcen Kestor is a research scientist in the Computer Science Research group at Oak Ridge National Laboratory (ORNL). Gokcen earned her Ph.D. in Computer Science from the Polytechnic University of Catalonia (UPC) in Barcelona in 2013. Her dissertation investigated effective software transactional memory solutions.

Her research interests include resilience for future large-scale systems; parallel programming models and runtimes, especially task-based programming models; compilers; power and performance analysis and modeling of HPC applications and emerging technologies; and the effective use of emerging memory technologies and machine learning techniques in the context of HPC.

She is currently working on fault tolerance solutions for distributed task-programming models, configurable soft error detection techniques, and the evaluation of emerging memory technologies. She is a member of the ACM and the IEEE Computer Society.

Camdyn Leidel, Faubion Middle School

ElementaryPi: Elementary Kids Learning How to Make a Raspberry Pi Cluster

Kids aren’t taught computer science, and this project will change that. I have created steps for elementary kids to make themselves a cluster. In this project, I have made an interface for parallel visualization with VisIt. At the end, I made a step-by-step website for kids to learn about parallel computing that includes the software tools to complete the project.

Author Biography

My name is Camdyn Leidel and I am eleven years old. I play softball and participate in Girl Scouts. I got into computer science after my father became an entrepreneur. I started asking questions about computer science, and he showed me the Women in HPC group. Afterwards, I started working on this project and traveled to Germany for ISC. I met a lot of people and now I’m ready for SC.

Marla Meehl

Women in IT Networking at SC (WINS)

Women in IT Networking at SC (WINS), an NSF-funded program, is a collaboration between the University Corporation for Atmospheric Research (UCAR), the Department of Energy’s Energy Sciences Network (ESnet) and the Keystone Initiative for Network Based Education and Research (KINBER) that fosters gender diversity in the research and education (R&E) community’s network and computer systems engineering occupations. The Supercomputing Conference’s SCinet organization offers a unique opportunity for intensive hands-on training and workforce development in networking, security, measurement, computer systems and research support. Although a small number of women have been members of SCinet since its earliest days, WINS was launched to increase the diversity of the SCinet volunteer staff, provide professional development opportunities to highly qualified women in the fields of networking and computing, and build community for women who pursue careers in networking and cyberinfrastructure (CI). Experience shows that SCinet participants also grow in their ability to build collaborative, long-term professional relationships, which open the door to future career opportunities. The primary goal of the project is to give U.S. women professionals in their early to mid-career the opportunity to build expertise and business connections, while at the same time fostering diversity in the overall network and computer systems engineering workforce. Another valuable outcome is raising awareness of the lack of diversity in IT and of the value of diversity efforts.


Author Biography

Marla Meehl is Section Head of the Network Engineering and Telecommunications Section (NETS) at the University Corporation for Atmospheric Research (UCAR), where she routinely manages a budget of $4M, a staff of 25, and multiple large-scale networking projects. She has 20 years of experience managing large network projects. For the past 18 years, she has managed large external networking projects and activities for UCAR, including the Front Range GigaPoP (FRGP), Bi-State Optical Network (BiSON), and the Boulder Point of Presence (BPoP). Meehl has a strong relationship with the regional research and education community, including the Western Regional Network (WRN), and has been leading the efforts of Westnet for over 15 years. Meehl is also the Principal Investigator (PI) of the National Science Foundation-funded “Women in IT Networking at SC” (WINS) project. She is also the Co-Chair of the Internet2 Gender Diversity Initiative.

Ayat Mohammed, Texas Advanced Computing Center

Best Practices for Scalable Visualization

Our study attempts to balance the performance of machine learning and scientific visualization using in situ visualization. We used ParaView Catalyst and OSPRay (a ray-tracing-based renderer) to perform the parallel processing and high-fidelity visualization of large-scale simulations, such as a Direct Numerical Simulation (DNS) turbulent flow simulation.

We had 100 frames of (“coarse”) data, with three velocity components and pressure stored in the database.

The main goal of our study was to classify the vorticity of the flow and represent different clusters in a 3D visualization.

We generated the vorticity vector, the curl of the velocity field, then applied the k-means clustering algorithm to classify the data set into three clusters. OSPRay was enabled in ParaView to achieve high-fidelity rendering. After designing the visualization pipeline, we used ParaView Catalyst to generate the script that can later be used for in situ visualization. Visualization pipeline design and script generation were carried out on TACC’s Stampede2 using one node with 64 processors (Intel Xeon Phi 7250). Through an interactive ParaView session using a VNC server, the data set was loaded and rendered remotely. We decided to create streamlines for the vorticity and represent three different classes (low, medium, and high) of its magnitude. Pressure values were represented by a gradient of a single color on the two slices. The vorticity was represented by a categorical (discrete) color map of three hue values to show the classification of vorticity values. Moreover, the vorticity color map was designed to be colorblind-safe.
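As a rough illustration of the classification step only, the sketch below computes a vorticity magnitude by central differences and labels it into three clusters with a minimal 1-D k-means. The velocity field is random placeholder data, not the DNS dataset, and the poster’s actual pipeline used ParaView Catalyst rather than hand-written code:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical velocity components sampled on a small uniform grid
# (placeholder data standing in for one "coarse" DNS frame).
n = 16
u = rng.standard_normal((n, n, n))
v = rng.standard_normal((n, n, n))
w = rng.standard_normal((n, n, n))

# Vorticity = curl of velocity, via central differences.
# np.gradient returns [d/dx, d/dy, d/dz] for a 3-D array.
du, dv, dw = np.gradient(u), np.gradient(v), np.gradient(w)
wx = dw[1] - dv[2]   # dw/dy - dv/dz
wy = du[2] - dw[0]   # du/dz - dw/dx
wz = dv[0] - du[1]   # dv/dx - du/dy
mag = np.sqrt(wx**2 + wy**2 + wz**2).ravel()

# Minimal 1-D k-means (k = 3) labeling low/medium/high vorticity.
centers = np.quantile(mag, [0.25, 0.5, 0.75])
for _ in range(50):
    labels = np.argmin(np.abs(mag[:, None] - centers[None, :]), axis=1)
    for k in range(3):
        if np.any(labels == k):      # guard against an empty cluster
            centers[k] = mag[labels == k].mean()
```

The three resulting labels would then drive a categorical color map, as in the poster.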


Author Biography

I’m a postdoctoral fellow in TACC’s Scalable Visualization Technologies group. Prior to TACC, I worked with the high-performance visualization group in Advanced Research Computing at Virginia Tech. I earned my Ph.D. in Computer Science from Virginia Tech, and I’m a faculty member in the Scientific Computing department at Ain Shams University in Egypt.

Gianina Alina Negoita, Iowa State University

HPC-Bench: A Tool to Optimize HPC Benchmarking Workflow

HPC-Bench is a general-purpose tool that optimizes HPC benchmarking workflow to aid in the efficient evaluation of performance, allowing multiple applications written in different languages, multiple parallel versions, and multiple numbers of processes/threads to be evaluated with a single “click of a button” on an HPC machine. Performance results are put into a database, which is queried for the desired performance data; the R statistical software package is then used to generate the desired graphs and tables. The use of HPC-Bench is illustrated with complex applications that were run on the National Energy Research Scientific Computing Center’s (NERSC) Edison Cray XC30 HPC computer.


Author Biography

Gianina Alina Negoita received the B.S. degree in physics and the M.S. degree in atomic physics and astrophysics from the Department of Physics at the University of Bucharest in Romania, in 2002 and 2004, respectively. From 2002 to 2005 she worked as a scientific researcher in nuclear astrophysics at the Horia Hulubei National Institute of Physics and Nuclear Engineering in Bucharest-Magurele, Romania, studying nuclear effects on neutrino emissivities from neutron stars. She also obtained an M.S. and a Ph.D. in nuclear physics from Iowa State University, in 2008 and 2010, respectively, working on ab initio many-body calculations for a set of nuclei using the realistic nucleon-nucleon interaction JISP16. She is currently working toward a Ph.D. in the Computer Science Department at Iowa State University, where she obtained an M.S. in Computer Science in fall 2013. Her research interest is in parallel and high performance computing, doing performance evaluation for SHMEM (Shared Memory Routines) and MPI (Message Passing Interface) libraries, as well as machine learning and nuclear physics applications.

Songhui Ryu, Purdue University

Comparison of Machine Learning Algorithms and Its Ensembles on Botnet Detection

A botnet is a network of compromised devices controlled by a malicious ‘botmaster’ in order to perform various tasks, such as executing DoS attacks, sending spam, and obtaining personal data. As botmasters generate network traffic while communicating with their bots, analyzing network traffic to detect botnet traffic can be a promising feature of an Intrusion Detection System (IDS). Although IDSs have been applying various machine learning (ML) techniques, a comparison of ML algorithms, including their ensembles, on botnet detection has not yet been carried out. In this study, not only are the three most popular classification ML algorithms (Naïve Bayes, decision tree, and neural network) evaluated, but the ensemble methods known to strengthen ML algorithms are also tested to see if they indeed provide enhanced predictions on botnet detection. This evaluation is conducted with the public CTU-13 dataset on Spark, measuring the running time of each ML algorithm together with its F-measure and MCC score.
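Both evaluation metrics mentioned above follow directly from a binary confusion matrix. A minimal sketch, using hypothetical counts rather than the CTU-13 results:

```python
import math

def f_measure_and_mcc(tp, fp, fn, tn):
    """F1 score and Matthews correlation coefficient from confusion-matrix counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    # MCC balances all four cells, which matters for imbalanced botnet traffic.
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    mcc = (tp * tn - fp * fn) / denom
    return f1, mcc

# Hypothetical counts for a botnet-vs-normal classifier on a skewed test set.
f1, mcc = f_measure_and_mcc(tp=80, fp=10, fn=20, tn=890)
```

Because botnet flows are a small fraction of traffic, MCC penalizes a classifier that merely predicts “normal”, whereas accuracy would not.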


Author Biography

Songhui Ryu is a Master’s student with particular interests in big data analysis and machine learning. Working as a research assistant in the research computing group at Purdue (RCAC), she has worked on system log analysis and network traffic analysis combined with machine learning technologies on clusters. She is pursuing an M.S. in Computer and Information Technology at Purdue University and holds a B.S. in Computer Science from Sungkyunkwan University, South Korea.

Catherine D. Schuman, Oak Ridge National Laboratory

High Performance Computing for Spiking Neuromorphic Network Training

Neuromorphic computing is a field in which neural networks are implemented in hardware to achieve intelligent computation with lower power and on a smaller footprint than traditional von Neumann architectures. One challenge for spiking neuromorphic computers, which implement spiking neural networks in hardware, is how best to train them. In this work, we utilize Oak Ridge National Laboratory’s Titan supercomputer to construct a spiking neuromorphic network for a particular neuromorphic hardware system (Dynamic Adaptive Neural Network Array, or DANNA) to solve an obstacle-avoiding, environment-exploring robotic navigation task. We utilized a genetic algorithm for training our DANNA networks, in which each node on Titan operated as either a master or a slave. Slave processes trained a subpopulation of networks and intermittently communicated their best result to one of the master processes, which communicated its best result to an overall “super-master” process. We trained networks for this task on 18,000 nodes of Titan for 24 hours. The resulting network is the best network produced to date across all of our training approaches for the robotic navigation task. The network was trained in a simulation environment with obstacles and walls, but the resulting network was deployed on a physical robot in a real environment. Utilizing Titan, we have successfully demonstrated that a network can be customized for a particular hardware platform, and that utilizing HPC resources can not only produce good networks faster, but can also produce better networks than running on a single machine for weeks or months.
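The master/slave scheme described above is essentially an island-model genetic algorithm: each worker evolves its own subpopulation and the best individuals propagate upward. A serial toy sketch of the idea, with a hypothetical fitness function standing in for DANNA network evaluation (on Titan, each island would run on its own node):

```python
import random

random.seed(1)

# Hypothetical stand-in for scoring a network: best genomes have genes near 0.5.
# The real task scores a spiking network on simulated robotic navigation.
def fitness(genome):
    return -sum((g - 0.5) ** 2 for g in genome)

def evolve_island(pop, generations=20):
    """Evolve one subpopulation (one 'slave') with tournament selection + mutation."""
    for _ in range(generations):
        nxt = []
        for _ in range(len(pop)):
            a, b = random.sample(pop, 2)          # binary tournament
            parent = max(a, b, key=fitness)
            child = [g + random.gauss(0, 0.05) for g in parent]  # Gaussian mutation
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

# Three islands of ten random genomes; the "master" keeps the overall best.
islands = [[[random.random() for _ in range(4)] for _ in range(10)]
           for _ in range(3)]
best = max((evolve_island(p) for p in islands), key=fitness)
```

The hierarchical master / super-master layer in the poster adds intermittent exchange of best results between islands, which this serial sketch omits.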


Author Biography

Catherine D. Schuman (Katie) is a Liane Russell Early Career Fellow in the Computational Data Analytics group at Oak Ridge National Laboratory. Katie received her doctorate in computer science in 2015 from the University of Tennessee, where she completed her dissertation on the use of evolutionary algorithms to train spiking neural networks for neuromorphic systems. She is continuing her study of models and algorithms for neuromorphic computing as part of her fellowship at ORNL. Katie has co-authored 20 publications in neuromorphic computing, and presented her work at fourteen conferences and workshops. Katie is also a joint faculty member at the University of Tennessee (UT), where she, along with four professors at UT, leads the TENNLab neuromorphic research team.

Juliette Ugirumurera, Lawrence Berkeley National Laboratory

High Performance Computing in Large-Scale Traffic Engineering Problems

An important problem in transportation engineering is the traffic assignment problem (TAP), which seeks to determine traffic flows on road networks that satisfy some equilibrium conditions. Since solving a large-scale TAP using an optimization algorithm is often slow, high performance computing (HPC) provides the memory and power to speed up the computation, thus allowing more detailed analysis.

We studied the Frank-Wolfe algorithm (FW) commonly used to solve the static TAP, which assumes constant demand rates between origin-destination (O-D) pairs. The FW is an iterative descent method in which each iteration finds a search direction toward a smaller objective function value and takes an optimal step along it. Finding the search direction involves determining shortest paths between all O-D pairs based on travel costs at the current iteration. This computation accounts for more than 95% of the overall execution time. Though there are some works that study parallel shortest-path algorithms, to the best of our knowledge, our work is the first to incorporate a parallel shortest-path algorithm into the FW applied to TAP.

We implemented the parallel FW on the Edison supercomputer at NERSC (nersc.gov). Our initial parallelization duplicated the network on 5 compute nodes and equally divided the O-D pairs among 120 cores (24 cores per node). The 120 cores computed the shortest paths for their assigned O-D pairs simultaneously. We tested this algorithm using the Chicago network, which had 12,982 nodes, 39,018 links and 1,360,427 O-D pairs. The computation time was reduced by a factor of 25 compared to the sequential FW.
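The decomposition described above (dividing O-D pairs among workers, each computing its own shortest paths) can be sketched on a toy network rather than the Chicago data; in the real implementation each chunk would run on its own core:

```python
import heapq

# Toy road network as adjacency lists: node -> [(neighbor, travel_cost)].
graph = {
    0: [(1, 4.0), (2, 1.0)],
    1: [(3, 1.0)],
    2: [(1, 2.0), (3, 5.0)],
    3: [],
}

def dijkstra(src):
    """Shortest travel costs from src to every reachable node."""
    dist = {src: 0.0}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue                      # stale queue entry
        for v, w in graph[u]:
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

# Divide the O-D pairs into equal chunks, one per worker; here the chunks
# are processed in a loop, but each would run simultaneously on its own core.
od_pairs = [(0, 3), (0, 1), (2, 3)]
n_workers = 3
chunks = [od_pairs[i::n_workers] for i in range(n_workers)]
costs = {}
for chunk in chunks:
    for o, d in chunk:
        costs[(o, d)] = dijkstra(o)[d]
```

Because each origin’s search is independent, the chunks share nothing but the (replicated) network, which is what makes the 120-core split effective.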


Author Biography

Juliette Ugirumurera is a postdoctoral research fellow in the Scalable Solvers Group of the Computational Research Division at the Lawrence Berkeley National Laboratory. She is developing parallel algorithms to solve large-scale traffic engineering problems. Juliette completed her PhD in Computer Science at the University of Texas at Dallas in 2016, where she modeled and designed algorithms to solve resource scheduling problems in microgrid power systems. Her research interests include optimization, resource scheduling in complex networks, algorithm design, and the Internet of Things.

Mariam Umar, Virginia Tech

An Automatic Hardware Software Co-design for Exploring Sensitivity Analysis of Applications

Hardware-software co-design is increasingly important as we approach the Exascale era, particularly because of the tremendous increase in complexity, scale, and performance. Numerous solutions have been proposed to maximize application performance using a combination of hardware and software optimization techniques, e.g., memory throttling and DVFS. However, we argue that none of these approaches is generic and effective unless we understand the impact of the hardware on an application’s expected performance combined with the impact of software optimizations. We explore this problem by analyzing the sensitivity of application performance to changes in hardware configurations. We also present an analysis of how to improve application performance by changing the hardware configuration, so our work serves as a guideline for reconfigurable hardware. We base our analysis on automated hardware-software co-design using the Aspen domain specific language. We believe that automated discovery of hardware and software characteristics can help modelers keep pace with architectural changes and the ever-increasing demand for performance with lower energy and resource consumption. We further use this automatic discovery of hardware and software characteristics and parameters to understand the impact they have on each other, and how we can use application behavior analysis to improve performance. We tested our approach on three diverse proxy applications (CoMD, Matrix Multiply and Jacobi) running on a CPU-GPU based heterogeneous architecture. In the future, we plan to explore our approach on other heterogeneous architectures, including disaggregated systems.


Author Biography

Mariam is a final-year Ph.D. student in the Department of Computer Science at Virginia Tech, planning to graduate in early spring 2018. Her research interest lies in exploring and implementing energy and performance models and methodologies for current and future Exascale architectures. She combines analytical, empirical and machine learning modeling techniques with the Aspen domain specific language to understand the impact of performance and energy modeling on hardware-software co-design techniques for current and future architectures. She explores the impact of improving performance through software as well as hardware configuration, both at runtime and beforehand. She also has experience in developing digital-signal-processing methods for embedded systems and models for routing and channel optimization in wireless networks. In the future, she plans to investigate power-aware architectures for exascale systems and to be involved with efforts that meet the goals set by DOE for HPC power consumption.

Deepthi Vaidhynathan, National Renewable Energy Laboratory (NREL)

ACES-Cosim: A Framework to Simulate Advanced Electric Distribution Systems at Scale for Controller Architecture Research

Distributed Energy Resources (DERs), such as photovoltaics (PV) and energy storage systems, have the potential to reduce the cost of electricity and the environmental impact of producing it. But DERs must be connected to the grid in such a way that reliability is not impacted. High-fidelity simulation of the distribution power system, the interacting DERs, their controllers, control architectures and interactions with real hardware is required for a full understanding of these complex systems. ACES-Cosim, the Advanced Computational Energy Systems agent-based modeling and co-simulation framework, is targeted at addressing these problems.

ACES-Cosim hosts the connection and time evolution of controllers, high-fidelity thermal and power models, power system simulators, and hardware using a powerful discrete event simulation paradigm. In pursuing high-fidelity simulation and in using the ACES-Cosim framework to investigate real-time distributed control architectures, the following challenges needed to be addressed: interactive real-time visualization of the running simulation to monitor and debug controllers; asynchronous execution of supervisory controllers and equipment models to make use of multiple cores on a High Performance Computing (HPC) node; multi-node parallelism for large numbers of controllers and equipment to overcome the memory and processing bottlenecks of a single node; and multi-language support for controller and equipment models in Matlab, Python, etc. to enable wider use of the framework by domain experts.

Remaining challenges include augmenting the existing framework to study the control architectures under varying communication network conditions, and scaling the framework to simulate thousands of controllers and devices.


Author Biography

Deepthi Vaidhynathan received her M.S. degree in electrical engineering from the University of Colorado at Boulder in 2015. She works in the Computational Science Center at NREL, in the Complex System Simulation and Optimization group. Her research interests include modeling and simulation of the transmission and distribution grid, software architecture for energy systems integration research, and parallel performance engineering.
