Thu 22, Jun, 2017
9:00 am - 1:00 pm
Basalt Conference Room, Marriott Hotel, Frankfurt
Hamburger Allee 2
Frankfurt, 60486


Diversifying the HPC Community

The sixth international Women in HPC workshop will be held at ISC17, Frankfurt, Germany. This workshop aims to provide leaders, managers, and individual contributors in the HPC community with methods to improve diversity while providing early career women an opportunity to develop their professional skills. Following the overwhelming success of the WHPC workshops in 2016, we will once again explore methods and steps that can be taken to diversify the workforce, discussing the following topics:

  • How to successfully identify and address real and perceived discrimination in the workplace.
  • Current research and open discussion on the roadblocks facing those in underrepresented groups that we may be overlooking.
  • The benefits of mentoring and professional networks, and how to use these networks to learn and advance your career.

We will also offer opportunities aimed at promoting and providing women with the skills to thrive in HPC including:

  • Poster session including ‘lightning’ talks by women working in HPC
  • Speed mentoring
  • Handling conflicts in the workplace and responding to discrimination wisely
  • Short talks with hints and tips on public speaking, taking the next step in your career, and effective workplace communication.


Workshop Speakers, Panelists and Chairs

Chair, Mentor & Speaker: TONI COLLIS

Co-Founder Women in HPC, EPCC at the University of Edinburgh

Toni Collis is an Applications Consultant in HPC Research and Industry, providing consultancy and project management on a range of academic and commercial projects at EPCC, the University of Edinburgh Supercomputing Centre.

Toni has a wide-ranging interest in the use of HPC to improve the productivity of scientific research, in particular developing new HPC tools, improving the scalability of software and introducing new programming models to existing software. Toni is also a consultant for the Software Sustainability Institute and a member of the ARCHER team, providing ARCHER users with support and resources for using the UK national supercomputing service as effectively as possible. In 2013 Toni co-founded Women in HPC as part of her work with ARCHER. Women in HPC has since become an internationally recognised initiative addressing the under-representation of women working in high performance computing.

Toni is Inclusivity Chair and a member of the Executive committee for the SC17 conference. Toni is also a member of the XSEDE Advisory Board and has contributed to the organisation and program of a number of conferences and workshops over the last five years.


CAROLYN COKE REED DEVANY

President of Data Vortex Technologies

Carolyn Coke Reed Devany is the President of Data Vortex Technologies, an Austin, Texas-based HPC company featuring a proprietary network solution for HPC, Big Data Graph Analytics and Neuromorphic Computing. Carolyn works directly with federal and academic HPC customers and the tech investor community, and manages the company’s industry partnerships. During her 20-year tenure in the HPC community, she has noticed, and grown concerned by, the scarcity of women at the senior leadership level. Carolyn has been a member of the Women in HPC Advisory Council since SC15 and is passionate about addressing the lack of diversity in the broad HPC community.


TRISH DAMKROGER

Vice President and General Manager of the Technical Computing Initiative (TCI) in Intel’s Data Center Group

Trish Damkroger is Vice President and General Manager of the Technical Computing Initiative (TCI) in Intel’s Data Center Group. She leads Intel’s global Technical Computing business and is responsible for developing and executing Intel’s strategy, building customer relationships and defining a leading product portfolio for Technical Computing workloads, including emerging areas such as high performance analytics and artificial intelligence. This spans traditional HPC, workstations, processors and co-processors, and all aspects of solutions, including industry-leading compute, storage, network and software products. Ms. Damkroger has more than 27 years of experience in technical and managerial roles in both the private sector and the United States Department of Energy, most recently as Acting Associate Director of Computation at Lawrence Livermore National Laboratory, where she led a 1,000-person organization that is home to some of the world’s leading supercomputing and scientific experts. Since 2006, Ms. Damkroger has been a leader of the annual Supercomputing Conference series, the premier international meeting for high performance computing. She was the SC14 General Chair in New Orleans and has held many other committee positions. She was named one of HPCwire’s People to Watch in 2014. Ms. Damkroger has a master’s degree in electrical engineering from Stanford University.


KELLY GAITHER

Director of Visualization & Senior Research Scientist

Kelly Gaither is the Director of Visualization, a Senior Research Scientist, and the interim Director of Education and Outreach at the Texas Advanced Computing Center (TACC) at The University of Texas at Austin. Dr. Gaither leads the visualization activities while conducting research in scientific visualization, visual analytics and augmented/virtual reality. She received her doctoral degree in Computational Engineering from Mississippi State University in May 2000, and received her master’s and bachelor’s degrees in Computer Science from Texas A&M University in 1992 and 1988 respectively. Gaither has over thirty refereed publications in fields ranging from Computational Mechanics to Supercomputing Applications to Scientific Visualization. She is currently a co-PI and the director of Community Engagement and Enrichment for the Extreme Science and Engineering Discovery Environment (XSEDE2), a $120M project funded by the National Science Foundation. She has given a number of invited talks and keynotes. Over the past ten years, she has actively participated in conferences related to her field, serving as general chair of IEEE Visualization 2004 and general chair of XSEDE16.

Panelist & Mentoring Chair: REBECCA HARTMAN-BAKER

Acting Lead of the User Engagement Group

Rebecca Hartman-Baker is the acting leader of the User Engagement Group at the National Energy Research Scientific Computing Center at Lawrence Berkeley National Laboratory. She is a computational scientist with expertise in the development of scalable parallel algorithms for the petascale and beyond. Her career has taken her to Oak Ridge National Laboratory, where she worked on the R&D 100 Award-winning team developing MADNESS and as a scientific computing liaison in the Oak Ridge leadership computing facility; the Pawsey Supercomputing Centre in Australia, where she coached two teams to the Student Cluster Competition at the annual Supercomputing conference and led the decision-making process for determining the architecture of the petascale supercomputer; and NERSC, where she is responsible for NERSC’s engagement with the user community to increase user productivity via advocacy, support, training, and the provisioning of usable computing environments. Rebecca earned a PhD in Computer Science, with a certificate in Computational Science and Engineering, from the University of Illinois at Urbana-Champaign.


ADRIAN JACKSON

Research Architect

Adrian obtained a degree in Computer Science from The University of Edinburgh before going on to become one of the students in the first year of EPCC’s fledgling MSc in HPC. After completing the MSc he joined EPCC as an Application Consultant and has been there ever since.


ALISON KENNEDY

Director, Hartree Centre, UK

Alison Kennedy is the Director of the STFC Hartree Centre, having joined in March 2016.

The Hartree Centre provides collaborative research, innovation and development services that accelerate the application of HPC, data science, analytics and cognitive techniques, working with both businesses and research partners to gain competitive advantage. Prior to joining Hartree, she worked in a variety of managerial and technical HPC roles at EPCC for more than 23 years.

Session & Panel Chair: KIM MCMAHON

CEO and Co-Founder, Xand McMahon

As the President and CEO of McMahon Consulting and the CEO and co-founder of Xand McMahon, Kim brings a wealth of knowledge and experience to her customers. She has worked in HPC and high-end IT since 1999 with companies such as SGI, Cray, and LSI/NetApp, in roles that include leading and executing go-to-market strategy for business units, managing partner relationships, co-marketing, product management, and sales. Kim shapes the strategy and offerings for McMahon Consulting and Xand McMahon while also leading the team on project execution. Working with clients all over the world and across the many technologies used in HPC and HPC-like vertical markets, her experience translates easily across borders, helping companies bring their products to market in the US, whether launching new products or expanding into a new territory.

Kim is on the Executive Board of Women in HPC, where she leads the marketing and messaging of the organization, is the Communications lead for the SC17 Inclusivity Committee, manages the social media for HPC Advisory Council, and volunteers with various foundations in the Colorado area to assist with their marketing activities. Kim is a graduate of the University of Northern Colorado with a Bachelor of Science in Accounting / Business Administration.


KATHRYN MOHROR

Lawrence Livermore National Laboratory

Kathryn Mohror has been a computer scientist on the Scalability Team at the Center for Applied Scientific Computing at Lawrence Livermore National Laboratory (LLNL) since 2010. Kathryn’s research on high-end computing systems is currently focused on scalable fault-tolerant computing and I/O for extreme-scale systems. Her other research interests include scalable performance analysis and tuning and parallel programming paradigms, and she leads the Tools Working Group for the MPI Forum. Kathryn received her Ph.D. in Computer Science in 2010, an M.S. in Computer Science in 2004, and a B.S. in Chemistry in 1999 from Portland State University (PSU) in Portland, OR.


MISBAH MUBARAK

Argonne National Laboratory

Misbah Mubarak is a postdoctoral researcher at Argonne National Laboratory. At Argonne, she is part of the data-intensive science group that works to enable researchers to use their big data in new scientific advances. She is the recipient of a U.S. Fulbright scholarship and the ACM SIGSIM PADS PhD colloquium award, and was a finalist for the Google Anita Borg scholarship. Misbah received her PhD and master’s degrees in computer science from Rensselaer Polytechnic Institute (RPI) in 2015 and 2011 respectively. She also has experience working at CERN, Switzerland and Teradata Corporation.

Misbah has authored or co-authored over 25 papers in the area of performance modeling and analysis for high-performance computing. She has served as a peer reviewer for a number of journals, including ACM Transactions on Modeling and Computer Simulation and ACM Transactions on Parallel Computing, as well as IEEE computing journals. She served on the technical program committee of the ACM SIGSIM PADS conference in 2016 and 2017, as publicity chair for ACM SIGSIM PADS in 2017, and as an organizing committee member of the Women in HPC workshop at Supercomputing (SC) 2016.


JESSICA POPP

General Manager

Jessica Popp is the General Manager for the Infinite Memory Engine (IME) division at DataDirect Networks (DDN). Her division is responsible for the development and support of the IME Burst Buffer product for HPC. Prior to joining DDN, Jessica was the Director of Engineering for the High Performance Data Division at Intel, responsible for the development and support of the Lustre parallel file system. Jessica’s career, spanning more than 20 years, has been focused on software development. She started as an application programmer in industry, moved to data warehousing before big data was even a ‘thing’, and then found herself in HPC ten years ago, somewhat accidentally, as she followed her passion for leading software teams. Having spent much of her career as the only woman, or one of very few women, on a team, she is excited to see, and be a champion for, the increased focus on diversity in engineering.


The WHPC workshop at ISC17 would not be possible without a dedicated team of volunteers.


  • Workshop Chair: Toni Collis, EPCC, UK and Women in HPC Network, UK
  • Poster Chair: Misbah Mubarak, Argonne National Laboratory, USA
  • Mentoring Chair: Rebecca Hartman-Baker, NERSC, USA
  • Publicity Chair: Kimberly McMahon, Xand McMahon, USA

Steering and Organisation Committee Members

  • Julia Andrys, Murdoch University, Western Australia
  • Sunita Chandrasekaran, University of Delaware, USA
  • Trish Damkroger, Intel, USA
  • Rebecca Hartman-Baker, NERSC, USA
  • Daniel Holmes, EPCC, UK
  • Adrian Jackson, EPCC, UK
  • Jessica Nettelblad, Uppsala University, Sweden
  • Kimberly McMahon, Xand McMahon, USA
  • Lorna Rivera, University of Illinois, USA

Programme Committee

Posters Chair: Misbah Mubarak (Argonne National Laboratory)

  • Sunita Chandrasekaran, University of Delaware, USA
  • Rebecca Hartman-Baker, NERSC, USA
  • Daniel Holmes, EPCC, UK
  • Adrian Jackson, EPCC, UK
  • Alison Kennedy, Hartree Centre, STFC, UK
  • Lorna Rivera, University of Illinois, USA
  • Jesmin Jahan Tithi, Parallel Computing Lab, Intel Corporation, USA



Lawrence Livermore National Laboratory

Elsa is an application I/O specialist and systems software developer within the Livermore Computing supercomputing center at Lawrence Livermore National Laboratory. She received a Ph.D. in Computer Science from Rensselaer Polytechnic Institute, Troy, NY. Her research interests include software tools for understanding application performance throughout the I/O stack, application checkpointing, and parallel discrete-event simulation.


SUNITA CHANDRASEKARAN

Center for Bioinformatics and Computational Biology, University of Delaware

Sunita is an Assistant Professor at the University of Delaware, USA. Her research spans HPC, exploring parallel algorithms, creating language extensions for parallel programming models, exploring energy-efficient computation and migrating legacy code to heterogeneous platforms. She graduated with a PhD from Nanyang Technological University (NTU), Singapore, where she built a high-level software toolchain for targeting FPGA devices.



LAVANYA RAMAKRISHNAN

Lawrence Berkeley National Lab

Lavanya Ramakrishnan is a staff scientist at Lawrence Berkeley National Lab. Her research interests are in software tools for computational and data-intensive science and span workflow, resource and data management. In recent years, her group has been using user research techniques in both research and development projects. Ramakrishnan has previously served as group lead for the Usable Software Systems group at LBNL and worked as a research staff member at the Renaissance Computing Institute and MCNC in North Carolina. She has master’s and doctoral degrees in Computer Science from Indiana University and a bachelor’s degree in computer engineering from VJTI, University of Mumbai. She joined LBL as an Alvarez Postdoctoral Fellow in 2009.



JOANNA LENG

University of Leeds

Joanna Leng has worked in HPC for over 20 years. She has helped to deliver UK academic HPC services ranging from national flagship services to regional and local ones. Her work has often involved research and she has moved between HPC services and academic research departments; she has edited a book and published a range of papers. She has an MSc and a PhD in computational science focused on visualization. Her special interests include encouraging new research areas to adopt HPC, encouraging the more intelligent use of visualization, improving innovation through cross-disciplinary collaboration, and communicating science to the public. She is one of the key organisers of an annual Science, Arts and Maker Festival.

Call for Posters: Closed

Call for posters: deadline for submissions extended to 7 May 2017 (anywhere on Earth)

As part of the workshop we invite women in industry and academia to present their work as a poster in a supportive environment that promotes the engagement of women in HPC research and applications, providing opportunities for peer-to-peer networking and interaction with female role models and employers.

Submissions are invited on all topics relating to HPC from users and developers. All abstracts should emphasise the computational aspects of the work, such as the facilities used, the challenges that HPC can help address, and any remaining challenges.

As a poster author you will have the opportunity to share your work with the workshop audience in a brief ‘elevator pitch’ talk. This is followed by a networking session where attendees will have the opportunity to view your poster and discuss your work.

Instructions for authors

Please provide your submission via the EasyChair submission system for the Women in HPC ISC17 Workshop.

Please prepare the following as a single PDF to be uploaded as the paper for the submission:

  • Full name of main (presenting) author;
  • Names of any other authors;
  • Current institution of all authors;
  • Abstract (up to 250 words);
  • Short biography of main (presenting) author (150 words);
  • Photograph of main (presenting) author for website publicity.

If you have questions please contact info@womeninhpc.org.

Poster Presenters



Cristin Merritt has focused her career on software adoption in new frontiers: first taking on SAP enterprise implementations in the US, UK, and Middle East, and then convincing accountants to tag up tax returns for HMRC using a cloud tool. She now works in HPC public cloud adoption, working hand-in-hand with a client base focused on testing the boundaries of HPC and looking for new ways to exploit public cloud for fast, affordable usage. Cristin earned her B.A. in Classics from the University of Florida in 2001, enjoys crochet, and foolishly signs up for and runs marathons every now and again.

Abstract: In defense of public cloud: Achieving on-demand accessibility in High Performance Computing (HPC) in single-user, ephemeral projects, through the Alces Gridware Project

Ensuring the end user has access to fast, flexible and collaborative High Performance Computing (HPC) is a chief concern for those racing to produce results in the highly competitive fields of science and engineering. With the advent of public cloud, time to acquisition has shortened dramatically, down to a matter of days if not hours depending on project size. Through the Open Source Alces Gridware Project we created Alces Flight, a product that provides a fully featured, scalable HPC environment for research and scientific computing. Our intent was to create a free tier, aimed at the single user, in the hope of learning how an individual researcher would approach and consume on-demand HPC resources. In the past ten months leaders in commercial manufacturing, cancer treatment, genomics, UAV consulting, and research universities have worked with Alces to create use cases for HPC in the public cloud. Our initial results show:

    • Acquisition of public cloud over traditional HPC server time is startlingly quick, with a current average time of a day and a half, including overview training on the platform.
    • Ephemeral (temporary) workloads that are embarrassingly parallel work best with auto-scaling HPC clusters, in one project case saving 64% on operational costs.

    • Alces Flight Compute, Solo Community Edition, yields an 80% reduction in time spent manually tuning the application to the platform.



Marisol Monterrubio-Velasco studied Physics at the National Autonomous University of Mexico (UNAM). In her final project, she studied and characterized the fractal growth of the largest sub-aquatic cave systems in the world, using the physical model of Diffusion-Limited Aggregation to simulate the growth processes. In 2007 she started her postgraduate studies at the Polytechnic University of Catalonia within the research program in Computational and Applied Physics. She received her PhD from the Polytechnic University of Catalonia (Spain) in 2013, including a Cum Laude award for her research. From 2014 to 2016 she was a postdoctoral researcher at the Geosciences Center, UNAM. At present she holds a postdoc fellowship at the Barcelona Supercomputing Center, funded by the Mexican Council for Science and Technology. Her research interests focus on computational physics, complex networks, statistical analysis and numerical simulation applied to earthquakes.

Abstract: Earthquakes simulation by using the Fiber Bundle model and Machine learning techniques

Earthquakes are the result of rupture processes in the Earth’s crust produced by a sudden energy release of long-term accumulated stresses. Although we have knowledge about the occurrence of certain major earthquakes, our observational span is too short to draw strong (predictive) conclusions about when, where and how big the next earthquake will be. Computational physics offers alternative ways to study the rupture process in the Earth’s crust by generating synthetic seismic data using physical and statistical models. The Fiber Bundle model (FBM), which describes the complex rupture processes in heterogeneous materials across a wide range of phenomena, has shown the ability to generate data that depicts the main characteristics of real seismicity [Moreno et al., 2001; Monterrubio et al., 2015]. In order to obtain statistical significance and an appropriate parametric study, a large number of simulations is required. High-performance computing (HPC) combined with Machine Learning (ML) techniques provides a good base for performing and improving the simulations, the data management process and the data analysis. This study includes the analysis of the relationship between the initial parameters of the simulations and the characteristics exhibited by the resulting aftershock simulations, such as the magnitude-frequency relation [Gutenberg and Richter, 1954], temporal decay behavior [Utsu, 1995] and spatial distribution [King et al., 1994]. Pattern recognition techniques are used to identify patterns in different scenarios. Lastly, classification of seismic events is accomplished through unsupervised clustering methods.
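The fiber bundle mechanics behind this abstract can be illustrated with a minimal equal-load-sharing variant. This sketch is not the authors' code: the uniform thresholds, bundle size, and quasi-static record construction are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)

def fbm_avalanches(n_fibers=100_000):
    """Quasi-static equal-load-sharing fiber bundle model (illustrative).

    Fibers get i.i.d. uniform failure thresholds x.  After sorting, the
    external force needed to break the k-th weakest fiber while the load
    is shared equally among the (n - k) survivors is f_k = x_k * (n - k).
    Driving the force up quasi-statically, one avalanche (burst of fiber
    failures) spans the fibers between two successive record values of f_k.
    """
    x = np.sort(rng.uniform(0.0, 1.0, n_fibers))
    f = x * (n_fibers - np.arange(n_fibers))
    records = np.flatnonzero(f >= np.maximum.accumulate(f))  # record positions
    return np.diff(records)  # avalanche sizes (fibers broken per burst)

sizes = fbm_avalanches()
```

In this equal-load-sharing limit the avalanche-size distribution follows a well-known power law, the model's analogue of a magnitude-frequency relation for synthetic seismicity.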



Ruth Schöbel has been a PhD student at Jülich Supercomputing Centre for one year. Together with the Technical University of Dresden, Germany, she is working in the field of parallel-in-time integration methods for partial differential equations. As a member of the German project ParaPhase she is developing space-time parallel adaptive algorithms for phase-field models on HPC architectures; her personal focus is the innovative parallelization in the temporal direction. She already holds a master’s degree in Mathematics from RWTH Aachen, and is also trained as a mathematical-technical software developer by Forschungszentrum Jülich. She is specialized in numerical mathematics and has developed a particular interest in semi-implicit time-stepping methods for differential equations.

Abstract: Parallel-in-Time Integration with PFASST

The efficient use of modern high performance computing systems for solving space-time-dependent differential equations has become one of the key challenges in computational science. Exploiting the exponentially growing number of processors using traditional techniques for spatial parallelism becomes problematic when, for example, communication costs begin to dominate for a fixed problem size, or when increased spatial resolution requires more time-steps due to stability constraints. Parallel-in-time integration (“PinT”) methods have recently been shown to provide a promising way to extend these prevailing scaling limits.

One promising PinT algorithm, the “parallel full approximation scheme in space and time” (PFASST), is an iterative, multilevel strategy for parallelization in the temporal dimension. For our experiments, we develop a software module called dune-PFASST, a C++ implementation using the “Distributed and Unified Numerics Environment” (DUNE), a framework for solving PDEs with grid-based methods. Using this software we will report on our experience with PFASST in time together with finite elements in space, and show first results. This work is conducted within the German project “ParaPhase”, whose final goal is the development of a highly scalable space-time parallel adaptive algorithm for the simulation of phase-field models. The open-source software dune-PFASST also contains various flavors of spectral deferred correction methods and will be made available online soon.
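The serial building block that methods like PFASST iterate on can be sketched independently of dune-PFASST. The toy below is our illustration, not the poster's code: it solves the collocation problem for the scalar test equation y' = −y by plain fixed-point ("Picard") sweeps; spectral deferred corrections, and PFASST on top of them, accelerate and parallelize exactly this kind of node-wise sweep.

```python
import numpy as np

def lagrange_basis(j, nodes):
    """Coefficients of the j-th Lagrange basis polynomial over `nodes`."""
    p = np.poly1d([1.0])
    for i, t in enumerate(nodes):
        if i != j:
            p *= np.poly1d([1.0, -t]) / (nodes[j] - t)
    return p

def collocation_matrix(nodes):
    """Q[m, j] = integral of the j-th basis polynomial from 0 to nodes[m]."""
    m_nodes = len(nodes)
    Q = np.zeros((m_nodes, m_nodes))
    for j in range(m_nodes):
        P = lagrange_basis(j, nodes).integ()   # antiderivative
        Q[:, j] = [P(t) - P(0.0) for t in nodes]
    return Q

def collocation_step(f, y0, dt, nodes, n_sweeps=20):
    """Solve y_m = y0 + dt * sum_j Q[m, j] f(y_j) by fixed-point sweeps.

    Each sweep is the cheap 'Picard' variant of an SDC sweep; PFASST-style
    methods accelerate this iteration and run it in parallel across time.
    """
    Q = collocation_matrix(nodes)
    y = np.full(len(nodes), y0, dtype=float)   # initial guess: constant
    for _ in range(n_sweeps):
        y = y0 + dt * (Q @ f(y))
    return y[-1]                               # nodes[-1] == 1 -> value at t0 + dt

# Dahlquist test problem y' = -y, one step of size 0.1:
y_end = collocation_step(lambda y: -y, 1.0, 0.1, np.linspace(0.0, 1.0, 5))
```

With five equispaced nodes and twenty sweeps, one step of size 0.1 reproduces exp(−0.1) to well below 1e-6; real SDC replaces the pure Picard update with a low-order corrector so that far fewer sweeps are needed.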



Martina graduated in 2010 in statistics at the University of Vienna. She then decided to pursue a master’s degree in Technical Mathematics at the University of Innsbruck, which she received in 2013. Her master’s thesis was in the field of numerical analysis, which fostered her interest in the development of numerical methods as well as their implementation.

An internship at Lawrence Livermore National Laboratory the following summer introduced her to high performance computing. Since 2014, her PhD thesis has been funded by the Vienna Scientific Cluster (VSC) school, which gave her the opportunity to work on various projects focused on implementing numerical methods on HPC hardware. In addition, Martina Prugger spent a year in New Orleans working with Prof. Kurganov on numerical methods and has an active collaboration with the high performance computing group at SIMULA in Oslo.

Abstract: Numerical methods and their implementation on HPC hardware

Performing a numerical simulation on a supercomputer requires both an efficient parallelization as well as a good numerical algorithm. In this work we consider both of these aspects in the context of a fluid dynamics code.

More specifically, we introduce a numerical second order method to solve the Euler equations of gas dynamics in two spatial dimensions. This approach does not rely on dimension splitting and is consequently able to better resolve the numerical solution.

Furthermore, we investigate the feasibility of the programming paradigm Unified Parallel C (UPC), a partitioned global address space (PGAS) extension to C, from a code-development point of view. Due to the shared memory model of PGAS languages, parallelizing a code is easier than with MPI. However, it is not clear how ease of development can be reconciled with performance. We present a UPC implementation of a Godunov solver as a compute-bound real-life problem and compare various optimized stages of the parallelization to an MPI implementation of the same sequential code. Furthermore, we introduce a sparse matrix-vector multiplication on highly unstructured data taken from a simulation of the human heart, which is a memory-bound problem.
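As a small illustration of why the second kernel is memory-bound (the abstract does not show the heart-simulation code itself), here is a compressed-sparse-row matrix-vector product in Python; every stored entry is loaded once and used in a single multiply-add, so memory bandwidth, not arithmetic, limits throughput:

```python
import numpy as np

def csr_matvec(data, indices, indptr, x):
    """y = A @ x for A stored in compressed sparse row (CSR) form.

    Each stored entry contributes one multiply-add per load, i.e. O(1)
    flops per byte moved; on irregular data (like an unstructured heart
    mesh) the gathers x[indices[...]] also defeat caching, which is what
    makes this kernel a good stress test for a parallel programming model.
    """
    n_rows = len(indptr) - 1
    y = np.zeros(n_rows)
    for row in range(n_rows):
        lo, hi = indptr[row], indptr[row + 1]
        y[row] = data[lo:hi] @ x[indices[lo:hi]]
    return y

# A = [[1, 0, 2],
#      [0, 3, 0]]
data    = np.array([1.0, 2.0, 3.0])
indices = np.array([0, 2, 1])
indptr  = np.array([0, 2, 3])
y = csr_matvec(data, indices, indptr, np.array([1.0, 1.0, 1.0]))
```

In a PGAS language like UPC, `x` would live in the global address space so each thread can gather remote entries directly, whereas an MPI version must exchange halo values explicitly.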



Carla Osthoff is a researcher at the National Laboratory for Scientific Computing (LNCC, Brazil). She received her B.S. in Electronics Engineering from PUC-RJ, and an M.S. and a Ph.D. in Computer Science from COPPE/UFRJ. She has been working in high performance computing since 1985, first on parallel multiprocessor hardware projects, followed by distributed shared memory systems development. Currently, she works on high performance computing research as a professor in the National Laboratory of Scientific Computing Multidisciplinary Postgraduate Program and coordinates the National Center for High Performance Processing (CENAPAD) at LNCC. Her current topics of interest are distributed systems, high performance computing, parallel processing, programming models and scientific computing.

Abstract: K-mer frequency counting software based on a hybrid GPU cluster environment

Although metagenomics is a new area of science, in recent years there has been an explosion in computational methods applied to metagenomes. The development of sequencing technology has exponentially increased the amount of data from genetic samples and, consequently, analysis complexity. Most tools used to compress sequencing data employ k-mer algorithms, which collect all possible length-k combinations from a data sequence. Due to their combinatorial nature, k-mer algorithms demand a lot of processing power. This work presents the development and performance analysis of a deterministic k-mer counting algorithm, CFRK, for a GPGPU-based computer server architecture, which can be considered a low-cost, high-performance computing platform. After validation tests, we demonstrate that CFRK is more efficient than the state-of-the-art k-mer algorithm Jellyfish, based on a multicore shared-memory computing platform, for k values below 4. We also present the development of a CFRK extension for hybrid GPU cluster environments, MCFRK, based on the MPI library, which improves CFRK’s performance and capacity to process larger files and delivers better performance than Jellyfish for the k-mer values most used in metagenome analysis.
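CFRK and MCFRK target GPUs and MPI clusters; the core operation they accelerate, counting all length-k substrings of a sequence, can be sketched serially in a few lines of Python (an illustration of the problem, not the poster's implementation):

```python
from collections import Counter

def count_kmers(seq, k):
    """Count every length-k substring (k-mer) of a sequence.

    A sequence of length n has n - k + 1 overlapping k-mers, so the
    work and the table size grow quickly with the data volume, which
    is why GPU and cluster versions are worthwhile.
    """
    return Counter(seq[i:i + k] for i in range(len(seq) - k + 1))

counts = count_kmers("ACGTACGT", 3)
```

A parallel version splits the sequence (with k − 1 characters of overlap at each boundary) across workers and merges the partial counts; for small k over the DNA alphabet, the counts also fit a dense array of 4^k counters, which suits GPU memory layouts.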



Radita is a first-year master’s student in Computer Science at the University of Tartu, Estonia. She received her bachelor’s degree in Information Systems from Duta Wacana Christian University, Indonesia. She is currently pursuing a High Performance Computing specialization and is interested in scalable web applications.

Abstract: Parallel Cuckoo Hashing Implementation using Node JS Cluster Module

In recent years, Node.js server has gained increasing popularity as a web server due to its configuration simplicity and capability to handle high traffic web applications. A single instance of Node.js runs in a single thread but it is also capable of taking advantage of multi-core systems by creating child processes using Node JS Cluster module.

In this work we demonstrate this parallel capability of the Node JS Cluster module by building hash tables using Parallel Cuckoo Hashing algorithm. Hash table creation in web applications, so far, has been a common way to store data from the database query or from other sources. Hash table data structure enables easier and faster data access by the web application. Usually, Cuckoo hashing is used to resolve collision from hash table insertion by evicting conflicting values and moving them to other hash tables. It’s very easy to implement and quite efficient in practice.

Results produced by this implementation show that a hash table built using Parallel Cuckoo Hashing can be constructed around 0.28–1.64 times faster than its serial implementation, depending on the size of the data inserted: the bigger the data size, the more effective it gets. Its performance is also affected by the number of cores on the system; the effective number of child processes that can be created follows the number of cores.
Project Repository: https://github.com/raymerta/parallel-nodejs-benchmark
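The poster's implementation is JavaScript (see the repository above); the eviction mechanism at the heart of cuckoo hashing can be sketched serially as follows. This is our schematic illustration, with the table size, hash functions and kick limit chosen arbitrarily:

```python
import random

class CuckooHash:
    """Schematic two-table cuckoo hash (serial; no resizing, no updates)."""

    def __init__(self, size=11, max_kicks=50):
        self.size = size
        self.max_kicks = max_kicks
        self.tables = [[None] * size, [None] * size]

    def _slot(self, i, key):
        # two independent hash functions, one per table
        h = hash(key) if i == 0 else hash((key, i))
        return h % self.size

    def insert(self, key, value):
        entry = (key, value)
        for _ in range(self.max_kicks):
            for i in (0, 1):
                s = self._slot(i, entry[0])
                if self.tables[i][s] is None:
                    self.tables[i][s] = entry
                    return
            # both candidate slots occupied: evict one occupant ("kick the
            # cuckoo's egg out") and retry the insertion with the evictee
            i = random.randrange(2)
            s = self._slot(i, entry[0])
            entry, self.tables[i][s] = self.tables[i][s], entry
        raise RuntimeError("too many evictions; a real table would rehash")

    def get(self, key):
        for i in (0, 1):
            item = self.tables[i][self._slot(i, key)]
            if item is not None and item[0] == key:
                return item[1]
        return None

table = CuckooHash()
for n, word in enumerate(["alpha", "beta", "gamma", "delta"]):
    table.insert(word, n)
```

A lookup probes at most two slots, which is what gives cuckoo hashing its worst-case O(1) reads; the poster parallelizes table construction across Node.js Cluster worker processes.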



Pilar Gomez is a Research Fellow at the Universidad Autonoma de Barcelona (UAB), where she is doing a PhD in Computer Science focused on parallel I/O for high performance computing systems. She has co-authored four fully reviewed technical papers in journals and conference proceedings. She has been a part-time professor at the UAB since 2007 and worked for seven years as a software engineer for several companies.

Abstract: Cloud platform as Test-Bed System for analyzing the scientific parallel applications

Currently, the use of public cloud platforms as infrastructure has been gaining popularity in many scientific areas and HPC is no exception. Unlike on traditional HPC platforms, on a virtual cluster users are their own administrators, making it easy to change the I/O system configuration. We present a methodology to use virtual HPC systems, deployed on a cloud platform, as a test-bed system for evaluating and detecting performance inefficiencies in the I/O subsystem, and for making decisions about the configuration parameters that influence the performance of an application, without compromising the performance of the production HPC system. The parameters of our I/O behavior models PIOM-MP and PIOM-PX are applied to obtain the I/O kernel of the parallel application. The I/O kernel is replicated by using the IOR benchmark and executed in virtual HPC clusters to evaluate the I/O time and the bandwidth under different I/O system configurations. To deploy the virtual I/O subsystem, we have developed a plug-in for the StarCluster tool, which allows us to deploy the PVFS2 parallel file system quickly and automatically. Our experimental validation indicates that virtual HPC clusters are a quick and easy solution for system administrators, for analyzing the impact of the I/O system on the I/O kernels of parallel applications and for making performance decisions in a controlled environment.
