Overview

The first Young Architect Workshop (YArch ’19, pronounced “why arch”) will provide a forum for junior graduate students studying computer architecture and related fields to present early-stage or ongoing work and receive constructive feedback from experts in the field as well as from their peers. Students will also receive mentoring opportunities in the form of keynote talks, a panel discussion geared toward grooming young architects, and one-on-one meetings with established architects.

Students will receive feedback from experts both on their research topic in general and, more specifically, on their research directions. Students will also have the opportunity to receive valuable career advice from leaders in the field, network with their peers, and develop long-lasting, community-wide relationships.

Mechanics

The central theme of this workshop is to serve as a welcoming venue for junior graduate students to present their ongoing work and receive feedback from experts within the community. The workshop also aims to help early-stage graduate students build connections both with their peers and with established architects in the community. To this end, YArch will include:

Route to Top-tier: Each submitted work will receive two or more expert reviews. These reviews aim to give early guidance on what the submitted work needs in order to become a successful top-tier conference paper.

Meet an Architect: As part of the workshop, attendees will be paired with experts in their chosen research area to get feedback on their ongoing work and future research directions.

Becoming an Architect: The workshop will include keynote talks from academic and industry leaders specifically geared toward early-stage graduate students.

Ask an Architect: The workshop will include a panel of established architects in industry and academia from whom students can seek career advice.

Program

A mobile video of the entire workshop can be found here.

Photos from the workshop can be found here.

1:00 – 1:15pm | Opening remarks

1:15 – 2:00pm | Keynote – 1

Research — To Boldly Go Where No One Has Gone Before
Srilatha (Bobbie) Manne (Microsoft)

2:00 – 2:35pm | Student presentations – 1

RSVP: A hybrid model of Register Sharing and Value Prediction
Kleovoulos Kalaitzidis (INRIA)

MLC STT-RAM for Deep Neural Networks Accelerator
Masoomeh Sadat Jasemi (Univ. of California, Irvine)

A Study of Perfect Cache Management
Gino Chacon (Texas A&M Univ.)

Pooneh Compression: A Simple Last Level Cache Compression for CMPs
Pooneh Safayenikoo (Univ. of Missouri)

Rethinking Resource Disaggregation
Muhammad Talha Imran (Penn State Univ.)

2:35 – 2:45pm | Break

2:45 – 3:30pm | Keynote – 2

The “Job Talk” Talk: Preparing for the Faculty Interview Process
Thomas Wenisch (Univ. of Michigan)

3:30 – 4:05pm | Student presentations – 2

Delivering Correct and Fast Persistency Guarantees
Sara Mahdizadeh Shahri (Penn State Univ.)

Exploring GPU Architectural Optimizations for RNNs
Suchita Pati (Univ. of Wisconsin)

Enhancing Programmable Accelerators for Sparsity
Vidushi Dadu (Univ. of California, Los Angeles)

Collaborative Parallelization Framework
Ziyang Xu and Greg Chan (Princeton Univ.)

Data-Aware Reconfigurable DNN Accelerator
Pedram Zamirai and Armand Behroozi (Univ. of Michigan)

4:05 – 5:00pm | Panel

Demystifying grad school
Rajeev Balasubramonian (Univ. of Utah), Nuwan Jayasena (AMD Research), David Nellans (NVIDIA Research), Carole-Jean Wu (Facebook / Arizona State Univ.), and Samira Khan (Univ. of Virginia)

5:00 – 5:05pm | Closing remarks

6:30 – 7:30pm | Student poster session (co-located with conference reception)

Keynotes

Keynote-1: Research — To Boldly Go Where No One Has Gone Before

Speaker: Srilatha (Bobbie) Manne

Abstract: The road to success in grad school is murky at best. There is no one clear path to follow, no one formula for success. However, there are steps one can take to make forward progress, learn along the way, and find the topic or passion that will guide you to graduation. In this talk, I will cover some of the basic tenets of computer architecture research: how to find a good research topic, how to hone your skills and take best advantage of the resources available, and how to sustain that passion through your PhD and beyond. There is no rule book for surviving grad school, but hopefully this talk will provide some tools you can use along the way.

Bio: Srilatha (Bobbie) Manne received her PhD from the University of Colorado, Boulder in 1999. Her thesis focused on high-performance, low-power processor design. Over the past two decades, she has worked in both industrial research labs and product teams at Compaq, Intel, AMD, and Cavium. Most recently, she moved to the AI and Advanced Architectures team at Microsoft. Her work has focused on all aspects of processor design, from performance to power to reliability. She has over three dozen patents granted or pending, and is the General Chair of ISCA 2019. Srilatha lives in Seattle with her husband of 20 years and two children.

Keynote-2: The “Job Talk” Talk: Preparing for the Faculty Interview Process

Speaker: Prof. Thomas F. Wenisch

Abstract: Interviewing for a faculty position can be a mysterious and intimidating process. In this talk, I tell you what to expect and how to prepare for interviews at top academic institutions. Furthermore, I discuss how to structure and prepare your “Job Talk,” the pivotal component of a successful interview. This talk is targeted at anyone who is considering a career as an academic, whether a first year graduate student or a PhD candidate planning to interview this year.

Bio: Thomas F. Wenisch is an Associate Professor and Associate Chair for External Affairs of Computer Science and Engineering at the University of Michigan. He received his PhD in electrical and computer engineering from Carnegie Mellon University in 2007 and joined the faculty at Michigan that year. His research is focused on computer architecture with particular emphasis on server and data center systems, memory persistency, multiprocessor and multicore systems, performance evaluation methodology, and medical imaging.

Panel

Panelist: Rajeev Balasubramonian

Affiliation: Professor, University of Utah

Interests: Memory systems, security, application-specific architectures for genomics and machine learning

Bio: Rajeev Balasubramonian is a Professor at the School of Computing, University of Utah. He received his B.Tech in Computer Science and Engineering from the Indian Institute of Technology, Bombay in 1998. He received his MS (2000) and Ph.D. (2003) degrees from the University of Rochester. His primary research interests include memory systems, security, and application-specific architectures. Prof. Balasubramonian is a recipient of a US National Science Foundation CAREER award, an IBM Faculty Partnership award, an HP Innovation Research Program award, an Intel Outstanding Research Award, various teaching awards at the University of Utah, and multiple best paper awards.

Panelist: Nuwan Jayasena

Affiliation: Principal MTS, AMD Research

Interests: Memory hierarchy, processing in memory, heterogeneous computing, accelerators, run-time systems, emerging applications, data structures

Bio: Nuwan joined AMD Research in 2009 and has led research efforts on various topics including data movement reduction, processing in memory, multi-level memory, and heterogeneous computing. Prior to joining AMD, Nuwan was an architect at Stream Processors, Inc. and at Nvidia. Nuwan holds a Ph.D. and an M.S. from Stanford University and a B.S. from the University of Southern California. He is happiest when he can’t tell whether he’s working on hardware or software.

Panelist: David Nellans

Affiliation: Manager, NVIDIA Research

Interests: Computer architecture, operating systems, data center scaling and efficiency

Bio: David Nellans currently manages the System Architecture Research group at NVIDIA where he has been for the last 6 years. His team provides co-designed SW and HW solutions to enable scalable GPU performance and energy efficiency in a post-Moore’s world. By definition, system architecture and design spans multiple sub-specialties and his team looks for novel solutions that can vary from circuits to software. Prior to NVIDIA, Dave was an early technical manager at Fusion-IO helping the company grow from 50 to 1000 employees, while successfully deploying the first PCIe attached NAND-Flash systems within Facebook and Apple data centers before going public in 2011.


Panelist: Carole-Jean Wu

Affiliation: Research Scientist, Facebook / Associate Professor, ASU

Interests: Software and system stack design and optimization for domain-specific workloads, high-performance and energy-efficient heterogeneous architecture, performance quality modeling and energy efficiency optimization for mobile, energy harvesting for emerging computing devices

Bio: Carole-Jean Wu is an Associate Professor in Computer Science and Engineering at Arizona State University. She is spending her sabbatical as a research scientist with Facebook’s AI Infrastructure Research. Her research interests include high-performance and energy-efficient computer architecture through hardware heterogeneity, energy harvesting techniques for emerging computing devices, temperature and energy management for portable electronics, and memory subsystem designs. More recently, her research has pivoted into designing systems for machine learning. Carole-Jean is the Program Chair of the 2018 IISWC. She also co-chairs the MLPerf Inference WG. She received her M.A. and Ph.D. degrees in Electrical Engineering from Princeton University and completed a B.Sc. degree in Electrical and Computer Engineering from Cornell University.

Panel Moderator: Samira Khan

Affiliation: Assistant Professor, University of Virginia

Interests: Emerging technology, architecture and system support

Bio: Samira Khan is an Assistant Professor at the University of Virginia (UVa), where she leads research on building cross-layer system support for emerging technologies. Prior to joining UVa, she was a Post Doctoral Researcher at Carnegie Mellon University, funded by Intel Labs. She received her PhD from the University of Texas at San Antonio. During her graduate studies, she worked at Intel, AMD, and EPFL.

Meet the student presenters

Name: Armand Behroozi

Affiliation: Univ. of Michigan | Advisor: Scott Mahlke

YArch Presentation: Data-Aware Reconfigurable DNN Accelerator

Bio: Armand did his bachelor’s at UT Austin and is now a first-year PhD student. His research interests are in reconfigurable computing for machine learning and graph applications.

Name: Gino A. Chacon

Affiliation: Texas A&M Univ. | Advisor: Paul A. Gratz

YArch Presentation: A Study of Perfect Cache Management

Bio: Gino Chacon is a 3rd-year PhD student in computer engineering at Texas A&M University in his home state of Texas. He is part of the Computer Architecture, Memory Systems, and Interconnection Networks (CAMSIN) research group, where he explores optimizations for the memory hierarchy. His research interests are mainly in speculative cache management, simulation of modern memory systems, and machine learning applied to computer architecture.

Name: Greg Chan

Affiliation: Princeton Univ. | Advisor: David August

YArch Presentation: Collaborative Parallelization Framework

Bio: I am a first-year Ph.D. student working with Professor David I. August on auto-parallelizing compilers. I’m interested in compiler performance as well as computer architecture. I attended Northeastern University for my undergraduate degree, where I did some work on embedded systems.

Name: Vidushi Dadu

Affiliation: Univ. of California, Los Angeles | Advisor: Tony Nowatzki

YArch Presentation: Enhancing Programmable Accelerators for Sparsity

Bio: I am a second-year PhD student working with Prof. Tony Nowatzki at UCLA. I am interested in co-designing hardware and software to extract better performance. There has been a wide spectrum of research targeted toward this goal. At one extreme are application-specific accelerators, which are not only energy efficient but also very fast, as they are designed for the application at hand. At the other extreme are general-purpose processors, which, due to their generality, are often too slow and energy inefficient to be useful in real-world applications. Bridging this gap is our research agenda.

Name: Muhammad Talha Imran

Affiliation: Penn State Univ. | Advisor: Aasheesh Kolli

YArch Presentation: Rethinking Resource Disaggregation

Bio: Talha is a PhD student at Penn State CSE. He is exploring the design of disaggregated memory systems housing heterogeneous memory and compute units in modern datacenters. He received a Bachelor’s degree in Electrical Engineering (silver medal) in 2015 from the National University of Sciences and Technology (NUST), Pakistan, and then worked in industry for three years before pursuing his PhD. He has long been interested in bridging the hardware and software worlds, which led him to pursue robotics in undergrad, followed by developing JTAG interfaces for embedded boards at Mentor, a Siemens Business, leading up to his current work in computer systems and architecture.

Name: Kleovoulos Kalaitzidis

Affiliation: Inria Rennes – Bretagne Atlantique | Advisor: André Seznec

YArch presentation: RSVP: A hybrid model of Register Sharing and Value Prediction

Bio: I received my diploma in Computer Engineering from the University of Thessaly in Greece in 2014 (a 5-year degree), where my thesis focused on high-performance, low-power processor design. Between 2014 and 2016, I worked both in industry and in the Informatics Center of the Hellenic Army during my military service. I joined the PACAP research team in 2016 and am currently a Ph.D. candidate under the supervision of André Seznec. My research revolves around core microarchitecture and increasing sequential performance by exploring speculation techniques that indirectly expand the issue width of superscalar processors.

Name: Masoomeh Sadat Jasemi

Affiliation: Sharif University of Technology / University of California, Irvine | Advisors: Shaahin Hessabi, Nader Bagherzadeh

YArch Presentation: MLC STT-RAM for Deep Neural Networks Accelerator

Bio: Masoomeh Sadat Jasemi is a research scholar in the EECS department of the University of California, Irvine, and a Ph.D. student in the CE department at the Sharif University of Technology, Iran. She received her B.Sc. degree from Razi University of Kermanshah, Iran, and was a member of the Institute for Research in Fundamental Sciences (IPM). Her current research interests include accelerator-based architectures, memory systems, multicore and parallel computing, and heterogeneous architectures. Currently, she is working on mitigating the memory bottleneck in deep neural network (DNN) accelerators.


Name: Sara Mahdizadeh Shahri

Affiliation: Penn State Univ. | Advisor: Aasheesh Kolli

YArch Presentation: Delivering Correct and Fast Persistency Guarantees

Bio: I am a Graduate Research Assistant at Penn State CSE. I am currently working on challenges that have arisen with the emergence of persistent memories, under the supervision of Dr. Kolli. I received my B.Sc. in Computer Engineering from the Sharif University of Technology in 2018.

Name: Suchita Pati

Affiliation: Univ. of Wisconsin | Advisor: Matt Sinclair

YArch Presentation: Exploring GPU Architectural Optimizations for RNNs

Bio: Suchita Pati is a second-year graduate student in the Department of Computer Sciences at the University of Wisconsin–Madison. She is advised by Prof. Matthew D. Sinclair on optimizing GPU architectures for Deep Neural Networks and she has been an active part of the effort to augment GPGPU-Sim, a widely-used GPU simulator, to execute Deep Learning kernels. She is actively involved with the ACM-W’s UW-Madison Chapter, W-ACM. Before graduate school she worked at AMD, India with the Server Performance Group on characterizing server and datacenter workloads. She holds a bachelor’s degree in Electrical and Electronics Engineering from BITS-Pilani, India.

Name: Pooneh Safayenikoo

Affiliation: Univ. of Missouri | Advisor: Ismail Akturk

YArch Presentation: Pooneh Compression: A Simple Last Level Cache Compression for CMPs

Bio: Pooneh Safayenikoo received her B.Sc. degree from Ferdowsi University of Mashhad, Mashhad, Iran, in 2014 and her M.Sc. degree from Iran University of Science and Technology, Tehran, Iran, in 2016, and has been a Ph.D. student at the University of Missouri since 2018, all in computer engineering. Her current research interests include computer architecture, memory systems, machine learning, and low-power design and architecture. Her email address is ps4h7@mail.missouri.edu.

Name: Ziyang Xu

Affiliation: Princeton Univ. | Advisor: David August

YArch Presentation: Collaborative Parallelization Framework

Bio: I’m a first-year Ph.D. student in the Department of Computer Science at Princeton University. I am a member of the Liberty Research Group, led by Prof. David I. August. My interests are in computer architecture and compilers, with a focus on automatic parallelization. I received a Bachelor of Science degree in Computer Science from Peking University in 2018, where I worked with Prof. Yun Liang.

Name: Pedram Zamirai

Affiliation: Univ. of Michigan | Advisor: Scott Mahlke

YArch Presentation: Data-Aware Reconfigurable DNN Accelerator

Bio: Pedram Zamirai is a Ph.D. pre-candidate in the Computer Science and Engineering (CSE) department at the University of Michigan. His main research interests include computer architecture, compilers, and neural networks. Since 2017, he has conducted research on data-aware precision customization of computations under the supervision of Prof. Scott Mahlke. Currently, he is working on dynamically reconfigurable deep neural network (DNN) accelerators. He received his Bachelor’s degree in Electrical Engineering from the Amirkabir University of Technology, Iran.

Sponsors


Submit

Eligibility: Applicants must be graduate students (Ph.D. or Masters) in computer architecture and related fields who have completed less than three years of graduate school (Masters and/or PhD) at the time of the workshop. A note from the student’s research advisor attesting to this is required as part of the submission.

Call for Submissions: Eligible students are invited to submit their early-stage or ongoing work to this workshop. Submitted work should not have been presented as part of a prior ACM/IEEE conference.

The workshop invites papers from all areas of computer architecture, broadly defined. Topics of interest include, but are not limited to:
– Datacenter systems
– Hardware acceleration
– Memory hierarchy
– Virtualization
– Security
– Microarchitecture
– GPUs
– Parallel architectures
– Emerging technologies

Note: This workshop is not a venue for publication and there will be no formal proceedings.

Submission guidelines: The goal of this workshop is to help students think about a problem/idea in a holistic manner and communicate their ideas to the wider community, so that we can provide valuable early-stage feedback. To this end, we encourage you to cover the following aspects in your submission:

Scope of problem/idea: Provide clear context for and scope of the problem(s) or idea(s) you intend to work on. This will likely form the basis of the introduction/background sections of your future work(s).

Solution: Provide an overview of the design and implementation aspects of your solution(s) to the problem(s) described above. Given that this is ongoing work, focus more on breadth than depth. For example, besides describing the design of your idea, list the various system aspects your proposed solutions will affect (e.g., does your proposed solution affect coherence protocols?) and state whether you plan to discuss these effects in your future submission(s).

Evaluation methodology: Discuss the evaluation methodology you plan to adopt to test the efficacy of your ideas. For example, the workloads that you plan to use, the tools you’ll employ (e.g., architectural simulator, real world experiments, FPGA prototypes), etc.

Related work: This can be the traditional related work section. Please specify if you plan to quantitatively compare against some prior work. 

Submission details
– Submissions must be PDF files, in 2-column, single-spaced, 10pt format.
– Submissions must be at most 2 pages long, not including references.
– Submissions are double-blind; do not include any author-identifying information in the submitted paper.
– Please have your research advisor email the workshop organizers at yarchhpca19@gmail.com with the subject line “<Your name> meets YArch ’19 eligibility requirements”.
– Submission site: Link
– Submission deadline: 11:59pm (PT), 15th January, 2019 (Tuesday)

Declaring conflicts: When registering a submission, all its co-authors must provide information about conflicts with the YArch ’19 program committee members. You are conflicted with a member if: (1) you are currently employed at the same institution, have been previously employed at the same institution within the past two years (2016 or later), or are going to begin employment at the same institution; (2) you have a past or present association as thesis advisor or advisee (no time limit); (3) you have collaborated on a project, publication, grant proposal, or editorship within the past two years (2016 or later); or (4) you are spouses or first-degree relatives.


Committees

Organizers:

Shaizeen Aga, AMD Research

Aasheesh Kolli, Pennsylvania State University

Program Committee:

Arkaprava Basu, Indian Institute of Science

Niladrish Chatterjee, NVIDIA

Rangeen Basu Roy Chowdhury, Intel

Jason Clemons, NVIDIA

Joe Devietti, Univ. of Pennsylvania

Natalie Enright Jerger, Univ. of Toronto

Christopher Fletcher, Univ. of Illinois at Urbana-Champaign

Jayneel Gandhi, VMware Research

Saugata Ghose, Carnegie Mellon University

Akanksha Jain, Univ. of Texas at Austin

Nuwan Jayasena, AMD Research

Onur Kayiran, AMD Research

Samira Khan, Univ. of Virginia

Tushar Krishna, Georgia Tech

Benjamin Lee, Duke University

Andrew Lukefahr, Indiana University

Prashant Nair, Univ. of British Columbia

Gokul Ravi, Univ. of Wisconsin

Adrian Sampson, Cornell

Joshua San Miguel, Univ. of Wisconsin

Sophia Shao, NVIDIA

Abhayendra Singh, Google

Ashish Venkat, Univ. of Virginia

Vivek Venugopalan, USC

Rujia Wang, Illinois Institute of Technology

Daniel Wong, Univ. of California at Riverside

Salessawi Ferede Yitbarek, Intel Labs

Jishen Zhao, Univ. of California at San Diego

Yuhao Zhu, Univ. of Rochester