The 3rd International Workshop on Machine Learning for Software Hardware Co-Design (MLSH'23)
October 22nd, 2023
In conjunction with PACT'23 (Vienna, Austria)
Important Dates
- Paper submission: August 11th, 2023 (AoE)
- Paper notification: August 28th, 2023
- Camera-ready: September 25th, 2023
- Workshop: October 22nd, 2023
Overview
As Machine Learning (ML) continues to permeate all areas of computing, software system and software stack designers are adopting ML techniques to solve challenging problems in their domains, especially in optimization and hardware design. ML is increasingly used for a diverse set of problems such as building cost models, constructing code optimization heuristics, exploring search spaces efficiently, automating optimization, and synthesizing programs. Designing accurate ML models, engineering features, verifying and validating results, and selecting and curating representative training data are all challenging but important problems in this area, actively explored by a large community of researchers in industry and academia. This workshop provides a venue for the international research community to share ideas and techniques for applying machine learning to system challenges, with a focus on the software stack and hardware.
Scope
We will solicit papers on topics including, but not limited to, the following areas:
- ML for the software stack
- Heuristics and cost model construction.
- Optimization space exploration.
- Automatic code optimization.
- Bug detection.
- Program synthesis.
- Program and code representation.
- Important training paradigms.
- ML for hardware
- ML models for optimal FPGA configuration.
- Load balancing between CPUs and accelerators (e.g., GPUs, TPUs).
- ML models to improve computer architecture design.
- Analysis and techniques to define meaningful representation (features) for compilers and hardware.
- Training data
- Exploring the availability or generation of efficient training data for compilers and hardware.
- Utilizing graph-based data for machine learning.
Submission Guidelines
We invite both full-length and short research papers.
Submitted papers should not exceed the page limit (8 pages for full-length papers and 4 pages for short papers) and should follow the IEEE conference proceedings template.
The page limit applies to all content except references; references have no page limit.
Each submission will be reviewed by at least three program committee members and must not have been published in, or be under review for, another venue.
Accepted papers will be published in our online proceedings.
Submit your paper using this link.
Program
October 22nd at 1:30pm.
Time | Presentation |
---|---|
1:30pm-1:35pm | Opening Notes. |
1:35pm-2:05pm | GRANITE: A Graph Neural Network Model for Basic Block Throughput Estimation. Ondrej Sykora (Google). |
2:05pm-2:35pm | DeepOPT: Single-Shot Code Optimization Through Deep Learning. Afif Boudaoud, Smail Kourta, Massinissa Merouani and Riyadh Baghdadi (New York University Abu Dhabi). |
2:35pm-3:05pm | Guiding a Polyhedral Autoscheduler using a Deep-learning Based Cost Model. Massinissa Merouani, Khaled Afif Boudaoud, Hugh Leather, Riyadh Baghdadi (New York University Abu Dhabi and Meta). |
3:05pm-3:30pm | Break. |
3:30pm-4:00pm | Improving Agile Hardware Design Tools with Artificial Intelligence. Antonino Tumeo (PNNL). |
4:00pm-4:30pm | Automaton and Cognition: Programming Model for (auto-regressive) Language Models. Tristan Vanderbruggen (LLNL). |
Program Committee
- Abid Malik (Brookhaven National Lab)
- Hugh Leather (Meta)
- Sameer Abu Asal (Meta)
- Sridutt Bhalachandra (Lawrence Berkeley National Laboratory)
- Zheng Wang (University of Leeds)
Past Editions
Organizers
- Eun Jung (EJ) Park (Qualcomm Inc.).
- Riyadh Baghdadi (New York University Abu Dhabi and Massachusetts Institute of Technology).
- Joseph Manzano (Pacific Northwest National Laboratory).
- Joshua Suetterlein (Pacific Northwest National Laboratory).