Advancing Life & Work

Suparna Bhattacharya on the future of AI and being elected a Fellow of the INAE

Suparna Bhattacharya, Distinguished Technologist at Hewlett Packard Labs, has been a trailblazer for women in technology and AI. She was recently elected a Fellow of the Indian National Academy of Engineering (INAE), was recognized with the India-level Zinnov Next Generation Women Leaders Award in 2019, and won the HPE Women’s Excellence Award in 2017. She has also served the broader systems community through her participation in program committees for several international conferences.

 

There’s no doubt that in her more than six years with HPE, Suparna Bhattacharya has made significant contributions to technology and architecture, including twenty inventions filed in the areas of intelligent storage and data for containers, AI, and edge-to-core data optimization. She also recently published her book Resource Proportional Software Design for Emerging Systems.

We caught up with Suparna Bhattacharya to find out what she’s been working on lately and where she sees the future of AI.


What is the INAE and what does being a Fellow mean to you?
INAE promotes engineering, technology, and the related sciences for their application to solving problems of national importance. Up to fifty INAE Fellows are elected each year across all engineering disciplines by peer committees in recognition of their personal achievements. I was one of the two Fellows elected this year from industry in the discipline of Computer Engineering and Information Technology.

Since the academy is made up of India’s most distinguished engineers, engineer-scientists, and technologists, I feel honored and inspired by its confidence in my contributions to further this mission.

 

You recently wrote a book titled Resource Proportional Software Design for Emerging Systems. What is it about and why did you choose to write this book?
The idea for this book, co-authored with Doug Voigt and Prof. K. Gopinath, was originally inspired by my PhD thesis on the problem of software bloat and the parallels we later observed in the challenges of adapting existing software stacks to work efficiently with persistent memory. Paradoxically, as the latencies of bottleneck components such as storage and networks have dropped by up to four orders of magnitude over the years, software path lengths have progressively increased as a side effect of framework-based layering, to the point where software overhead can overshadow the benefits of switching to new technologies like persistent memory and low-latency interconnects.
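To make that order-of-magnitude shift concrete, here is a small back-of-the-envelope sketch in Python. The device latencies and the fixed 10-microsecond software path length are illustrative assumptions for this article, not figures from the book; the point is simply that a software overhead that was negligible against a slow device can dominate end-to-end latency once the device becomes fast.

    # Back-of-the-envelope illustration with assumed, rounded numbers:
    # a fixed software path length that is negligible for slow devices
    # comes to dominate end-to-end latency as device access times shrink.
    device_latency_us = {
        "hard disk (~10 ms)": 10_000,
        "NVMe SSD (~100 us)": 100,
        "persistent memory (~1 us)": 1,
    }
    software_path_us = 10  # assumed fixed software overhead per access

    for device, dev_us in device_latency_us.items():
        total_us = dev_us + software_path_us
        share = 100 * software_path_us / total_us
        print(f"{device}: software is {share:.0f}% of end-to-end latency")

Under these assumed numbers, the software path accounts for well under 1% of each access on a hard disk but roughly 90% on persistent memory, which is the kind of imbalance resource proportional design aims to address.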

Resource proportional design (RPD) offers a principled approach to creating software and systems that are more efficient, more adaptable, and easier to use in a changing technological environment. RPD counters the overheads of deep layering, making resource consumption proportional to situational utility without sacrificing flexibility or ease of development.

 

What do you do at HPE as a Distinguished Technologist? And what are you currently working on?
Over the last six years as a Distinguished Technologist at HPE, I have held various roles in Hewlett Packard Labs, the HPE Hyperconverged Architecture Group, and the HPE Storage Chief Technologist’s Office. I have worked to advance HPE’s technical strategies and product architectures for emerging technologies such as artificial intelligence, containers, persistent memory, and IoT edge-to-core computing. I enjoy blending insights from diverse technical domains to explore innovations that span technology boundaries. Earlier this year, I took up a role in the AI Research Lab at Hewlett Packard Labs to establish and lead our research agenda on an intelligent data foundation for AI.

 

What are the biggest things about AI that people may not be thinking about, but should be?
The field of AI has progressed at a fascinating pace. The potential of AI is being realized at much broader levels than one could have imagined even a few years ago, which also opens up a fresh slew of challenging problems that must be solved!

First, with all the focus on advancing AI models, algorithms, and computational optimizations, it’s easy to forget that the outcomes are ultimately limited by the quality of the data used to train and feed these models. The amount of hand-coded feature engineering, manual data gathering, and labeling has progressively decreased with advances in deep neural networks, semi- and weak supervision, reinforcement learning, and AutoML. Even so, getting good data for AI consistently, efficiently, and reliably, so as to achieve trustworthy, high-value outcomes, is a complex and crucially important challenge that requires much greater attention to approaches that can co-optimize the entire data lifecycle.

Second, we need to be thinking beyond single AI applications to advanced scenarios where most of the software running on our systems is built to be AI-native (i.e., designed from the ground up to learn “programs” from data and experience). Such a shift calls for a different kind of AI-first system and infrastructure design from edge to cloud, with intrinsically co-optimized data and computation planes.

Third, while huge strides have been made in both AI hardware and software frameworks, there’s an opportunity to revitalize native system software mechanisms to keep up with the intrinsic needs of AI, big data, and IoT in the Software 2.0 era. For example, the ubiquity of AI could be an inflection point for rethinking system software and storage layers, departing from traditional assumptions to intelligently prioritize insight-rich data collectively across all AI applications, much as our brains automatically learn to sift meaningful information.

 

 Curt Hopkins
Hewlett Packard Enterprise

twitter.com/hpe_labs
linkedin.com/showcase/hewlett-packard-labs/
labs.hpe.com

About the Author

Curt_Hopkins

Managing Editor, Hewlett Packard Labs