PNNL CENATE draws on ML to protect DOE supercomputers from illegitimate workloads

By Allan Brettman

By all appearances, Ang Li and Kevin Barker are computer scientists. But appearances can be deceiving.

The Pacific Northwest National Laboratory (PNNL) duo are also high-tech investigators, training powerful computers to perform gumshoe work while protecting the nation from cybersecurity threats.

Li, Barker, PNNL colleagues, and university collaborators have developed a system to track down questionable use of high-performance computing (HPC) systems within the United States Department of Energy (DOE). As HPC systems grow more powerful, arguably into the largest and most sophisticated in the world, they become increasingly attractive targets for attackers trying to run malicious software.

Tracking down bad actors is just one example of work at PNNL’s Center for Advanced Technology Evaluation (CENATE), a computer test bed supported by the DOE’s Office of Science. Overall, CENATE aims to understand the impact of advanced information technologies on scientific workloads. In this role, Barker, CENATE co-principal investigator David Manz, and colleagues developed a non-intrusive profiling framework for judging the legitimacy of HPC workloads, one as stealthy as an undercover detective watching the scene through a two-way mirror.

Rooting out intruders

CENATE led the development of machine learning methods such as recurrent neural networks (RNNs) to classify the distinctive signatures of authorized and unauthorized workloads. With over 95 percent prediction accuracy, this open source framework can help system administrators identify and remove unauthorized and intruding workloads, ensuring system availability and integrity for legitimate scientific users.
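To illustrate the idea, here is a minimal, hypothetical sketch (not PNNL's actual code) of how an RNN can read a sequence of per-interval telemetry feature vectors and emit a probability that a workload is legitimate. The feature names and the randomly initialized weights are placeholders standing in for a trained model.

```python
import numpy as np

rng = np.random.default_rng(0)

N_FEATURES = 3   # e.g. CPU->GPU transfer volume, GPU utilization, power draw
HIDDEN = 8

# Randomly initialized weights stand in for trained parameters.
W_xh = rng.normal(0, 0.1, (HIDDEN, N_FEATURES))
W_hh = rng.normal(0, 0.1, (HIDDEN, HIDDEN))
W_hy = rng.normal(0, 0.1, (1, HIDDEN))

def classify(sequence):
    """Run a vanilla RNN over a (time, features) array; return P(legitimate)."""
    h = np.zeros(HIDDEN)
    for x in sequence:
        h = np.tanh(W_xh @ x + W_hh @ h)   # recurrent state update
    logit = (W_hy @ h).item()
    return 1.0 / (1.0 + np.exp(-logit))    # sigmoid output

telemetry = rng.random((20, N_FEATURES))   # 20 sampling intervals
p = classify(telemetry)
print(f"P(legitimate) = {p:.3f}")
```

In practice the weights would be learned from labeled traces, and a threshold on the output would decide whether to alert an operator.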

PNNL computer scientist Ang Li

“Machine learning methods are helping us identify some of the key characteristics of workloads that represent legitimate scientific computing, or something anomalous that we would like to look into further,” said Barker, PNNL computer scientist and principal investigator at CENATE.

“The machine learning algorithm can learn some of these patterns by looking at legitimate scientific codes, and then learn to distinguish legitimate code, something we expect to see, from something that seems strange to us, something we would like to flag for a human operator to look into further.”

Detecting fingerprints

As you might suspect, there is no gigantic dataset that can be loaded into a supercomputer named “Catch the Bad Cyber Guys.”

Lacking data from actual nefarious activity, the researchers created a dataset reflecting the characteristics of known, disallowed codes, said Li, a PNNL computer scientist. “We have identified codes that could be executed by illegitimate users,” Li said.

Li and colleagues drew on publicly available data from sources like GitHub, GitLab, and Bitbucket to create their own small dataset, identifying cybersecurity-abuse fingerprints such as cryptocurrency applications, password-cracking activities, or simply running the computer longer than usual.
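A labeled dataset of this kind could be assembled along these lines. This is an illustrative sketch only: the code names below are hypothetical placeholders, not the actual codes the team collected from public repositories.

```python
# Pair each collected code with a label: "legitimate" scientific workloads
# vs. "abuse" fingerprints like miners and password crackers.
corpus = [
    ("molecular_dynamics_sim", "legitimate"),
    ("climate_model_run",      "legitimate"),
    ("coin_miner_kernel",      "abuse"),      # cryptocurrency mining
    ("password_hash_cracker",  "abuse"),      # password cracking
]

# Alternate examples between splits so both classes appear in each set.
train = [ex for i, ex in enumerate(corpus) if i % 2 == 0]
test  = [ex for i, ex in enumerate(corpus) if i % 2 == 1]

print(len(train), len(test))  # 2 2
```

A real pipeline would hold out entire codes (not just runs) for testing, so the classifier is evaluated on workloads it has never seen.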

Trust but verify

Malicious code sneaks in to mine cryptocurrency or crack passwords, and RNNs are on the lookout for suspicious behavior. How much data is moved from a central processing unit (CPU) to a graphics processing unit (GPU)? What is the power consumption of the GPU memory? Eventually, CENATE computer scientists expect to have a vastly improved RNN that can expand the potential for finding anomalous clues in the underworld of malicious code.

Li and Barker, along with colleagues Pengfei Zou and Rong Ge of Clemson University, published a 2019 paper at the IEEE International Symposium on Workload Characterization describing how machine learning with real-time RNNs could detect abuse of high-performance computing systems.

Abusing supercomputing capability presents several problems, they reported. Not only does it deprive mission-critical and scientific applications of execution cycles, it also increases the opportunity for attackers to steal data, damage systems, and exploit high compute power and network bandwidth to attack other sites.

Potential users of DOE supercomputing resources undergo a high level of scrutiny before gaining access to some of the world’s most sophisticated equipment. That access comes with a level of trust, Barker said, which poses a challenge for rogue-computing detection.

“Once a user has been approved, we trust them to know what they are doing,” said Barker. “So having these kinds of automated machine learning tools can help facility operators bridge that trust gap, know in real time if a user isn’t doing what they should.”
