Nick Czuzoj-Shulman, David Yu, Christopher Boucher, Luke Bornn, Mehrsan Javan
This paper takes a different approach to evaluating face-offs in ice hockey. Instead of looking at win percentages, the de facto measure of successful face-off takers for decades, we focus on the game events following the face-off and on how directionality, clean wins, and player handedness play a significant role in creating value. We demonstrate that not all face-off wins are created equal: some players consistently create post-face-off value through clean wins and by directing the puck to high-value areas of the ice. As a result, we propose an expected-events face-off model as well as a wins-above-expected model that take into account the value added on a face-off by targeting the puck to specific areas of the ice in various contexts, as well as the impact this has on subsequent game events.
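To make the wins-above-expected idea concrete, here is a minimal Python sketch of a context-adjusted face-off metric: fit an expected win probability from contextual features, then credit a taker with his observed wins minus the sum of those expectations. The features, player ids, and data below are invented placeholders, not the paper's model or dataset.

    # Toy sketch (not the paper's model) of a context-adjusted "wins above
    # expected" metric for face-off takers; features and data are invented.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 5000
    player = rng.integers(0, 20, n)                    # face-off taker id
    X = np.column_stack([
        rng.integers(0, 2, n),                         # off-hand matchup (1 = yes)
        rng.integers(0, 3, n),                         # zone: 0 = DZ, 1 = NZ, 2 = OZ
        rng.integers(0, 2, n),                         # taken on home ice
    ])
    won = rng.integers(0, 2, n)                        # observed outcome

    expected = LogisticRegression().fit(X, won).predict_proba(X)[:, 1]
    for p in range(3):                                 # report a few takers
        sel = player == p
        wae = won[sel].sum() - expected[sel].sum()
        print(f"player {p}: wins above expected = {wae:+.1f} over {sel.sum()} draws")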
David Yu, Christopher Boucher, Luke Bornn, Mehrsan Javan
Pace of play is an important characteristic in hockey as well as other team sports. We provide the first comprehensive study of pace within the sport of hockey, focusing on how teams and players impact pace in different regions of the ice, and the resultant effect on other aspects of the game. First, we examine how pace of play varies across the surface of the rink, across different periods, in different manpower situations, between different professional leagues, and through time between seasons. Our analysis of pace by zone helps to explain some of the counter-intuitive results reported in prior studies. For instance, we show that the negative correlation between attacking speed and shots/goals is likely due to a large decline in attacking speed in the offensive zone (OZ). We also study how pace impacts the outcomes of various events. We find that pace is positively correlated with both high-danger zone entries (e.g. odd-man rushes) and higher shot quality. However, we also find that passes with failed receptions occur at higher speeds than successful receptions. These findings suggest that increased pace is beneficial, but perhaps only up to a certain extent. Higher pace can create breakdowns in defensive structure and lead to better scoring chances, but it can also lead to more turnovers. Finally, we analyze team- and player-level pace in the NHL, highlighting the considerable variability in how teams and players attack and defend against pace. Taken together, our results demonstrate that measures of team-level pace derived from spatio-temporal data are informative metrics in hockey and should prove useful in other team sports.
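As a rough illustration of how zone-level pace can be derived from spatio-temporal tracking data, the sketch below computes instantaneous puck-carrier speed between samples and averages it by rink zone. The trace, sampling rate, and zone boundaries are assumptions for illustration only, not the paper's data pipeline.

    # Illustrative sketch (not the paper's code): estimating pace from
    # player-tracking samples and aggregating it by rink zone.
    import numpy as np

    # Hypothetical tracking trace: time (s) and puck-carrier position (ft)
    t = np.arange(0.0, 10.0, 0.1)
    x = np.cumsum(np.random.default_rng(1).normal(2.0, 0.5, t.size))  # advancing up ice
    y = 42.5 + np.random.default_rng(2).normal(0.0, 3.0, t.size)

    speed = np.hypot(np.diff(x), np.diff(y)) / np.diff(t)   # ft/s between samples

    def zone(x_pos):            # 200 ft rink; blue lines roughly 75 ft from each end
        if x_pos < 75:  return "DZ"
        if x_pos < 125: return "NZ"
        return "OZ"

    zones = np.array([zone(v) for v in x[1:]])
    for z in ("DZ", "NZ", "OZ"):
        if (zones == z).any():
            print(z, f"mean pace {speed[zones == z].mean():.1f} ft/s")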
Ryan Wong, Hosea David Yu Fei Ng, Dhananjai Sharma, Glenn Jun Jie Ng, Kavishvaran Srinivasan
Large Language Models (LLMs) remain susceptible to jailbreak exploits that bypass safety filters and induce harmful or unethical behavior. This work presents a systematic taxonomy of existing jailbreak defenses across prompt-level, model-level, and training-time interventions, followed by three proposed defense strategies. First, a Prompt-Level Defense Framework detects and neutralizes adversarial inputs through sanitization, paraphrasing, and adaptive system guarding. Second, a Logit-Based Steering Defense reinforces refusal behavior through inference-time vector steering in safety-sensitive layers. Third, a Domain-Specific Agent Defense employs the MetaGPT framework to enforce structured, role-based collaboration and domain adherence. Experiments on benchmark datasets show substantial reductions in attack success rate, achieving full mitigation under the agent-based defense. Overall, this study highlights how jailbreaks pose a significant security threat to LLMs and identifies key intervention points for prevention, while noting that defense strategies often involve trade-offs between safety, performance, and scalability. Code is available at: https://github.com/Kuro0911/CS5446-Project
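The logit-based steering idea can be illustrated with a small inference-time hook that shifts a layer's activations along a fixed refusal direction. The toy layer, direction, and strength below are placeholders; the actual defense operates on safety-sensitive layers of a full LLM.

    # Minimal sketch of inference-time vector steering (illustrative only;
    # the layer, scale, and "refusal direction" here are toy placeholders).
    import torch

    torch.manual_seed(0)
    hidden = 16
    layer = torch.nn.Linear(hidden, hidden)          # stands in for a transformer block
    refusal_dir = torch.nn.functional.normalize(torch.randn(hidden), dim=0)
    alpha = 4.0                                      # steering strength

    def steer_hook(module, inputs, output):
        # Push activations toward the refusal direction at this layer
        return output + alpha * refusal_dir

    handle = layer.register_forward_hook(steer_hook)
    x = torch.randn(1, hidden)
    steered = layer(x)
    handle.remove()
    print("shift along refusal direction:", float((steered - layer(x)) @ refusal_dir))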
David Yu, Andy Xiao
Legacy AD/ADAS development at OEMs centers on developing functions on ECUs using services provided by the AUTOSAR Classic Platform (CP) to meet automotive-grade and mass-production requirements. The AUTOSAR CP couples hardware and software components statically and struggles to provide sufficient capacity for processing high-level intelligent driving functions, whereas the newer AUTOSAR Adaptive Platform (AP) is designed to support dynamic communication and to provide richer services and function abstractions for resource-intensive (memory, CPU) applications. Yet on both platforms, application development and the supporting system software remain closely coupled, which makes application development and enhancement less scalable and flexible, resulting in longer development cycles and slower time-to-market. This paper presents a multi-layered, service-oriented intelligent driving operating system foundation, which we name the Digital Foundation Platform (DFP), that provides abstractions for easier adoption of heterogeneous computing hardware. It features a multi-layer SOA software architecture in which each layer exposes adaptive service APIs north-bound to application developers. The proposed DFP has the significant advantage of decoupling the development of hardware, operating system core, middleware, functional software, and application software. By providing SOA at multiple layers, it enables application developers at OEMs to customize and develop new applications, or enhance existing ones with new features, in either the autonomous-driving or intelligent-cockpit domain, with greater agility and less code through reusability, thereby reducing time-to-market.
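A purely schematic Python sketch of the layering idea follows: an application consumes a north-bound service interface while different compute back ends plug in beneath it, so the application layer never touches hardware-specific code. All class and method names are invented for illustration and are not part of the DFP API.

    # Schematic illustration of service/hardware decoupling (names invented).
    from abc import ABC, abstractmethod

    class PerceptionService(ABC):                 # north-bound API seen by applications
        @abstractmethod
        def detect_objects(self, frame: bytes) -> list: ...

    class GpuPerception(PerceptionService):       # one heterogeneous-compute back end
        def detect_objects(self, frame: bytes) -> list:
            return ["vehicle", "pedestrian"]      # placeholder result

    class ApplicationLayer:
        def __init__(self, perception: PerceptionService):
            self.perception = perception          # app depends only on the interface

        def plan(self, frame: bytes) -> str:
            objects = self.perception.detect_objects(frame)
            return "brake" if "pedestrian" in objects else "cruise"

    print(ApplicationLayer(GpuPerception()).plan(b"\x00"))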
Junbo Peng, Yuan Gao, Chih-Wei Chang, Richard Qiu, Tonghe Wang, Aparna Kesarwala, Kailin Yang, Jacob Scott, David Yu, Xiaofeng Yang
Background: Cone-beam computed tomography (CBCT) scans, performed fractionally (e.g., daily or weekly), are widely utilized for patient alignment in the image-guided radiotherapy (IGRT) process, making CBCT a potential imaging modality for the implementation of adaptive radiotherapy (ART) protocols. Nonetheless, significant artifacts and incorrect Hounsfield unit (HU) values hinder their application in quantitative tasks such as target and organ segmentation and dose calculation. Therefore, acquiring CT-quality images from CBCT scans is essential for implementing online ART in clinical settings. Purpose: This work aims to develop an unsupervised learning method using a patient-specific diffusion model for CBCT-based synthetic CT (sCT) generation to improve the image quality of CBCT. Methods: The proposed method is an unsupervised framework that utilizes a patient-specific score-based model as the image prior alongside a customized total variation (TV) regularization to enforce coherence across different transverse slices. The score-based model is unconditionally trained using the same patient's planning CT (pCT) images to characterize the manifold of CT-quality images and capture the unique anatomical information of the specific patient. The efficacy of the proposed method was assessed on images from anatomical sites including head and neck (H&N) cancer, pancreatic cancer, and lung cancer. The performance of the proposed CBCT correction method was evaluated using quantitative metrics including mean absolute error (MAE), peak signal-to-noise ratio (PSNR), and normalized cross-correlation (NCC). Additionally, the proposed algorithm was benchmarked against two other unsupervised diffusion model-based CBCT correction algorithms.
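To illustrate the slice-coherence regularization in isolation, the sketch below applies a smoothed total-variation penalty along the transverse (slice) axis of a toy volume with plain gradient steps; in the actual method such a term would be interleaved with the patient-specific score-based updates. The volume, weight, and step size are placeholders.

    # Toy sketch of the slice-coherence idea: a total-variation (TV) penalty
    # along the transverse (slice) axis with simple gradient-descent steps.
    # Illustrative only, not the paper's regularizer.
    import numpy as np

    rng = np.random.default_rng(0)
    vol = rng.normal(size=(8, 64, 64))        # (slices, H, W) toy CBCT volume

    def tv_slice_grad(v, eps=1e-6):
        """Gradient of sum |v[z+1] - v[z]| w.r.t. v (smoothed absolute value)."""
        d = np.diff(v, axis=0)                # inter-slice differences
        s = d / np.sqrt(d * d + eps)          # d/dx |x| ~ x / sqrt(x^2 + eps)
        g = np.zeros_like(v)
        g[:-1] -= s
        g[1:] += s
        return g

    lam, step = 0.1, 0.5
    for _ in range(20):                       # interleave with diffusion updates in practice
        vol -= step * lam * tv_slice_grad(vol)
    print("residual inter-slice variation:", float(np.abs(np.diff(vol, axis=0)).mean()))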
Ahmal Jawad Zafar, Xiaofeng Yang, Zachary Diamond, Tian Sibo, David Yu, Pretesh R. Patel, Jun Zhou
In this paper, we propose a method to optimize adaptive proton FLASH therapy (ADP FLASH) using modularized pin ridge filters (pRFs) by recycling module pins from the initial plan while reducing pRF adjustments in adaptive FLASH planning. Initially, single-energy (250 MeV) FLASH pRF plans were created using pencil beam directions (PBDs) from initial IMPT plans on the planning CT (pCT). PBDs were classified as new/changed ($\Delta E$ > 5 MeV) or unchanged by comparing spot maps for targets between the pCT and reCT. We used an iterative least-squares regression model to identify recyclable PBDs with minimal relative changes to spot MU weighting. The two PBDs with the smallest least-squares error were retrieved per iteration and added to the background plan, and the remaining PBDs were reoptimized for the adaptive plan in subsequent iterations. The method was validated on three liver SBRT cases (50 Gy in 5 fractions) by comparing various dosimetric parameters across the initial pRF plans on pCT and reCT and the ADP FLASH pRF plans on reCT. V100 values for the initial pRF plans on pCT and reCT and the ADP FLASH pRF plans on reCT for the three cases were as follows: (93.7%, 89.2%, 91.4%), (93.5%, 60.2%, 91.7%), (97.3%, 69.9%, 98.8%). We observe a decline in plan quality when applying the initial pRF to the reCT, whereas the ADP FLASH pRF approach restores quality comparable to the initial pRF on the pCT. The FLASH effect of the initial pRF and ADP pRF plans was evaluated with dose and dose rate thresholds of 1 Gy and 40 Gy/s, respectively, using the FLASH effectiveness model. The proposed method recycled 91.2%, 71%, and 64.7% of PBDs from the initial pRF plans for the three cases while maintaining all clinical goals and preserving FLASH effects across all cases.
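The PBD-recycling selection loop can be sketched as follows: per iteration, keep the two beam directions whose spot MU weights change least, in a least-squares sense, between the pCT and reCT plans. The synthetic data, toy stopping criterion, and omission of the re-optimization step make this an illustration only, not the published workflow.

    # Illustrative sketch of the PBD-recycling selection loop (synthetic data).
    import numpy as np

    rng = np.random.default_rng(3)
    n_pbd, n_spots = 12, 40
    mu_pct = rng.uniform(1.0, 5.0, (n_pbd, n_spots))             # initial-plan spot MUs
    mu_rct = mu_pct * (1 + rng.normal(0, 0.08, mu_pct.shape))    # reoptimized on reCT

    recycled, remaining = [], list(range(n_pbd))
    while remaining:
        # least-squares error of relative MU change per candidate PBD
        err = [np.sum(((mu_rct[i] - mu_pct[i]) / mu_pct[i]) ** 2) for i in remaining]
        picks = [remaining[j] for j in np.argsort(err)[:2]]
        recycled.extend(picks)
        for p in picks:
            remaining.remove(p)
        # in the real workflow the remaining PBDs would be reoptimized here
        if len(recycled) >= 8:          # toy stopping criterion
            break
    print("recycled PBDs:", sorted(recycled))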
Zach Eidex, Mojtaba Safari, Jie Ding, Richard Qiu, Justin Roper, David Yu, Hui-Kuo Shu, Zhen Tian, Hui Mao, Xiaofeng Yang
Objective: Gadolinium-based contrast agents (GBCAs) are commonly employed with T1w MRI to enhance lesion visualization but are restricted in patients at risk of nephrogenic systemic fibrosis, and variations in GBCA administration can introduce imaging inconsistencies. This study develops an efficient 3D deep-learning framework to generate T1 contrast-enhanced (T1C) images from pre-contrast multiparametric MRI. Approach: We propose the 3D latent rectified flow (T1C-RFlow) model for generating high-quality T1C images. First, T1w and T2-FLAIR images are input into a pretrained autoencoder to acquire an efficient latent space representation. A rectified flow diffusion model is then trained in this latent space. The T1C-RFlow model was trained on a curated dataset comprising the BraTS 2024 glioma (GLI; 1480 patients), meningioma (MEN; 1141 patients), and metastases (MET; 1475 patients) datasets. Selected patients were split into train (N=2860), validation (N=612), and test (N=614) sets. Results: Both qualitative and quantitative results demonstrate that the T1C-RFlow model outperforms benchmark 3D models (pix2pix, DDPM, Diffusion Transformers (DiT-3D)) trained in the same latent space. T1C-RFlow achieved the following metrics. GLI: NMSE 0.044 +/- 0.047, SSIM 0.935 +/- 0.025; MEN: NMSE 0.046 +/- 0.029, SSIM 0.937 +/- 0.021; MET: NMSE 0.098 +/- 0.088, SSIM 0.905 +/- 0.082. T1C-RFlow had the best tumor reconstruction performance and significantly faster denoising times (6.9 s/volume, 200 steps) than conventional DDPM models, both in latent space (37.7 s, 1000 steps) and patch-based in image space (4.3 hr/volume). Significance: Our proposed method generates synthetic T1C images that closely resemble ground truth T1C in much less time than previous diffusion models. Further development may permit a practical method for contrast-agent-free MRI of brain tumors.
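The rectified-flow objective used to train the latent model can be summarized with a minimal training loop: interpolate linearly between a noise sample and a data latent and regress the constant velocity along that path. The toy MLP, dimensions, and random "latents" below stand in for the pretrained autoencoder's representation and are illustrative only.

    # Minimal rectified-flow training step in a toy latent space, illustrating
    # the objective only; architecture and conditioning are placeholders.
    import torch

    torch.manual_seed(0)
    dim = 32
    v_net = torch.nn.Sequential(torch.nn.Linear(dim + 1, 128), torch.nn.SiLU(),
                                torch.nn.Linear(128, dim))
    opt = torch.optim.Adam(v_net.parameters(), lr=1e-3)

    for _ in range(100):
        z1 = torch.randn(64, dim)              # stands in for encoded T1C latents
        z0 = torch.randn(64, dim)              # noise sample
        t = torch.rand(64, 1)
        zt = (1 - t) * z0 + t * z1             # straight-line interpolation
        target = z1 - z0                       # constant velocity along the path
        pred = v_net(torch.cat([zt, t], dim=1))
        loss = torch.nn.functional.mse_loss(pred, target)
        opt.zero_grad(); loss.backward(); opt.step()
    print("final flow-matching loss:", float(loss))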
Chiara Arina, Benjamin Fuks, Luca Panizzi, Michael J. Baker, Alan S. Cornell, Jan Heisig, Benedikt Maier, Rute Pedro, Dominique Trischuk, Diyar Agin, Alexandre Arbey, Giorgio Arcadi, Emanuele Bagnaschi, Kehang Bai, Disha Bhatia, Mathias Becker, Alexander Belyaev, Ferdinand Benoit, Monika Blanke, Jackson Burzynski, Jonathan M. Butterworth, Antimo Cagnotta, Lorenzo Calibbi, Linda M. Carpenter, Xabier Cid Vidal, Emanuele Copello, Louie Corpe, Francesco D'Eramo, Aldo Deandrea, Aman Desai, Caterina Doglioni, Sunil M. Dogra, Mathias Garny, Mark D. Goodsell, Sohaib Hassan, Philip Coleman Harris, Julia Harz, Alejandro Ibarra, Alberto Orso Maria Iorio, Felix Kahlhoefer, Deepak Kar, Shaaban Khalil, Valery Khoze, Pyungwon Ko, Sabine Kraml, Greg Landsberg, Andre Lessa, Laura Lopez-Honorez, Alberto Mariotti, Vasiliki A. Mitsou, Kirtimaan Mohan, Chang-Seong Moon, Alexander Moreno Briceño, María Moreno Llácer, Léandre Munoz-Aillaud, Taylor Murphy, Anele M. Ncube, Wandile Nzuza, Clarisse Prat, Lena Rathmann, Thobani Sangweni, Dipan Sengupta, William Shepherd, Sukanya Sinha, Tim M. P. Tait, Andrea Thamm, Michel H. G. Tytgat, Zirui Wang, David Yu, Shin-Shan Yu
This report, summarising work achieved in the context of the LHC Dark Matter Working Group, investigates the phenomenology of $t$-channel dark matter models, spanning minimal setups with a single dark matter candidate and mediator to more complex constructions closer to UV-complete models. For each considered class of models, we examine collider, cosmological and astrophysical implications. In addition, we explore scenarios with either promptly decaying or long-lived particles, as well as featuring diverse dark matter production mechanisms in the early universe. By providing a unified analysis framework, numerical tools and guidelines, this work aims to support future experimental and theoretical efforts in exploring $t$-channel dark matter models at colliders and in cosmology.
Chester Palen-Michel, Ruixiang Wang, Yipeng Zhang, David Yu, Canran Xu, Zhe Wu
The emergence of Large Language Models (LLMs) has revolutionized natural language processing in various applications, especially in e-commerce. One crucial step before applying such LLMs in these fields is to understand and compare their performance across different use cases and tasks. This paper explores the efficacy of LLMs in the e-commerce domain, focusing on instruction-tuning an open-source LLM with public e-commerce datasets of varying sizes and comparing its performance with the conventional models prevalent in industrial applications. We conducted a comprehensive comparison between LLMs and traditional pre-trained language models across tasks intrinsic to the e-commerce domain, namely classification, generation, summarization, and named entity recognition (NER). Furthermore, we examined the effectiveness of the current niche industrial practice of using very large LLMs with in-context learning on e-commerce-specific tasks. Our findings indicate that few-shot inference with very large LLMs often does not outperform fine-tuning smaller pre-trained models, underscoring the importance of task-specific model optimization. Additionally, we investigated different training methodologies such as single-task training, mixed-task training, and LoRA merging, both within a domain/task and across different tasks. Through rigorous experimentation and analysis, this paper offers valuable insights into the potential of LLMs to advance natural language processing capabilities within the e-commerce industry.
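As a small illustration of the LoRA-merging methodology, the sketch below averages the low-rank updates of two hypothetical task adapters into a single merged weight; the ranks, scaling, and task names are placeholders rather than the paper's configuration.

    # Toy sketch of LoRA merging: average the low-rank updates of two
    # task-specific adapters into a single merged weight (illustrative only).
    import torch

    torch.manual_seed(0)
    d_out, d_in, r, alpha = 64, 64, 8, 16
    W = torch.randn(d_out, d_in)                      # frozen base weight

    def lora_delta(rank=r):
        A = torch.randn(rank, d_in) * 0.01
        B = torch.randn(d_out, rank) * 0.01
        return (alpha / rank) * (B @ A)

    delta_ner, delta_cls = lora_delta(), lora_delta() # e.g. NER and classification adapters
    W_merged = W + 0.5 * (delta_ner + delta_cls)      # uniform-weight merge
    print("relative change from merge:", float((W_merged - W).norm() / W.norm()))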
Chaoqiong Ma, Xiaofeng Yang, Yinan Wang, David Yu, Pretesh Patel, Jun Zhou
We previously developed a FLASH planning framework for streamlined pin-ridge-filter (pin-RF) design, demonstrating its feasibility for single-energy proton FLASH planning. In this study, we refined the pin-RF design for easy assembly using reusable modules, focusing on its application in liver SABR. The framework generates an intermediate IMPT plan and translates it into step widths and thicknesses of pin-RFs for a single-energy FLASH plan. Parameters such as energy spacing, monitor unit limit, and spot quantity were adjusted during IMPT planning, resulting in pin-RFs assembled from predefined modules with widths from 1 to 6 mm, each with a water-equivalent thickness (WET) of 5 mm. This approach was validated on three liver SABR cases. FLASH doses, quantified using the FLASH effectiveness model at 1 to 5 Gy thresholds, were compared to conventional IMPT (IMPT-CONV) doses to assess clinical benefits. Across all cases, demand was highest for 6 mm modules, moderate for 2-4 mm modules, and minimal for 1 mm and 5 mm modules. At lower dose thresholds, the two-beam case showed significant dose reductions (>23%), while the other two three-beam cases showed moderate reductions (up to 14.7%), indicating the need for higher fractional beam doses for an enhanced FLASH effect. Positive clinical benefits were seen only in the two-beam case at the 5 Gy threshold. At the 1 Gy threshold, the FLASH plan of the two-beam case outperformed its IMPT-CONV plan, reducing dose indicators by up to 28.3%, whereas the three-beam cases showed negative clinical benefits at this threshold, with some dose indicators increasing by up to 16% due to lower fractional beam doses and closer beam arrangements. This study evaluated the feasibility of modularizing streamlined pin-RFs in single-energy proton FLASH planning for liver SABR, offering guidance on optimal module composition and strategies to enhance FLASH planning.
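One way to picture the module composition is a simple decomposition of each required step width into the predefined module widths, as in the toy sketch below; the real framework derives step widths and thicknesses from the intermediate IMPT plan rather than using this greedy rule.

    # Toy sketch of composing one pin-RF step from the reusable modules
    # (widths 1-6 mm, each 5 mm WET); a greedy decomposition for illustration.
    MODULE_WIDTHS = [6, 5, 4, 3, 2, 1]   # mm, as described above

    def compose_step(step_width_mm: int) -> list:
        modules, remaining = [], step_width_mm
        for w in MODULE_WIDTHS:
            while remaining >= w:
                modules.append(w)
                remaining -= w
        return modules

    for width in (17, 9, 4):             # example step widths in mm
        print(width, "mm ->", compose_step(width))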
Andreas Albert, Antonio Boveia, Oleg Brandt, Eric Corrigan, Zeynep Demiragli, Caterina Doglioni, Etienne Dreyer, Boyu Gao, Josh Greaves, Ulrich Haisch, Philip Harris, Greg Landsberg, Alexander Moreno, Katherine Pachal, Priscilla Pani, Federica Piazza, Tim M. P. Tait, David Yu, Felix Yu, Lian-Tao Wang
The search for dark matter is one of the main science drivers of the particle and astroparticle physics communities. Determining the nature of dark matter will require a broad approach, with a range of experiments pursuing different experimental hypotheses. Within this search program, collider experiments provide insights on dark matter which are complementary to direct/indirect detection experiments and to astrophysical evidence. To compare results from a wide variety of experiments, a common theoretical framework is required. The ATLAS and CMS experiments have adopted a set of simplified models which introduce two new particles, a dark matter particle and a mediator, and whose interaction strengths are set by the couplings of the mediator. So far, the presentation of LHC and future hadron collider results has focused on four benchmark scenarios with specific coupling values within these simplified models. In this work, we describe ways to extend those four benchmark scenarios to arbitrary couplings, and release the corresponding code for use in further studies. This will allow for more straightforward comparison of collider searches to accelerator experiments that are sensitive to smaller couplings, such as those for the US Community Study on the Future of Particle Physics (Snowmass 2021), and will give a more complete picture of the coupling dependence of dark matter collider searches when compared to direct and indirect detection searches. By using semi-analytical methods to rescale collider limits, we drastically reduce the computing resources needed relative to traditional approaches based on the generation of additional simulated signal samples.
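A highly simplified sketch of the rescaling idea is given below. It assumes, purely for illustration, that the signal rate of an s-channel simplified model scales as g_q^2 g_DM^2 / Gamma_tot in the narrow-width approximation and that the signal acceptance is unchanged; the released code handles the full coupling dependence. All numbers are invented placeholders.

    # Schematic coupling rescaling of a collider limit (toy numbers only).
    def gamma_tot(g_q, g_dm):
        return 0.5 * g_q**2 + 0.25 * g_dm**2      # toy partial-width coefficients

    def rate(g_q, g_dm):
        return g_q**2 * g_dm**2 / gamma_tot(g_q, g_dm)

    g_q0, g_dm0 = 0.25, 1.0                        # benchmark couplings
    sigma_bench, sigma_excluded = 0.50, 0.10       # predicted vs. excluded sigma (pb)

    # rescale the benchmark prediction to new, smaller couplings and re-test it
    g_q, g_dm = 0.10, 1.0
    sigma_new = sigma_bench * rate(g_q, g_dm) / rate(g_q0, g_dm0)
    print(f"predicted sigma at new couplings: {sigma_new:.3f} pb,",
          "excluded" if sigma_new > sigma_excluded else "not excluded")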
Mulugeta Weldezgina Asres, Christian Walter Omlin, Long Wang, David Yu, Pavel Parygin, Jay Dittmann, Georgia Karapostoli, Markus Seidel, Rosamaria Venditti, Luka Lambrecht, Emanuele Usai, Muhammad Ahmad, Javier Fernandez Menendez, Kaori Maeshima, the CMS-HCAL Collaboration
The Compact Muon Solenoid (CMS) experiment is a general-purpose detector for high-energy collisions at the Large Hadron Collider (LHC) at CERN. It employs an online data quality monitoring (DQM) system to promptly spot and diagnose particle data acquisition problems to avoid data quality loss. In this study, we present a semi-supervised spatio-temporal anomaly detection (AD) monitoring system for the physics particle reading channels of the Hadron Calorimeter (HCAL) of the CMS using three-dimensional digi-occupancy map data of the DQM. We propose the GraphSTAD system, which employs convolutional and graph neural networks to learn, respectively, local spatial characteristics induced by particles traversing the detector and global behavior owing to shared backend circuit connections and housing boxes of the channels. Recurrent neural networks capture the temporal evolution of the extracted spatial features. We validate the accuracy of the proposed AD system in capturing diverse channel fault types using LHC collision data sets. The GraphSTAD system achieves production-level accuracy and is being integrated into the CMS core production system for real-time monitoring of the HCAL. We provide a quantitative performance comparison with alternative benchmark models to demonstrate the advantages of the presented system. Code: https://github.com/muleina/CMS_HCAL_ML_OnlineDQM.
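A schematic, plain-PyTorch sketch of the hybrid idea follows: a small CNN extracts per-frame spatial features, a one-hop graph aggregation (normalized adjacency product) mimics mixing across channels that share backend connections, and a GRU tracks the temporal evolution. The shapes, the random graph, and the anomaly score are toy stand-ins, not the GraphSTAD architecture.

    # Toy hybrid CNN + graph-mixing + GRU pipeline (illustrative only).
    import torch

    torch.manual_seed(0)
    T, H, W, n_nodes = 6, 16, 16, 10           # time steps, map size, channel groups

    cnn = torch.nn.Sequential(
        torch.nn.Conv2d(1, 8, 3, padding=1), torch.nn.ReLU(),
        torch.nn.AdaptiveAvgPool2d(1), torch.nn.Flatten())     # -> 8 features per frame
    adj = torch.eye(n_nodes) + torch.rand(n_nodes, n_nodes).round()
    adj = adj / adj.sum(dim=1, keepdim=True)                   # row-normalized toy graph
    gru = torch.nn.GRU(input_size=8, hidden_size=16, batch_first=True)

    frames = torch.rand(T, 1, H, W)             # one occupancy map per time step
    spatial = cnn(frames)                       # (T, 8) local spatial features
    node_feats = spatial.unsqueeze(1).repeat(1, n_nodes, 1)    # copy to toy "nodes"
    node_feats = adj @ node_feats                              # neighborhood mixing
    temporal, _ = gru(node_feats.mean(dim=1).unsqueeze(0))     # (1, T, 16)
    print("toy anomaly score:", float(temporal[0, -1].norm()))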
Robert Gardner, Simone Pagan Griso, Stefan Hoeche, Karol Krizka, Fabio Maltoni, Andrew Melo, Meenakshi Narain, Isabel Ojalvo, Pascal Paschos, Laura Reina, Michael Schmitt, Horst Severini, Giordon Stark, John Stupak, Thiago Tomei, Alessandro Tricoli, David Yu
A description of Standard Model background Monte Carlo samples produced for studies related to future hadron colliders.
Jessica Ji, David Yu
This paper examines the merger and acquisition (M&A) process between COFCO and Mengniu Dairy, exploring the motivations behind this strategic move and identifying its key aspects. By analyzing both the financial and non-financial contributions of Mengniu Dairy to COFCO, this study provides valuable insights and references for future corporate M&A activities. The theoretical significance of this research lies in its focus on the relatively underexplored area of M&A within the dairy industry, particularly in terms of M&A contributions. Using the COFCO-Mengniu case as a model, the study broadens current research perspectives by assessing the impact of M&A from financial and non-financial standpoints, enriching the body of literature on dairy industry M&As.
Tulika Bose, Antonio Boveia, Caterina Doglioni, Simone Pagan Griso, James Hirschauer, Elliot Lipeles, Zhen Liu, Nausheen R. Shah, Lian-Tao Wang, Kaustubh Agashe, Juliette Alimena, Sebastian Baum, Mohamed Berkat, Kevin Black, Gwen Gardner, Tony Gherghetta, Josh Greaves, Maxx Haehn, Phil C. Harris, Robert Harris, Julie Hogan, Suneth Jayawardana, Abraham Kahn, Jan Kalinowski, Simon Knapen, Ian M. Lewis, Meenakshi Narain, Katherine Pachal, Matthew Reece, Laura Reina, Tania Robens, Alessandro Tricoli, Carlos E. M. Wagner, Riley Xu, Felix Yu, Filip Zarnecki, Amin Aboubrahim, Andreas Albert, Michael Albrow, Wolfgang Altmannshofer, Gerard Andonian, Artur Apresyan, Kétévi Adikle Assamagan, Patrizia Azzi, Howard Baer, Michael J. Baker, Avik Banerjee, Vernon Barger, Brian Batell, Martin Bauer, Hugues Beauchesne, Samuel Bein, Alexander Belyaev, Ankit Beniwal, Mikael Berggren, Prudhvi N. Bhattiprolu, Nikita Blinov, Alain Blondel, Oleg Brandt, Giacomo Cacciapaglia, Rodolfo Capdevilla, Marcela Carena, Cesare Cazzaniga, Francesco Giovanni Celiberto, Cari Cesarotti, Sergei V. Chekanov, Hsin-Chia Cheng, Thomas Y. Chen, Yuze Chen, R. Sekhar Chivukula, Matthew Citron, James Cline, Tim Cohen, Jack H. Collins, Eric Corrigan, Nathaniel Craig, Daniel Craik, Andreas Crivellin, David Curtin, Smita Darmora, Arindam Das, Sridhara Dasu, Annapaola de Cosa, Aldo Deandrea, Antonio Delgado, Zeynep Demiragli, David d'Enterria, Frank F. Deppisch, Radovan Dermisek, Nishita Desai, Abhay Deshpande, Jordy de Vries, Jennet Dickinson, Keith R. Dienes, Karri Folan Di Petrillo, Matthew J. Dolan, Peter Dong, Patrick Draper, Marco Drewes, Etienne Dreyer, Peizhi Du, Florian Eble, Majid Ekhterachian, Motoi Endo, Rouven Essig, Jesse N. Farr, Farida Fassi, Jonathan L. Feng, Gabriele Ferretti, Daniele Filipetto, Thomas Flacke, Karri Folan Di Petrillo, Roberto Franceschini, Diogo Buarque Franzosi, Keisuke Fujii, Benjamin Fuks, Sri Aditya Gadam, Boyu Gao, Aran Garcia-Bellido, Isabel Garcia Garcia, Maria Vittoria Garzelli, Stephen Gedney, Marie-Hélène Genest, Tathagata Ghosh, Mark Golkowski, Giovanni Grilli di Cortona, Emine Gurpinar Guler, Yalcin Guler, C. Guo, Nate Graf, Ulrich Haisch, Jan Hajer, Koichi Hamaguchi, Tao Han, Philip Harris, Sven Heinemeyer, Christopher S. Hill, Joshua Hiltbrand, Tova Ray Holmes, Samuel Homiller, Sungwoo Hong, Walter Hopkins, Shih-Chieh Hsu, Phil Ilten, Wasikul Islam, Sho Iwamoto, Daniel Jeans, Laura Jeanty, Haoyi Jia, Sergo Jindariani, Daniel Johnson, Felix Kahlhoefer, Yonatan Kahn, Paul Karchin, Thomas Katsouleas, Shin-ichi Kawada, Junichiro Kawamura, Chris Kelso, Elham E Khoda, Valery Khoze, Doojin Kim, Teppei Kitahara, Juraj Klaric, Michael Klasen, Kyoungchul Kong, Wojciech Kotlarski, Ashutosh V. Kotwal, Jonathan Kozaczuk, Richard Kriske, Suchita Kulkarni, Jason Kumar, Manuel Kunkel, Greg Landsberg, Kenneth Lane, Clemens Lange, Lawrence Lee, Jiajun Liao, Benjamin Lillard, Lingfeng Li, Shuailong Li, Shu Li, Jenny List, Tong Li, Hongkai Liu, Jia Liu, Jonathan D Long, Enrico Lunghi, Kun-Feng Lyu, Danny Marfatia, Dakotah Martinez, Stephen P. Martin, Navin McGinnis, Karrick McGinty, Krzysztof Mękała, Federico Meloni, Oleksii Mikulenko, Ming Huang, Rashmish K. Mishra, Manimala Mitra, Vasiliki A. Mitsou, Chang-Seong Moon, Alexander Moreno, Takeo Moroi, Gerard Mourou, Malte Mrowietz, Patric Muggli, Jurina Nakajima, Pran Nath, J. Nelson, Matthias Neubert, Laura Nosler, Maria Teresa Núñez Pardo de Vera, Nobuchika Okada, Satomi Okada, Vitalii A. 
Okorokov, Yasar Onel, Tong Ou, Maksym Ovchynnikov, Rojalin Padhan, Priscilla Pani, Luca Panizzi, Andreas Papaefstathiou, Kevin Pedro, Cristián Peña, Federica Piazza, James Pinfold, Deborah Pinna, Werner Porod, Chris Potter, Markus Tobias Prim, Stefano Profumo, James Proudfoot, Mudit Rai, Filip Rajec, Reese Ramos, Michael J. Ramsey-Musolf, Javier Resta-Lopez, Jürgen Reuter, Andreas Ringwald, Chiara Rizzi, Thomas G. Rizzo, Giancarlo Rossi, Richard Ruiz, L. Rygaard, Aakash A. Sahai, Shadman Salam, Pearl Sandick, Deepak Sathyan, Christiane Scherb, Pedro Schwaller, Leonard Schwarze, Pat Scott, Sezen Sekmen, Dibyashree Sengupta, S. Sen, Anna Sfyrla, Eric Shackelford, T. Sharma, Varun Sharma, Jessie Shelton, William Shepherd, Seodong Shin, Elizabeth H. Simmons, Zoie Sloneker, Carlos Vázquez Sierra, Torbjörn Sjöstrand, Scott Snyder, Huayang Song, Giordon Stark, Patrick Stengel, Joachim Stohr, Daniel Stolarski, Matt Strassler, Nadja Strobbe, Julia Gonski, Rebeca Gonzalez Suarez, Taikan Suehara, Shufang Su, Wei Su, Raza M. Syed, Tim M. P. Tait, Toshiki Tajima, Andy Tang, Xerxes Tata, Teodor Tchalokov, Andrea Thamm, Brooks Thomas, Natalia Toro, Nhan V. Tran, Loan Truong, Yu-Dai Tsai, Eva Tuecke, Nikhilesh Venkatasubramanian, Chris B. Verhaaren, Carl Vuosalo, Xiao-Ping Wang, Xing Wang, Yikun Wang, Zhen Wang, Christian Weber, Glen White, Martin White, Anthony G. Williams, Brady Williams, Mike Williams, Stephane Willocq, Alex Woodcock, Yongcheng Wu, Ke-Pan Xie, Keping Xie, Si Xie, C. -H. Yeh, Ryo Yonamine, David Yu, S. -S. Yu, Mohamed Zaazoua, Aleksander Filip Żarnecki, Kamil Zembaczynski, Danyi Zhang, Jinlong Zhang, Frank Zimmermann, Jose Zurita
Mulugeta Weldezgina Asres, Christian Walter Omlin, Jay Dittmann, Pavel Parygin, Joshua Hiltbrand, Seth I. Cooper, Grace Cummings, David Yu
Identifying outlier behavior among sensors and subsystems is essential for discovering faults and facilitating diagnostics in large systems. At the same time, exploring large systems with numerous multivariate data sets is challenging. This study presents a lightweight interconnection and divergence discovery mechanism (LIDD) to identify abnormal behavior in multi-system environments. The approach employs a multivariate analysis technique that first estimates the similarity heatmaps among the sensors for each system and then applies information retrieval algorithms to provide relevant multi-level interconnection and discrepancy details. Our experiment on the readout systems of the Hadron Calorimeter of the Compact Muon Solenoid (CMS) experiment at CERN demonstrates the effectiveness of the proposed method. Our approach clusters readout systems and their sensors consistent with the expected calorimeter interconnection configurations, while capturing unusual behavior in divergent clusters and estimating their root causes.
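The first LIDD stage can be illustrated with a similarity heatmap built from absolute Pearson correlations among sensors, followed by hierarchical clustering that lets a divergent channel fall into its own cluster. The synthetic data, distance threshold, and clustering choice below are assumptions for illustration, not the paper's exact procedure.

    # Illustrative sketch: sensor similarity heatmap + clustering (synthetic data).
    import numpy as np
    from scipy.cluster.hierarchy import fcluster, linkage

    rng = np.random.default_rng(0)
    base = rng.normal(size=(500, 1))
    sensors = base + 0.1 * rng.normal(size=(500, 8))   # 8 correlated sensors
    sensors[:, 7] = rng.normal(size=500)               # one divergent channel

    sim = np.abs(np.corrcoef(sensors.T))               # similarity heatmap (8 x 8)
    dist = 1.0 - sim[np.triu_indices(8, k=1)]          # condensed distance vector
    labels = fcluster(linkage(dist, method="average"), t=0.5, criterion="distance")
    print("cluster labels per sensor:", labels)        # the divergent sensor splits off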
Mulugeta Weldezgina Asres, Christian Walter Omlin, Long Wang, Pavel Parygin, David Yu, Jay Dittmann, The CMS-HCAL Collaboration
The proliferation of sensors brings an immense volume of spatio-temporal (ST) data in many domains, including monitoring, diagnostics, and prognostics applications. Data curation is a time-consuming process for a large volume of data, making it challenging and expensive to deploy data analytics platforms in new environments. Transfer learning (TL) mechanisms promise to mitigate data sparsity and model complexity by utilizing pre-trained models for a new task. Despite the triumph of TL in fields like computer vision and natural language processing, efforts on complex ST models for anomaly detection (AD) applications are limited. In this study, we present the potential of TL within the context of high-dimensional ST AD with a hybrid autoencoder architecture, incorporating convolutional, graph, and recurrent neural networks. Motivated by the need for improved model accuracy and robustness, particularly in scenarios with limited training data on systems with thousands of sensors, this research investigates the transferability of models trained on different sections of the Hadron Calorimeter of the Compact Muon Solenoid experiment at CERN. The key contributions of the study include exploring TL's potential and limitations within the context of encoder and decoder networks, revealing insights into model initialization and training configurations that enhance performance while substantially reducing trainable parameters and mitigating data contamination effects. Code: https://github.com/muleina/CMS_HCAL_ML_OnlineDQM.
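A minimal transfer-learning sketch, assuming a toy autoencoder, is shown below: reuse a "pretrained" encoder, freeze its parameters, and fine-tune only the decoder on limited target-section data, which is one of the configurations such a study would compare. The models and data are placeholders, not the paper's architecture.

    # Minimal TL sketch: frozen encoder, fine-tuned decoder (toy models/data).
    import torch

    torch.manual_seed(0)
    encoder = torch.nn.Sequential(torch.nn.Linear(64, 16), torch.nn.ReLU())
    decoder = torch.nn.Sequential(torch.nn.Linear(16, 64))

    # pretend `encoder` was trained on a source detector section, then freeze it
    for p in encoder.parameters():
        p.requires_grad = False

    opt = torch.optim.Adam(decoder.parameters(), lr=1e-3)
    target_data = torch.rand(256, 64)                  # limited target-section data
    for _ in range(50):
        recon = decoder(encoder(target_data))
        loss = torch.nn.functional.mse_loss(recon, target_data)
        opt.zero_grad(); loss.backward(); opt.step()

    frozen = sum(p.numel() for p in encoder.parameters())
    trained = sum(p.numel() for p in decoder.parameters())
    print(f"frozen params: {frozen}, trainable params: {trained}, loss: {loss.item():.4f}")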
Xiaozhou Liang, John Henry Burns, Joseph Sanchez, Karthik Dantu, Lukasz Ziarek, Yu David Liu
Unmanned Aerial Vehicles (UAVs) are an emerging computation platform known for their safety-critical requirements. In this paper, we conduct an empirical study on a widely used open-source UAV software framework, Paparazzi, with the goal of understanding the safety-critical concerns of UAV software from a bottom-up, developer-in-the-field perspective. We set our focus on the use of Bounding Functions (BFs), the runtime checks injected by Paparazzi developers on the range of variables. Through an in-depth analysis of BFs in the Paparazzi autopilot software, we found that a large number of them (109 instances) are used to bound safety-critical variables essential to the cyber-physical nature of the UAV, such as its thrust, its speed, and its sensor values. The novel contributions of this study are twofold. First, we take a static approach to classify all BF instances, presenting a novel datatype-based 5-category taxonomy with fine-grained insight into the role of BFs in ensuring the safety of UAV systems. Second, we dynamically evaluate the impact of the BF uses through a differential approach, establishing the UAV behavioral difference with and without BFs. Together, the two-pronged static and dynamic approach illuminates a rarely studied design space of safety-critical UAV software systems.
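In Python terms, a bounding function is essentially a runtime clamp on a safety-critical variable, as in the illustrative snippet below; the variable names and limits are invented and do not come from the Paparazzi code base.

    # Illustrative analogue of a bounding function: a runtime range check on a
    # safety-critical variable (names and limits are hypothetical).
    def bound(value, lo, hi):
        """Clamp `value` into [lo, hi], the kind of check the paper calls a BF."""
        return max(lo, min(hi, value))

    MAX_THRUST = 9600            # hypothetical autopilot command units
    commanded = 12000            # an out-of-range command from an upstream controller
    applied = bound(commanded, 0, MAX_THRUST)
    print("applied thrust:", applied)   # 9600: the BF prevents an unsafe actuation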
Yiannis Gkoufas, David Yu Yuan
Bioinformatics pipelines depend on shared POSIX filesystems for their input, output and intermediate data storage. Containerization makes it more difficult for the workloads to access these shared file systems. In our previous study, we were able to run both ML and non-ML pipelines on Kubeflow successfully. However, the storage solutions were complex and suboptimal, because there are no established resource types to represent the concept of a data source on Kubernetes. More and more applications are running on Kubernetes for batch processing, and end users are burdened with configuring and optimising data access, as we experienced ourselves. In this article, we introduce the new concept of a Dataset and its corresponding resource as a native Kubernetes object. We leverage the Dataset Lifecycle Framework (DLF), which takes care of all the low-level details of data access in Kubernetes pods. Its pluggable architecture is designed for the development of caching, scheduling and governance plugins. Together, they manage the entire lifecycle of the custom resource Dataset. We use the Dataset Lifecycle Framework to serve data from object stores to both ML and non-ML pipelines running on Kubeflow. With DLF, training data is fed into ML models directly without being downloaded to local disks, which makes the input scalable. We enhance the durability of training metadata by storing it in a dataset, which also simplifies the setup of TensorBoard, separate from the notebook server. For the non-ML pipeline, we simplify the 1000 Genomes Project pipeline with datasets injected into the pipeline dynamically. In addition, our preliminary results indicate that the pluggable caching mechanism can improve performance significantly.
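A hypothetical sketch of declaring such a Dataset through the Kubernetes Python client is shown below. The API group/version and spec fields are assumptions drawn from how object-store-backed Datasets are typically declared in DLF; consult the framework's CRD for the exact schema.

    # Hypothetical sketch: create a DLF Dataset backed by an object-store bucket.
    # Group/version and field names are assumptions for illustration.
    from kubernetes import client, config

    config.load_kube_config()
    dataset = {
        "apiVersion": "com.ie.ibm.hpsys/v1alpha1",   # assumed DLF group/version
        "kind": "Dataset",
        "metadata": {"name": "genomes-1000"},
        "spec": {"local": {
            "type": "COS",                           # object-store backed dataset
            "bucket": "1000-genomes",
            "endpoint": "https://s3.example.org",    # placeholder endpoint
            "secret-name": "cos-credentials",        # assumed credential reference
        }},
    }
    client.CustomObjectsApi().create_namespaced_custom_object(
        group="com.ie.ibm.hpsys", version="v1alpha1",
        namespace="default", plural="datasets", body=dataset)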
Chih-Wei Chang, Zhen Tian, Richard L. J. Qiu, H. Scott McGinnis, Duncan Bohannon, Pretesh Patel, Yinan Wang, David S. Yu, Sagar A. Patel, Jun Zhou, Xiaofeng Yang
This study aims to develop a digital twin (DT) framework to enhance adaptive proton stereotactic body radiation therapy (SBRT) for prostate cancer. Prostate SBRT has emerged as a leading option for external beam radiotherapy due to its effectiveness and reduced treatment duration. However, interfractional anatomy variations can impact treatment outcomes. This study seeks to address these uncertainties using the DT concept, with the goal of improving treatment quality and potentially revolutionizing prostate radiotherapy by offering personalized treatment solutions. Our study presents a pioneering approach that leverages DT technology to enhance adaptive proton SBRT. The framework improves treatment plans by utilizing patient-specific clinical target volume (CTV) setup uncertainty, which is usually smaller than that of conventional clinical setups. This research contributes to ongoing efforts to enhance the efficiency and efficacy of prostate radiotherapy, with the ultimate goal of improving patient outcomes and quality of life.