Showing 1–20 of 29 results
Date / Name

Jun 1, 2023 - FlexRound: Learnable Rounding based on Element-wise Division for Post-Training Quantization
May 23, 2023 - Memory-Efficient Fine-Tuning of Compressed Large Language Models via sub-4-bit Integer Quantization
May 21, 2025 - ReGUIDE: Data Efficient GUI Grounding via Spatial Reasoning and Search
Oct 8, 2022 - AlphaTuning: Quantization-Aware Parameter-Efficient Adaptation of Large-Scale Pre-Trained Language Models
Feb 11, 2026 - Benchmarks Are Not That Out of Distribution: Word Overlap Predicts Performance
Feb 4, 2025 - Peri-LN: Revisiting Normalization Layer in the Transformer Architecture
Sep 27, 2023 - Rethinking Channel Dimensions to Isolate Outliers for Low-bit Weight Quantization of Large Language Models
Aug 21, 2025 - Exploiting Vocabulary Frequency Imbalance in Language Model Pre-training
Jan 5, 2026 - HyperCLOVA X 8B Omni
Aug 16, 2025 - Temporal Grounding as a Learning Signal for Referring Video Object Segmentation
Apr 2, 2024 - HyperCLOVA X Technical Report
Apr 7, 2026 - What Models Know, How Well They Know It: Knowledge-Weighted Fine-Tuning for Learning When to Say "I Don't Know"
Jun 20, 2022 - LUT-GEMM: Quantized Matrix Multiplication based on LUTs for Efficient Inference in Large-Scale Generative Language Models
Jun 6, 2025 - Cross-lingual Collapse: How Language-Centric Foundation Models Shape Reasoning in Large Language Models
Feb 13, 2026 - Early-warning the compact-to-dendritic transition via spatiotemporal learning of two-dimensional growth images
May 31, 2021 - Towards a Federated Learning Framework for Heterogeneous Devices of Internet of Things
Jan 7, 2021 - OAAE: Adversarial Autoencoders for Novelty Detection in Multi-modal Normality Case via Orthogonalized Latent Space
Jun 11, 2024 - Improving Multi-hop Logical Reasoning in Knowledge Graphs with Context-Aware Query Representation Learning
Aug 15, 2023 - Over 30,000-fold field enhancement of terahertz nanoresonators enabled by rapid inverse design
Mar 6, 2026 - ReflexiCoder: Teaching Large Language Models to Self-Reflect on Generated Code and Self-Correct It via Reinforcement Learning