Ahmed Hussain, Ahmed Sultan, Asmaa Abdallah, Abdulkadir Celik, Ahmed M. Eltawil
Near-field beamforming enables target discrimination in both range (axial) and angle (lateral) dimensions. Elevated sidelobes along either dimension, however, increase susceptibility to interference and degrade detection performance. Conventional amplitude tapering techniques, designed for far-field scenarios, cannot simultaneously suppress axial and lateral sidelobes in the near-field. In this letter, we propose a Slepian-based amplitude tapering approach that maximizes mainlobe energy concentration, achieving significant sidelobe reduction in both dimensions. Numerical results show that the proposed taper improves peak sidelobe suppression by approximately 24 dB in the lateral domain and 10 dB in the axial domain compared to a conventional uniform window.
Ahmed Hussain, Asmaa Abdallah, Abdulkadir Celik, Emil Björnson, Ahmed M. Eltawil
Future wireless networks, deploying thousands of antenna elements, may operate in the radiative near-field (NF), enabling spatial multiplexing across both angle and range domains. Sparse arrays have the potential to achieve comparable performance with fewer antenna elements. However, fixed sparse array designs are generally suboptimal under dynamic user distributions, while movable antenna architectures rely on mechanically reconfigurable elements, introducing latency and increased hardware complexity. To address these limitations, we propose a reconfigurable array thinning approach that selectively activates a subset of antennas to form a flexible sparse array design without physical repositioning. We first analyze grating lobes for uniform sparse arrays in the angle and range domains, showing their absence along the range dimension. Based on the analysis, we develop two particle swarm optimization-based strategies: a grating-lobe-based thinned array (GTA) for grating-lobe suppression and a sum-rate-based thinned array (STA) for multiuser sum-rate maximization. Simulation results demonstrate that GTA outperforms conventional uniform sparse arrays, while STA achieves performance comparable to movable antennas, thereby offering a practical and efficient array deployment strategy without the associated mechanical complexity.
Ahmed Hussain, Asmaa Abdallah, Abdulkadir Celik, Ahmed M. Eltawil
Recent studies suggest that uniform circular arrays (UCAs) can extend the angular coverage of the radiative near-field region. This work investigates whether such enhanced angular coverage translates into improved spatial multiplexing performance when compared to uniform linear arrays (ULAs). To more accurately delineate the effective near-field region, we introduce the effective beamfocusing Rayleigh distance (EBRD), an angle-dependent metric that bounds the spatial region where beamfocusing remains effective. Closed-form expressions for both beamdepth and EBRD are derived for UCAs. Our analysis shows that, under a fixed antenna element count, ULAs achieve narrower beamdepth and a longer EBRD than UCAs. Conversely, under a fixed aperture length, UCAs provide slightly narrower beamdepth and a marginally longer EBRD. Simulation results further confirm that ULAs achieve a higher sum rate under the fixed element constraint, while UCAs offer a marginal performance gain under the fixed aperture constraint.
Ahmed Hussain, Asmaa Abdallah, Abdulkadir Celik, Ahmed M. Eltawil
With the deployment of large antenna arrays at high-frequency bands, future wireless communication systems are likely to operate in the radiative near-field (NF). Unlike far-field beam steering, NF beams can be focused on a spatial region with finite depth, enabling user multiplexing in both range and angle. In NF multiuser multiple-input multiple-output (MU-MIMO) systems, achievable rates are limited by interference arising from sidelobes in both the axial (range) and lateral (angle) dimensions. This work investigates how axial sidelobes (ASLs) vary with array geometry. Closed-form array gain expressions are derived to characterize ASLs for uniform planar arrays. Analytical results show that the uniform square array (USA) yields the lowest ASLs, followed by the uniform concentric circular array (UCCA), uniform linear array (ULA), and uniform circular array (UCA). Specifically, the USA achieves a peak sidelobe level (PSLL) of -17.6 dB versus -7.9 dB for the UCA. Numerical simulations confirm that the USA provides superior sidelobe suppression and the highest sum-rate performance.
Ahmed Hussain, Asmaa Abdallah, Abdulkadir Celik, Ahmed M. Eltawil
Ultra-massive multiple-input multiple-output (UM-MIMO) technology is a key enabler for 6G networks, offering exceptionally high data rates in millimeter-wave (mmWave) and Terahertz (THz) frequency bands. The deployment of large antenna arrays at high frequencies transitions wireless communication into the radiative near-field, where precise beam alignment becomes essential for accurate channel estimation. Unlike far-field systems, which rely on the angular domain only, near-field operation necessitates beam search across both angle and distance dimensions, leading to substantially higher training overhead. To address this challenge, we propose a discrete Fourier transform (DFT)-based beam alignment scheme to mitigate the training overhead. We highlight that the reduced path loss at shorter distances can compensate for the beamforming losses typically associated with using far-field codebooks in near-field scenarios. Additionally, far-field beamforming in the near-field exhibits angular spread, with its width determined by the user's range and angle. Leveraging this relationship, we develop a correlation interferometry (CI) algorithm, termed CI-DFT, to efficiently estimate user angle and range parameters. Simulation results demonstrate that the proposed scheme achieves performance close to exhaustive search in terms of achievable rate while significantly reducing the training overhead by 87.5%.
Ahmed Hussain, Asmaa Abdallah, Abdulkadir Celik, Ahmed M. Eltawil
With the deployment of large antenna arrays at high-frequency bands, future wireless communication systems are likely to operate in the radiative near-field. Unlike far-field beam steering, near-field beams can be focused within a spatial region of finite depth, enabling spatial multiplexing in both the angular and range dimensions. This paper derives the beamdepth for a generalized uniform rectangular array (URA) and investigates how array geometry influences the near-field beamdepth and the limits where near-field beamfocusing is achievable. To characterize the near-field boundary in terms of beamfocusing and spatial multiplexing gains, we define the effective beamfocusing Rayleigh distance (EBRD) for a generalized URA. Our analysis reveals that while a square URA achieves the narrowest beamdepth, the EBRD is maximized for a wide or tall URA. However, despite its narrow beamdepth, a square URA may experience a reduction in multiuser sum rate due to its severely constrained EBRD. Simulation results confirm that a wide or tall URA achieves a sum rate 3.5× higher than that of a square URA, benefiting from the extended EBRD and improved spatial multiplexing capabilities.
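The EBRD defined above refines the classical (far-field) Rayleigh distance, which scales with the square of the aperture over the wavelength. As a minimal numerical sketch of that baseline boundary only (not the paper's EBRD expression), with a hypothetical 28 GHz carrier and 0.5 m aperture:

```python
# Sketch of the classical Rayleigh distance d_R = 2 D^2 / lambda.
# The carrier frequency and aperture below are illustrative assumptions;
# the angle-dependent EBRD in the paper refines this textbook boundary.
c = 3e8                      # speed of light (m/s)
f = 28e9                     # carrier frequency (Hz), assumed
wavelength = c / f           # ~1.07 cm at 28 GHz
D = 0.5                      # aperture length (m), assumed

d_R = 2 * D**2 / wavelength  # classical near-/far-field boundary
print(f"Rayleigh distance: {d_R:.1f} m")  # ~46.7 m
```

Doubling the aperture quadruples this boundary, which is why large arrays at mmWave bands push users into the radiative near-field.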
Ahmed Hussain, Asmaa Abdallah, Ahmed Nasser, Abdulkadir Celik, Ahmed M. Eltawil
Conventional far-field multiple-input multiple-output (MIMO) channels are limited to a single spatial degree of freedom (DoF) under a line-of-sight (LoS) condition. In contrast, the radiative near field (NF) supports multiple spatial DoF, enabled by spherical wavefronts and the reduced spatial footprint at short ranges. While recent research indicates that the effective DoF (EDoF) increases in the NF, experimental validation and clear identification of the transition distances remain limited. In this letter, we develop an intuitive framework for characterizing the EDoF of a uniform linear array (ULA)-based MIMO system and derive two complementary analytical expressions: a closed-form formulation that relates the EDoF to the physical transmit beamwidth and receive aperture, and a discrete formulation based on the discrete Fourier transform (DFT) domain angular decomposition of the NF spherical wavefront, which is well suited for experimental evaluation. We further introduce the effective MIMO Rayleigh distance (EMRD) and the maximum spatial multiplexing distance (MSMD), which mark the distances where the EDoF reduces to one and attains its maximum, respectively. Experimental measurements using widely spaced phased arrays closely match the theoretical EDoF trends and validate the proposed distance metrics.
Ahmed Hussain, Asmaa Abdallah, Abdulkadir Celik, Emil Björnson, Ahmed M. Eltawil
With the deployment of large antenna arrays at high-frequency bands, future wireless communication systems are likely to operate in the radiative near-field. Unlike far-field beam steering, near-field beams can be focused on a spatial region with a finite depth, enabling spatial multiplexing in the range dimension. Moreover, in the line-of-sight MIMO near-field, multiple spatial degrees of freedom (DoF) are accessible, akin to a scattering-rich environment. In this paper, we derive the beamdepth for a generalized uniform rectangular array (URA) and investigate how the array geometry influences near-field beamdepth and its limits. We define the effective beamfocusing Rayleigh distance (EBRD) to characterize the near-field boundary with respect to beamfocusing and spatial multiplexing gains for the generalized URA. Our results demonstrate that under a fixed element count constraint, the array geometry has a strong impact on beamdepth, whereas this effect diminishes under a fixed aperture length constraint. Moreover, compared to uniform square arrays, elongated configurations such as uniform linear arrays (ULAs) yield narrower beamdepth and extend the effective near-field region defined by the EBRD. Building on these insights, we design a polar codebook for compressed-sensing-based channel estimation that leverages our findings. Simulation results show that the proposed polar codebook achieves a 2 dB NMSE improvement over state-of-the-art methods. Additionally, we present an analytical expression to quantify the effective spatial DoF in the near-field, revealing that they are also constrained by the EBRD. Notably, the maximum spatial DoF is achieved with a ULA configuration, outperforming a square URA in this regard.
Ahmed Hussain, Asmaa Abdallah, Abdulkadir Celik, Ahmed M. Eltawil
Integrated sensing and communication (ISAC) has emerged as a transformative paradigm, enabling situationally aware and perceptive next-generation wireless networks through the co-design of shared network resources. With the adoption of millimeter-wave (mmWave) and terahertz (THz) frequency bands, ultra-massive MIMO (UM-MIMO) systems and holographic surfaces unlock the potential of near-field (NF) propagation, characterized by spherical wavefronts that facilitate beam manipulation in both angular and range domains. This paper presents a unified approach to near-field beam training and sensing, introducing a dual-purpose codebook design that employs discrete Fourier transform (DFT)-based codebooks for coarse estimation of sensing parameters and polar codebooks for parameter refinement. Leveraging these range and angle estimates, a customized low-complexity space-time adaptive processing (STAP) technique is proposed for NF-ISAC to detect slow-moving targets and efficiently mitigate clutter. The interplay between the codebooks and the NF-STAP framework offers three key advantages: reduced communication beam training overhead, improved estimation accuracy, and minimal STAP computational complexity. Simulation results show that the proposed framework can reduce STAP complexity by three orders of magnitude, validating its efficacy and highlighting the potential of the proposed approach to seamlessly integrate NF communication and sensing functionalities in future wireless networks.
Ahmed Hussain, Asmaa Abdallah, Abdulkadir Celik, Ahmed M. Eltawil
Ultra-massive multiple-input multiple-output (UM-MIMO) leverages large antenna arrays at high frequencies, transitioning the communication paradigm into the radiative near-field (NF), where spherical wavefronts enable full-vector estimation of both target location and velocity. However, location and motion parameters become inherently coupled in this regime, making their joint estimation computationally demanding. To overcome this, we propose a novel approach that projects the received two-dimensional space-time signal onto the angle-Doppler domain using a two-dimensional discrete Fourier transform (2D-DFT). Our analysis reveals that the resulting angular spread is centered at the target's true angle, with its width determined by the target's range. Similarly, transverse motion induces a Doppler spread centered at the true radial velocity, with the width of the Doppler spread proportional to the transverse velocity. Exploiting these spectral characteristics, we develop a low-complexity algorithm that provides coarse estimates of angle, range, and velocity, which are subsequently refined using one-dimensional multiple signal classification (MUSIC) applied independently to each parameter. The proposed method enables accurate and efficient estimation of NF target motion parameters. Simulation results demonstrate a normalized mean squared error (NMSE) of -40 dB for location and velocity estimates compared to maximum likelihood estimation, while significantly reducing computational complexity.
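The angle-Doppler projection at the core of this approach can be sketched in a few lines. The snippet below is a simplified far-field-like illustration with hypothetical parameters (array size, spatial frequency `u0`, normalized Doppler `f0`): a 2D-DFT of a space-time snapshot peaks at the target's angle and Doppler bins, whereas the near-field curvature and transverse motion described in the abstract would spread energy around these peaks.

```python
import numpy as np

N, T = 64, 32  # antennas and slow-time snapshots (illustrative assumptions)

# Hypothetical single-target snapshot at spatial frequency u0 and
# normalized Doppler f0 (plane-wave model; NF curvature would widen
# the angular peak, transverse motion the Doppler peak).
u0, f0 = 0.25, 0.1
n = np.arange(N)[:, None]
t = np.arange(T)[None, :]
X = np.exp(1j * 2 * np.pi * (u0 * n + f0 * t))

# Project the space-time signal onto the angle-Doppler domain (2D-DFT).
AD = np.fft.fft2(X) / (N * T)
k, l = np.unravel_index(np.argmax(np.abs(AD)), AD.shape)

# Coarse estimates fall in the DFT bins nearest the true parameters;
# these would seed the per-parameter 1D MUSIC refinement.
u_hat, f_hat = k / N, l / T
print(u_hat, f_hat)
```

The coarse bin resolution is 1/N in the angle (spatial-frequency) domain and 1/T in the Doppler domain, which is why a subsequent refinement stage such as 1D MUSIC is useful.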
Zainab Khan, Ahmed Hussain, Mukesh Thakur, Arto Hellas, Panos Papadimitratos
The use of Service-Based Architecture in modern telecommunications has exponentially increased Network Functions (NFs) and Application Programming Interfaces (APIs), creating substantial operational complexities in service discovery and management. We introduce \textit{NEFMind}, a framework leveraging parameter-efficient fine-tuning of open-source Large Language Models (LLMs) to address these challenges. It integrates three core components: synthetic dataset generation from Network Exposure Function (NEF) API specifications, model optimization through Quantized Low-Rank Adaptation, and performance evaluation via GPT-4 Ref Score and BertScore metrics. Targeting 5G Service-Based Architecture APIs, our approach achieves an 85% reduction in communication overhead compared to manual discovery methods. Experimental validation using the open-source Phi-2 model demonstrates exceptional API call identification performance at 98-100% accuracy. The fine-tuned Phi-2 model delivers performance comparable to significantly larger models like GPT-4 while maintaining computational efficiency for telecommunications infrastructure deployment. These findings validate domain-specific, parameter-efficient LLM strategies for managing complex API ecosystems in next-generation telecommunications networks.
Saeif Al-Hazbi, Ahmed Hussain, Savio Sciancalepore, Gabriele Oligeri, Panos Papadimitratos
Radio Frequency Fingerprinting (RFF) techniques promise to authenticate wireless devices at the physical layer based on inherent hardware imperfections introduced during manufacturing. Such RF transmitter imperfections are reflected into over-the-air signals, allowing receivers to accurately identify the RF transmitting source. Recent advances in Machine Learning, particularly in Deep Learning (DL), have improved the ability of RFF systems to extract and learn complex features that make up the device-specific fingerprint. However, integrating DL techniques with RFF and operating the system in real-world scenarios presents numerous challenges, originating from the embedded systems and the DL research domains. This paper systematically identifies and analyzes the essential considerations and challenges encountered in the creation of DL-based RFF systems across their typical development life-cycle, which include (i) data collection and preprocessing, (ii) training, and finally, (iii) deployment. Our investigation provides a comprehensive overview of the current open problems that prevent real deployment of DL-based RFF systems while also discussing promising research opportunities to enhance the overall accuracy, robustness, and privacy of these systems.
Salahuddin Salahuddin, Ahmed Hussain, Jussi Löppönen, Toni Jutila, Panos Papadimitratos
While Large Language Models (LLMs) demonstrate exceptional natural language capabilities, general-purpose models lack specialized domain knowledge for effective cybersecurity analysis. In this work, we investigate Domain-Adaptive Continuous Pretraining (DAP) as a methodology for enhancing cybersecurity understanding in pretrained LLMs while preserving general language capabilities. We systematically adapted three decoder-based architectures -- Llama-3.1-8B, DeepSeek-R1-Distill-Qwen-14B, and Llama-3.3-70B-Instruct -- using a curated 126-million-word cybersecurity corpus from standards, academic literature, and various other sources. Our approach employed constrained training parameters and distributed FSDP training to balance domain specialization with knowledge preservation. Evaluation across three cybersecurity benchmarks, namely, CTI-MCQ, CyberMetric, and SecEval, demonstrates consistent improvements post-adaptation. The Llama-3.3-70B-Ins-DAP model achieved state-of-the-art accuracies of 0.718, 0.933, and 0.864, respectively, outperforming specialized models, including Llama-Primus-Base. Notably, competitive performance was achieved using substantially smaller datasets (118.8 million versus 2.77 billion tokens), demonstrating efficient domain specialization viability. We establish that targeted continuous pretraining enables effective cybersecurity domain adaptation with computational feasibility, providing foundations for specialized AI assistants in threat analysis, vulnerability assessment, and security documentation while challenging prevailing assumptions about data requirements for LLM specialization.
Johan Wahréus, Ahmed Hussain, Panos Papadimitratos
Large Language Models (LLMs) are increasingly deployed for task automation and content generation, yet their safety mechanisms remain vulnerable to circumvention through different jailbreaking techniques. In this paper, we introduce \textit{Content Concretization} (CC), a novel jailbreaking technique that iteratively transforms abstract malicious requests into concrete, executable implementations. CC is a two-stage process: first, generating initial LLM responses using lower-tier models with less constrained safety filters, then refining them through higher-tier models that process both the preliminary output and the original prompt. We evaluate our technique using 350 cybersecurity-specific prompts, demonstrating substantial improvements in jailbreak Success Rates (SRs), increasing from 7\% (no refinements) to 62\% after three refinement iterations, while maintaining a cost of 7.5\textcent~per prompt. Comparative A/B testing across nine different LLM evaluators confirms that outputs from additional refinement steps are consistently rated as more malicious and technically superior. Moreover, manual code analysis reveals that generated outputs execute with minimal modification, although optimal deployment typically requires target-specific fine-tuning. As harmful code generation capabilities continue to improve, these results highlight critical vulnerabilities in current LLM safety frameworks.
Nicolae Filat, Ahmed Hussain, Konstantinos Kalogiannis, Elena Burceanu
Streaming Continual Learning (CL) typically converts a continuous stream into a sequence of discrete tasks through temporal partitioning. We argue that this temporal taskification step is not a neutral preprocessing choice, but a structural component of evaluation: different valid splits of the same stream can induce different CL regimes and therefore different benchmark conclusions. To study this effect, we introduce a taskification-level framework based on plasticity and stability profiles, a profile distance between taskifications, and Boundary-Profile Sensitivity (BPS), which diagnoses how strongly small boundary perturbations alter the induced regime before any CL model is trained. We evaluate continual finetuning, Experience Replay, Elastic Weight Consolidation, and Learning without Forgetting on network traffic forecasting with CESNET-Timeseries24, keeping the stream, model, and training budget fixed while varying only the temporal taskification. Across 9-, 30-, and 44-day splits, we observe substantial changes in forecasting error, forgetting, and backward transfer, showing that taskification alone can materially affect CL evaluation. We further find that shorter taskifications induce noisier distribution-level patterns, larger structural distances, and higher BPS, indicating greater sensitivity to boundary perturbations. These results show that benchmark conclusions in streaming CL depend not only on the learner and the data stream, but also on how that stream is taskified, motivating temporal taskification as a first-class evaluation variable.
Johan Wahréus, Ahmed Hussain, Panos Papadimitratos
Large Language Models (LLMs) have transformed task automation and content generation across various domains while incorporating safety filters to prevent misuse. We introduce a novel jailbreaking framework that employs distributed prompt processing combined with iterative refinements to bypass these safety measures, particularly in generating malicious code. Our architecture consists of four key modules: prompt segmentation, parallel processing, response aggregation, and LLM-based jury evaluation. Tested on 500 malicious prompts across 10 cybersecurity categories, the framework achieves a 73.2% Success Rate (SR) in generating malicious code. Notably, our comparative analysis reveals that traditional single-LLM judge evaluation overestimates SRs (93.8%) compared to our LLM jury system (73.2%), with manual verification confirming that single-judge assessments often accept incomplete implementations. Moreover, we demonstrate that our distributed architecture improves SRs by 12% over the non-distributed approach in an ablation study, highlighting both the effectiveness of distributed prompt processing and the importance of robust evaluation methodologies in assessing jailbreak attempts.
Mayukh R. Gangopadhyay, Hussain Ahmed Khan, Yogesh
May 30, 2022 · astro-ph.CO We study two of the most theoretically promising models of inflation, namely Natural inflation and Mutated Hilltop inflation, in the Einstein-Gauss-Bonnet (EGB) gravity framework. We explore these models in the EGB framework while keeping the speed of gravitational waves equal to the speed of light, consistent with the $GW170817$ observations. This has a direct implication for the non-minimal coupling to the Gauss-Bonnet invariant in the action, giving the effective potential new features. We analyse not only the inflationary dynamics but also the reheating dynamics and the corresponding gravitational wave energy spectrum.
Ahmed Mohamed Hussain, Nada Abughanam, Panos Papadimitratos
The deployment of the Internet of Things (IoT) in smart cities and critical infrastructure has enhanced connectivity and real-time data exchange but introduced significant security challenges. While effective, cryptography can often be resource-intensive for small-footprint resource-constrained (i.e., IoT) devices. Radio Frequency Fingerprinting (RFF) offers a promising authentication alternative by using unique RF signal characteristics for device identification at the Physical (PHY)-layer, without resorting to cryptographic solutions. The challenge is two-fold: how to deploy such RFF at a large scale and for resource-constrained environments. Edge computing, processing data closer to its source, i.e., the wireless device, enables faster decision-making, reducing reliance on centralized cloud servers. Considering a modest edge device, we introduce two truly lightweight Edge AI-based RFF schemes tailored for resource-constrained devices. We implement two Deep Learning models, namely a Convolutional Neural Network and a Transformer-Encoder, to extract complex features from the IQ samples, forming device-specific RF fingerprints. We convert the models to TensorFlow Lite and evaluate them on a Raspberry Pi, demonstrating the practicality of Edge deployment. Evaluations demonstrate the Transformer-Encoder outperforms the CNN in identifying unique transmitter features, achieving high accuracy (> 0.95) and ROC-AUC scores (> 0.90) while maintaining a compact model size of 73 KB, appropriate for resource-constrained devices.
Hexu Li, Konstantinos Kalogiannis, Ahmed Mohamed Hussain, Panos Papadimitratos
Vehicle platooning, with vehicles traveling in close formation coordinated through Vehicle-to-Everything (V2X) communications, offers significant benefits in fuel efficiency and road utilization. However, it is vulnerable to sophisticated falsification attacks by authenticated insiders that can destabilize the formation and potentially cause catastrophic collisions. This paper addresses this challenge: misbehavior detection in vehicle platooning systems. We present AttentionGuard, a transformer-based framework for misbehavior detection that leverages the self-attention mechanism to identify anomalous patterns in mobility data. Our proposal employs a multi-head transformer-encoder to process sequential kinematic information, enabling effective differentiation between normal mobility patterns and falsification attacks across diverse platooning scenarios, including steady-state (no-maneuver) operation, join, and exit maneuvers. Our evaluation uses an extensive simulation dataset featuring various attack vectors (constant, gradual, and combined falsifications) and operational parameters (controller types, vehicle speeds, and attacker positions). Experimental results demonstrate that AttentionGuard achieves up to 0.95 F1-score in attack detection, with robust performance maintained during complex maneuvers. Notably, our system performs effectively with minimal latency (100 ms decision intervals), making it suitable for real-time transportation safety applications. Comparative analysis reveals superior detection capabilities and establishes the transformer-encoder as a promising approach for securing Cooperative Intelligent Transport Systems (C-ITS) against sophisticated insider threats.
Konstantinos Kalogiannis, Ahmed Mohamed Hussain, Hexu Li, Panos Papadimitratos
Vehicular platooning promises transformative improvements in transportation efficiency and safety through the coordination of multi-vehicle formations enabled by Vehicle-to-Everything (V2X) communication. However, the distributed nature of platoon coordination creates security vulnerabilities, allowing authenticated vehicles to inject falsified kinematic data, compromise operational stability, and pose a threat to passenger safety. Traditional misbehaviour detection approaches, which rely on plausibility checks and statistical methods, suffer from high False Positive (FP) rates and cannot capture the complex temporal dependencies inherent in multi-vehicle coordination dynamics. We present Attention In Motion (AIMformer), a transformer-based framework specifically tailored for real-time misbehaviour detection in vehicular platoons with edge deployment capabilities. AIMformer leverages multi-head self-attention mechanisms to simultaneously capture intra-vehicle temporal dynamics and inter-vehicle spatial correlations. It incorporates global positional encoding with vehicle-specific temporal offsets to handle join/exit maneuvers. We propose a Precision-Focused Binary Cross-Entropy (PFBCE) loss function that penalizes FPs to meet the requirements of safety-critical vehicular systems. Extensive evaluation across 4 platoon controllers, multiple attack vectors, and diverse mobility scenarios demonstrates superior performance ($\geq$ 0.93) compared to state-of-the-art baseline architectures. A comprehensive deployment analysis utilizing TensorFlow Lite (TFLite), Open Neural Network Exchange (ONNX), and TensorRT achieves sub-millisecond inference latency, making it suitable for real-time operation on resource-constrained edge platforms. These results validate AIMformer as viable for both in-vehicle and roadside infrastructure deployment.