Martino Trevisan, Ali Safari Khatouni, Danilo Giordano
Mobile networks have become ubiquitous, but running experiments on them is expensive and hard, given their complexity and diversity. Emulation can be the solution and, with ERRANT, we offer a realistic emulator of mobile networks based on a large measurement campaign on European Mobile Network Operators. It improves on the current situation, where tools and emulators only implement pre-defined profiles with built-in parameters that are not backed by real measurements.
Nikhil Jha, Thomas Favale, Luca Vassio, Martino Trevisan, Marco Mellia
With the advent of big data and the birth of data markets that sell personal information, individuals' privacy is of utmost importance. The classical response is anonymization, i.e., sanitizing the information that can directly or indirectly allow users' re-identification. The most popular solution in the literature is k-anonymity. However, it is hard to achieve k-anonymity on a continuous stream of data, as well as when the number of dimensions becomes high. In this paper, we propose a novel anonymization property called z-anonymity. Differently from k-anonymity, it can be achieved with zero delay on data streams and it is well suited for high-dimensional data. The idea at the base of z-anonymity is to release an attribute (an atomic piece of information) about a user only if at least z - 1 other users have presented the same attribute in a past time window. z-anonymity is weaker than k-anonymity, since it does not work on combinations of attributes but treats them individually. In this paper, we present a probabilistic framework to map the z-anonymity property onto k-anonymity. Our results show that a proper choice of the z-anonymity parameters allows the data curator to likely obtain a k-anonymized dataset, with a precisely measurable probability. We also evaluate a real use case, in which we consider the website visits of a population of users, and show that z-anonymity can work in practice for obtaining k-anonymity as well.
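The zero-delay release rule described above lends itself to a compact sketch. Below is a minimal, illustrative Python implementation (class and method names are our own, not taken from the paper's code): an attribute observed for a user is released only if at least z - 1 *other* users exposed the same attribute within the past time window.

```python
from collections import defaultdict


class ZAnonymizer:
    """Illustrative zero-delay z-anonymity filter (hypothetical sketch).

    An attribute is released only if, counting the current user, at
    least z users exposed it within the past `window` seconds; i.e.,
    at least z - 1 *other* users did so.
    """

    def __init__(self, z, window):
        self.z = z
        self.window = window
        # attribute -> {user: last time that user exposed it}
        self.seen = defaultdict(dict)

    def observe(self, t, user, attribute):
        users = self.seen[attribute]
        users[user] = t
        # evict observations that fell out of the time window
        for u in [u for u, ts in users.items() if t - ts > self.window]:
            del users[u]
        # decide at zero delay: release only if >= z users in the window
        return len(users) >= self.z
```

For instance, with z = 2 and a 10-second window, the first user exposing an attribute is suppressed, a second user arriving within the window is released, and a user arriving after the window has expired is suppressed again.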
Martino Trevisan
The last two decades witnessed tremendous advances in Information and Communications Technologies. Besides improvements in computational power and storage capacity, communication networks nowadays carry an amount of data that was not envisaged only a few years ago. Together with their pervasiveness, network complexity has increased at the same pace, leaving operators and researchers with few instruments to understand what happens in the networks and, on the global scale, on the Internet. Fortunately, recent advances in data science and machine learning come to the rescue of network analysts and allow analyses with a level of complexity and spatial/temporal scope not possible only 10 years ago. In my thesis, I take the perspective of an Internet Service Provider (ISP) and illustrate the challenges and possibilities of analyzing the traffic coming from modern operational networks. I make use of big data and machine learning algorithms and apply them to datasets coming from passive measurements of ISP and University Campus networks. The marriage between data science and network measurements is complicated by the complexity of machine learning algorithms and by the intrinsic multi-dimensionality and variability of this kind of data. As such, my work proposes and evaluates novel techniques, inspired by popular machine learning approaches but carefully tailored to operate on network traffic.
Carlos H. G. Ferreira, Fabricio Murai, Ana P. C. Silva, Jussara M. Almeida, Martino Trevisan, Luca Vassio, Marco Mellia, Idilio Drago
Instagram has been increasingly used as a source of information, especially among the youth. As a result, political figures now leverage the platform to spread opinions and political agendas. We here analyze online discussions on Instagram, notably on political topics, from a network perspective. Specifically, we investigate the emergence of communities of co-commenters, that is, groups of users who often interact by commenting on the same posts and may be driving the ongoing online discussions. In particular, we are interested in salient co-interactions, i.e., interactions of co-commenters that occur more often than expected by chance and under independent behavior. Unlike casual and accidental co-interactions, which normally happen in large volumes, salient co-interactions are key elements driving the online discussions and, ultimately, the information dissemination. We base our study on the analysis of 10 weeks of data centered around major elections in Brazil and Italy, following both politicians and other celebrities. We extract and characterize the communities of co-commenters in terms of topological structure, properties of the discussions carried out by community members, and how some community properties, notably community membership and topics, evolve over time. We show that communities discussing political topics tend to be more engaged in the debate by writing longer comments, using more emojis, hashtags and negative words than in other subjects. Also, communities built around political discussions tend to be more dynamic, although top commenters remain active and preserve community membership over time. Moreover, we observe a great diversity in discussed topics over time: whereas some topics attract attention only momentarily, others, centered around more fundamental political discussions, remain consistently active over time.
Leonardo Regano, Ali Safari Khatouni, Martino Trevisan, Alessio Viticchie
In recent years, ethical issues in the networking field have been gaining importance. In particular, there is an ongoing debate about how Internet Service Providers (ISPs) should collect and treat network measurements. This kind of information, such as flow records, carries interesting knowledge from multiple points of view: research, traffic engineering and e-commerce can benefit from measurements retrievable through inspection of network traffic. Nevertheless, in some cases they can carry personal information about the users exposed to monitoring, and thus raise several ethical issues. The modern web is very different from the one we experienced a few years ago; web services have converged to a few protocols (i.e., HyperText Transfer Protocol (HTTP) and HTTPS) and an ever-larger share of traffic is encrypted. The aim of this work is to provide insight into which information is still visible to ISPs in the modern web and to what extent it carries personal information. We show the ethical issues deriving from this new situation and provide general guidelines and best practices to cope with the collection of network traffic measurements.
Andrea Di Domenico, Gianluca Perna, Martino Trevisan, Luca Vassio, Danilo Giordano
Cloud gaming is a new class of services that promises to revolutionize the videogame market. It allows the user to play a videogame with basic equipment while using a remote server for the actual execution. The multimedia content is streamed through the network from the server to the user. This service requires low latency and a large bandwidth to work properly with low response time and high-definition video. Three leading tech companies (Google, Sony and NVIDIA) have entered this market with their own products, and others, like Microsoft and Amazon, are planning to launch their own platforms in the near future. However, these companies have so far released little information about their cloud gaming operation and how they utilize the network. In this work, we study these new cloud gaming services from the network point of view. We collect more than 200 packet traces under different application settings and network conditions for 3 cloud gaming services, namely Stadia from Google, GeForce Now from NVIDIA and PS Now from Sony. We analyze the employed protocols and the workload they impose on the network. We find that GeForce Now and Stadia use the RTP protocol to stream the multimedia content, with the latter relying on the standard WebRTC APIs. Both are bandwidth-hungry and consume up to 45 Mbit/s, depending on the network and video quality. PS Now instead uses only undocumented protocols and never exceeds 13 Mbit/s.
Martino Trevisan, Danilo Giordano, Idilio Drago, Ali Safari Khatouni
The third version of the Hypertext Transfer Protocol (HTTP) is currently in its final standardization phase by the IETF. Besides better security and increased flexibility, it promises benefits in terms of performance. HTTP/3 adopts a more efficient header compression schema and replaces TCP with QUIC, a transport protocol carried over UDP, originally proposed by Google and currently under standardization too. Although HTTP/3 early implementations already exist and some websites announce its support, it has been subject to few studies. In this work, we provide a first measurement study on HTTP/3. We testify how, during 2020, it has been adopted by some of the leading Internet companies such as Google, Facebook and Cloudflare. We run a large-scale measurement campaign toward thousands of websites adopting HTTP/3, aiming at understanding to what extent it achieves better performance than HTTP/2. We find that adopting websites often host most web page objects on third-party servers, which support only HTTP/2 or even HTTP/1.1. Our experiments show that HTTP/3 provides sizable benefits only in scenarios with high latency or very poor bandwidth. Despite the adoption of QUIC, we do not find benefits in case of high packet loss, but we observe large diversity across website providers' infrastructures.
Nikhil Jha, Martino Trevisan, Luca Vassio, Marco Mellia
To protect users' privacy, legislators have regulated the usage of tracking technologies, mandating the acquisition of users' consent before collecting data. Consequently, websites started showing more and more consent management modules -- i.e., Privacy Banners -- that visitors have to interact with to access the website content. They challenge the automatic collection of Web measurements, primarily to monitor the extensiveness of tracking technologies but also to measure Web performance in the wild. Privacy Banners in fact prevent crawlers from observing the actual website content. In this paper, we present a thorough measurement campaign focusing on popular websites in Europe and the US, visiting both landing and internal pages from different countries around the world. We engineer Priv-Accept, a Web crawler able to accept the privacy policies, as most users would do in practice. This lets us compare how webpages change before and after consent is given. Our results show that all measurements performed without dealing with Privacy Banners offer a very biased and partial view of the Web. After accepting the privacy policies, we observe an increase of up to 70 trackers, which in turn slows down the webpage load time by a factor of 2x-3x.
Martino Trevisan
Passive monitoring is a network measurement technique which analyzes the traffic carried by an operational network. It has several applications in traffic engineering, Quality of Experience monitoring and cyber security. However, it entails the processing of personal information, thus threatening users' privacy. In this work, we propose DPMon, a tool to run privacy-preserving queries on a dataset of passive network measurements. It exploits differential privacy to perturb query outputs and thus protect users' privacy. DPMon can exploit big data infrastructures running Apache Spark and operate on different data formats. We show that DPMon allows extracting meaningful insights from the data, while at the same time controlling the amount of disclosed information.
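The core idea of perturbing query outputs with calibrated noise can be sketched with the standard Laplace mechanism of differential privacy. This is a generic illustration, not DPMon's actual API; all names and parameters below are hypothetical.

```python
import math
import random


def laplace_noise(scale, rng):
    """Draw zero-mean Laplace noise via inverse-CDF sampling."""
    u = rng.random() - 0.5
    return -scale * math.copysign(math.log(1.0 - 2.0 * abs(u)), u)


def dp_count(true_count, epsilon, sensitivity=1.0, seed=None):
    """Release a count under epsilon-differential privacy by adding
    Laplace noise with scale sensitivity/epsilon (textbook Laplace
    mechanism; illustrative only, not DPMon's interface)."""
    rng = random.Random(seed)
    return true_count + laplace_noise(sensitivity / epsilon, rng)
```

A smaller epsilon (stronger privacy) yields noisier answers; for a counting query the sensitivity is 1, since adding or removing one user changes the count by at most one.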
Thomas Favale, Francesca Soro, Martino Trevisan, Idilio Drago, Marco Mellia
The COVID-19 pandemic led to the adoption of severe measures to counteract the spread of the infection. Social distancing and lockdown measures modified people's habits, while the Internet gained a major role in supporting remote working, e-teaching, online collaboration, gaming, video streaming, etc. All these sudden changes put unprecedented stress on the network. In this paper we analyze the impact of the lockdown enforcement on the Politecnico di Torino campus network. Right after the school shutdown on the 25th of February, PoliTO deployed its own in-house solution for virtual teaching. Ever since, the university has provided about 600 virtual classes daily, serving more than 16,000 students per day. Here, we report a picture of how the pandemic changed PoliTO's network traffic. We first focus on the usage of remote working and collaborative platforms. Given the peculiarity of PoliTO's in-house online teaching solution, we drill down on it, characterizing both the audience and the network footprint. Overall, we present a snapshot of the abrupt changes in campus traffic and learning due to COVID-19, and testify how the Internet has proved robust enough to successfully cope with the challenges and maintain the university operations.
Martino Trevisan, Luca Vassio, Danilo Giordano
The COVID-19 pandemic is not only having a heavy impact on healthcare but also changing people's habits and the society we live in. Countries such as Italy have enforced a total lockdown lasting several months, with most of the population forced to remain at home. During this time, online social networks, more than ever, have represented an alternative solution for social life, allowing users to interact and debate with each other. Hence, it is of paramount importance to understand the changing use of social networks brought about by the pandemic. In this paper, we analyze how the interaction patterns around popular influencers in Italy changed during the first six months of 2020, within the Instagram and Facebook social networks. We collected a large dataset for this group of public figures, including more than 54 million comments on over 140 thousand posts during these months. We analyze and compare engagement on the posts of these influencers and provide quantitative figures for aggregated user activity. We further show the changes in the patterns of usage before and during the lockdown, which demonstrate a growth in activity and sizable daily and weekly variations. We also analyze user sentiment through the psycholinguistic properties of comments, and the results testify to the rapid rise and disappearance of topics related to the pandemic. To support further analyses, we release the anonymized dataset.
Martino Trevisan, Stefano Traverso, Hassan Metwalley, Marco Mellia
In 2002, the European Union (EU) introduced the ePrivacy Directive to regulate the usage of online tracking technologies. Its aim is to make tracking mechanisms explicit while increasing privacy awareness among users. It mandates that websites ask for explicit consent before using any kind of profiling methodology, e.g., cookies. Since 2013 the Directive has been mandatory, and now most European websites embed a "Cookie Bar" to explicitly ask for the user's consent. To the best of our knowledge, no study has focused on checking whether a website respects the Directive. To this end, we engineer CookieCheck, a simple tool that makes this check automatic. We use it to run a measurement campaign on more than 35,000 websites. Results depict a dramatic picture: 65% of websites do not respect the Directive and install tracking cookies before the user is even offered the accept button. In a few words, we testify to the failure of the ePrivacy Directive. Among the motivations, we identify the absence of rules enabling systematic auditing procedures, the lack of tools allowing the designated agencies to verify its implementation, and the technical difficulties of webmasters in implementing it.
Laura Arditti, Martino Trevisan, Luca Vassio, Alberto De Lazzari, Alberto Danese
Payment platforms have significantly evolved in recent years to keep pace with the proliferation of online and cashless payments. These platforms are increasingly aligned with online social networks, allowing users to interact with each other and transfer small amounts of money in a Peer-to-Peer fashion. This poses new challenges for analysing payment data, as traditional methods are only user-centric or business-centric and neglect the network users build during the interaction. This paper proposes a first methodology for measuring user value in modern payment platforms. We combine quantitative user-centric metrics with an analysis of the graph created by users' activities and its topological features inspired by the evolution of opinions in social networks. We showcase our approach using a dataset from a large operational payment platform and show how it can support business decisions and marketing campaign design, e.g., by targeting specific users.
Giuseppe Siracusano, Roberto Bifulco, Martino Trevisan, Tobias Jacobs, Simon Kuenzer, Stefano Salsano, Nicola Blefari-Melazzi, Felipe Huici
We explore the opportunities and design options enabled by novel SDN and NFV technologies, by re-designing a dynamic Content Delivery Network (CDN) service. Our system, named MOSTO, provides performance levels comparable to that of a regular CDN, but does not require the deployment of a large distributed infrastructure. In the process of designing the system, we identify relevant functions that could be integrated in the future Internet infrastructure. Such functions greatly simplify the design and effectiveness of services such as MOSTO. We demonstrate our system using a mixture of simulation, emulation, testbed experiments and by realizing a proof-of-concept deployment in a planet-wide commercial cloud system.
Gabriele Merlach, Damiano Ravalico, Martino Trevisan, Fabio Palmese, Giovanni Baccichet, Alessandro E. C. Redondi
Flow records, that summarize the characteristics of traffic flows, represent a practical and powerful way to monitor a network. While they already offer significant compression compared to full packet captures, their sheer volume remains daunting, especially for large Internet Service Providers (ISPs). In this paper, we investigate several lossy compression techniques to further reduce storage requirements while preserving the utility of flow records for key tasks, such as predicting the domain name of contacted servers. Our study evaluates scalar quantization, Principal Component Analysis (PCA), and vector quantization, applied to a real-world dataset from an operational campus network. Results reveal that scalar quantization provides the best tradeoff between compression and accuracy. PCA can preserve predictive accuracy but hampers subsequent entropic compression, and while vector quantization shows promise, it struggles with scalability due to the high-dimensional nature of the data. These findings result in practical strategies for optimizing flow record storage in large-scale monitoring scenarios.
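Uniform scalar quantization, the technique the study finds to offer the best compression/accuracy tradeoff, can be sketched in a few lines. The function below is a generic illustration (name and interface are ours, not the paper's code): each feature value is mapped to one of 2^n_bits levels spanning its observed range and reconstructed as the level centre.

```python
def quantize(values, n_bits):
    """Uniform scalar quantizer (illustrative sketch).

    Maps each value to one of 2**n_bits integer codes over the
    observed [min, max] range, then reconstructs each code as the
    centre of its quantization bin.
    """
    lo, hi = min(values), max(values)
    levels = 2 ** n_bits
    step = (hi - lo) / levels or 1.0  # guard against a constant column
    # assign each value to a bin; clamp the maximum into the last bin
    codes = [min(int((v - lo) / step), levels - 1) for v in values]
    # reconstruct at bin centres for downstream analytics
    recon = [lo + (c + 0.5) * step for c in codes]
    return codes, recon
```

Fewer bits mean smaller codes (and better downstream entropic compression, since the alphabet shrinks) at the cost of larger reconstruction error on the flow features.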
Fabio Palmese, Gabriele Merlach, Damiano Ravalico, Martino Trevisan, Alessandro E. C. Redondi
Network traffic analysis increasingly relies on feature-based representations to support monitoring and security in the presence of pervasive encryption. Although features are more compact than raw packet traces, their storage has become a scalability bottleneck from large-scale core networks to resource-constrained Internet of Things (IoT) environments. This article investigates task-aware lossy compression strategies that reduce the storage footprint of traffic features while preserving analytics accuracy. Using website classification in core networks and device identification in IoT environments as representative use cases, we show that simple, semantics-preserving compression techniques expose stable operating regions that balance storage efficiency and task performance. These results highlight compression as a first-class design dimension in scalable network monitoring systems.
Martino Trevisan, Luca Vassio, Idilio Drago, Marco Mellia, Fabricio Murai, Flavio Figueiredo, Ana Paula Couto da Silva, Jussara M. Almeida
Online Social Networks (OSNs) allow personalities and companies to communicate directly with the public, bypassing the filters of traditional media. As people rely on OSNs to stay up-to-date, the political debate has moved online too. We witness the sudden explosion of harsh political debates and the dissemination of rumours in OSNs. Identifying such behaviour requires a deep understanding of how people interact via OSNs during political debates. We present a preliminary study of interactions in a popular OSN, namely Instagram. We take Italy as a case study in the period before the 2019 European Elections. We observe the activity of top Italian Instagram profiles in different categories: politics, music, sport and show business. We record their posts for more than two months, tracking "likes" and comments from users. Results suggest that profiles of politicians attract markedly different interactions than those of other categories. People tend to comment more, with longer comments, debating for a longer time, with a large number of replies, most of which are not explicitly solicited. Moreover, comments tend to come from a small group of very active users. Finally, we witness substantial differences when comparing profiles of different parties.
Nikhil Jha, Martino Trevisan, Emilio Leonardi, Marco Mellia
Web tracking through third-party cookies is considered a threat to users' privacy and is supposed to be abandoned in the near future. Recently, Google proposed the Topics API framework as a privacy-friendly alternative for behavioural advertising. Using this approach, the browser builds a user profile based on navigation history, which advertisers can access. The Topics API has the possibility of becoming the new standard for behavioural advertising, thus it is necessary to fully understand its operation and find possible limitations. This paper evaluates the robustness of the Topics API to a re-identification attack where an attacker reconstructs the user profile by accumulating the user's exposed topics over time, to later re-identify the same user on a different website. Using real traffic traces and realistic population models, we find that the Topics API mitigates but cannot prevent re-identification from taking place, as there is a sizeable chance that a user's profile is unique within a website's audience. Consequently, the probability of correct re-identification can reach 15-17%, considering a pool of 1,000 users. We offer the code and data we use in this work to stimulate further studies and the tuning of the Topics API parameters.
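The attack model can be illustrated with a toy computation of profile uniqueness: once the attacker has accumulated the topics each user exposed over time, exact profile matching re-identifies any user whose topic set is unique within the pool. The sketch below is our own simplification, not the paper's actual methodology.

```python
from collections import Counter


def reidentification_rate(profiles):
    """Fraction of users whose accumulated topic profile is unique
    in the pool, i.e., users an attacker re-identifies by exact
    profile matching (toy model of the attack surface).

    `profiles` is a list of sets of topic IDs, one set per user.
    """
    counts = Counter(frozenset(p) for p in profiles)
    unique = sum(1 for p in profiles if counts[frozenset(p)] == 1)
    return unique / len(profiles)
```

For example, in a pool of three users where two share the exact same topic set, only the third user is re-identifiable, giving a rate of 1/3. Real attacks must also cope with the per-epoch random-topic noise the Topics API injects, which this toy model ignores.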
Nikhil Jha, Martino Trevisan, Marco Mellia, Daniel Fernandez, Rodrigo Irarrazaval
In response to growing concerns about user privacy, legislators have introduced new regulations and laws such as the General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA) that force websites to obtain user consent before activating personal data collection, fundamental to providing targeted advertising. The cornerstone of this consent-seeking process is the use of Privacy Banners, the technical mechanism to collect users' approval for data collection practices. Consent Management Platforms (CMPs) have emerged as practical solutions to make it easier for website administrators to properly manage consent, allowing them to outsource the complexities of managing user consent and activating advertising features. This paper presents a detailed and longitudinal analysis of the evolution of CMPs spanning nine years. We take a twofold perspective: Firstly, thanks to the HTTP Archive dataset, we provide insights into the growth, market share, and geographical spread of CMPs. Noteworthy observations include the substantial impact of GDPR on the proliferation of CMPs in Europe. Secondly, we analyse millions of user interactions with a medium-sized CMP present on thousands of websites worldwide. We observe how even small changes in the design of Privacy Banners have a critical impact on users giving or denying their consent to data collection. For instance, over 60% of users do not consent when offered a simple "one-click reject-all" option. Conversely, when opting out requires more than one click, about 90% of users prefer to simply give their consent. The main objective is in fact to eliminate the annoying privacy banner rather than to make an informed decision. Curiously, we observe that iOS users exhibit a higher tendency to accept cookies compared to Android users, possibly indicating greater confidence in the privacy offered by Apple devices.
Andrea Morichetta, Martino Trevisan, Luca Vassio
Web pornography represents a large fraction of the Internet traffic, with thousands of websites and millions of users. Studying web pornography consumption helps in understanding human behaviors and is crucial for medical and psychological research. However, given the lack of public data, these works typically build on surveys, which are limited by different factors, e.g., unreliable answers that volunteers may (involuntarily) provide. In this work, we collect anonymized accesses to pornography websites using HTTP-level passive traces. Our dataset includes about 15,000 broadband subscribers over a period of 3 years. We use it to provide quantitative information about the interactions of users with pornographic websites, focusing on time and frequency of use, habits, and trends. We distribute our anonymized dataset to the community to ease reproducibility and allow further studies.