Ali Goli, Jason Huang, David Reiley, Nickolai M. Riabov
A randomized experiment with almost 35 million Pandora listeners enables us to measure the sensitivity of consumers to advertising, an important topic of study in the era of ad-supported digital content provision. The experiment randomized listeners into nine treatment groups, each of which received a different level of audio advertising interrupting their music listening, with the highest treatment group receiving more than twice as many ads as the lowest treatment group. By maintaining consistent treatment assignment for 21 months, we measure long-run demand effects and find ad-load sensitivity three times greater than what we would have obtained from a month-long experiment. We show the negative impact on the number of hours listened, days listened, and probability of listening at all in the final month. Using an experimental design that separately varies the number of commercial interruptions per hour and the number of ads per commercial interruption, we find that listeners primarily respond to the total number of ads per hour, with a slight preference for more frequent but shorter ad breaks. Lastly, we find that increased ad load led to an increase in the number of paid ad-free subscriptions to Pandora. Importantly, we show that observational methods often lead to biased or even directionally incorrect estimates of these effects, highlighting the value of experimental data.
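The abstract's closing claim, that observational methods can be directionally wrong about ad-load sensitivity, can be illustrated with a toy simulation (all numbers below are hypothetical, not from the paper): listeners with a high unobserved taste for listening hear more ads in total simply because they listen more, so regressing hours on ad exposure in observational data flips the sign of the true effect, while regressing on the randomized ad load recovers it.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Unobserved taste for listening: heavy listeners listen more AND
# therefore hear more ads in total, confounding observational data.
u = rng.normal(0.0, 1.0, n)
ad_load = rng.choice(np.linspace(3, 7, 9), size=n)  # randomized: 9 arms of ads/hour
hours = 20 + 5 * u - 1.0 * ad_load + rng.normal(0.0, 2.0, n)  # true effect: -1.0

# Experimental estimate: regress hours on the *randomized* ad load.
slope_exp = np.polyfit(ad_load, hours, 1)[0]

# Naive observational estimate: regress hours on total ads heard,
# which mechanically rises with hours listened.
ads_heard = ad_load * hours
slope_obs = np.polyfit(ads_heard, hours, 1)[0]

print(slope_exp)  # near the true effect of -1.0
print(slope_obs)  # directionally wrong: positive
```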
Ali Goli, Mahdieh Alizadeh, Hadi Sadoghi Yazdi
Manifold learning techniques, such as locally linear embedding (LLE), are designed to preserve the local neighborhood structures of high-dimensional data during dimensionality reduction. Traditional LLE employs Euclidean distance to define neighborhoods, which can struggle to capture the intrinsic geometric relationships within complex data. A novel approach, adaptive locally linear embedding (ALLE), is introduced to address this limitation by incorporating a dynamic, data-driven metric that enhances topological preservation. This method redefines the concept of proximity by focusing on topological neighborhood inclusion rather than fixed distances. By adapting the metric based on the local structure of the data, it achieves superior neighborhood preservation, particularly for datasets with complex geometries and high-dimensional structures. Experimental results demonstrate that ALLE significantly improves the alignment between neighborhoods in the input and feature spaces, resulting in more accurate and topologically faithful embeddings. This approach advances manifold learning by tailoring distance metrics to the underlying data, providing a robust solution for capturing intricate relationships in high-dimensional datasets.
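ALLE itself is not available in standard libraries, but the baseline it modifies, LLE with fixed Euclidean k-nearest neighborhoods, can be sketched with scikit-learn (parameter values are illustrative):

```python
from sklearn.datasets import make_swiss_roll
from sklearn.manifold import LocallyLinearEmbedding

# A 2-D manifold (the swiss roll) embedded in R^3.
X, _ = make_swiss_roll(n_samples=1000, random_state=0)

# Standard LLE: neighborhoods are the k nearest points in Euclidean
# distance -- the fixed-metric choice that ALLE replaces with an
# adaptive, topology-aware one.
lle = LocallyLinearEmbedding(n_neighbors=12, n_components=2, random_state=0)
Y = lle.fit_transform(X)
print(Y.shape)  # (1000, 2)
```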
Ali Goli, S. Hamed Hassani, Rudiger Urbanke
We consider the problem of determining the trade-off between the rate and the block-length of polar codes for a given block error probability when we use the successive cancellation decoder. We take the sum of the Bhattacharyya parameters as a proxy for the block error probability, and show that there exists a universal parameter $\mu$ such that for any binary memoryless symmetric channel $W$ with capacity $I(W)$, reliable communication requires rates that satisfy $R < I(W) - \alpha N^{-\frac{1}{\mu}}$, where $\alpha$ is a positive constant and $N$ is the block-length. We provide lower bounds on $\mu$, namely $\mu \geq 3.553$, and we conjecture that indeed $\mu = 3.627$, the parameter for the binary erasure channel.
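The scaling in the bound can be made concrete: if the rate backs off from capacity by a gap $\varepsilon = \alpha N^{-1/\mu}$, then the block-length needed for a gap of at most $\varepsilon$ is $N = (\alpha/\varepsilon)^{\mu}$, so halving the gap multiplies the required block-length by $2^{\mu} \approx 12.4$. A small numerical sketch, using the conjectured $\mu = 3.627$ and (since the paper only states that $\alpha$ is some positive constant) setting $\alpha = 1$ purely for illustration:

```python
# Invert eps = alpha * N**(-1/mu) to get the required block-length
# N = (alpha / eps)**mu.  mu = 3.627 is the conjectured universal
# value (the binary erasure channel parameter); alpha = 1 is an
# arbitrary choice of the unspecified positive constant.
MU = 3.627
ALPHA = 1.0

def required_blocklength(eps, alpha=ALPHA, mu=MU):
    return (alpha / eps) ** mu

for eps in (0.10, 0.05, 0.01):
    print(eps, required_blocklength(eps))

# Halving the gap to capacity multiplies the required block-length
# by 2**mu, roughly 12.4.
```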
Ali Goli, Amandeep Singh
We explore the viability of Large Language Models (LLMs), specifically OpenAI's GPT-3.5 and GPT-4, in emulating human survey respondents and eliciting preferences, with a focus on intertemporal choices. Leveraging the extensive literature on intertemporal discounting for benchmarking, we examine responses from LLMs across various languages and compare them to human responses, exploring preferences between smaller, sooner, and larger, later rewards. Our findings reveal that both GPT models demonstrate less patience than humans, with GPT-3.5 exhibiting a lexicographic preference for earlier rewards, unlike human decision-makers. Though GPT-4 does not display lexicographic preferences, its measured discount rates are still considerably larger than those found in humans. Interestingly, GPT models show greater patience in languages with weak future tense references, such as German and Mandarin, aligning with existing literature that suggests a correlation between language structure and intertemporal preferences. We demonstrate how prompting GPT to explain its decisions, a procedure we term "chain-of-thought conjoint," can mitigate, but does not eliminate, discrepancies between LLM and human responses. While directly eliciting preferences using LLMs may yield misleading results, combining chain-of-thought conjoint with topic modeling aids in hypothesis generation, enabling researchers to explore the underpinnings of preferences. Chain-of-thought conjoint provides a structured framework for marketers to use LLMs to identify potential attributes or factors that can explain preference heterogeneity across different customers and contexts.
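The benchmark behind these comparisons, inferring a discount rate from a choice between a smaller, sooner and a larger, later reward, reduces to a one-line formula under standard exponential discounting; the reward amounts below are hypothetical, not drawn from the paper:

```python
import math

def implied_annual_discount_rate(smaller_sooner, larger_later, delay_years):
    """Continuously compounded rate r at which a respondent who is exactly
    indifferent discounts the future: smaller_sooner = larger_later * exp(-r * delay)."""
    return math.log(larger_later / smaller_sooner) / delay_years

# Hypothetical: indifference between $100 today and $120 in one year
# implies r = ln(1.2), about 18.2% per year.
r = implied_annual_discount_rate(100, 120, 1.0)
print(round(r, 3))  # 0.182
```

A respondent who always takes the sooner reward regardless of how large the later one grows (the lexicographic pattern the abstract attributes to GPT-3.5) has no finite indifference point, so no discount rate of this form rationalizes the choices.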
Hossein Hosseini, Ali Goli, Neda Barzegar Marvasti, Masoume Azghani, Farokh Marvasti
In this paper, we propose a method for image block loss restoration based on the notion of sparse representation. We use the sparsity pattern as side information to efficiently restore block losses by iteratively imposing the constraints of spatial and transform domains on the corrupted image. Two novel features, including a pre-interpolation and a criterion for stopping the iterations, are proposed to improve the performance. Also, to deal with practical applications, we develop a technique to transmit the side information along with the image. In this technique, we first compress the side information and then embed its LDPC coded version in the least significant bits of the image pixels. This technique ensures the error-free transmission of the side information, while causing only a small perturbation on the transmitted image. Mathematical analysis and extensive simulations are performed to validate the method and investigate the efficiency of the proposed techniques. The results verify that the proposed method outperforms its counterparts for image block loss restoration.
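The paper's full method relies on a transmitted sparsity pattern, pre-interpolation, and a stopping criterion, none of which is reproduced here; a stripped-down sketch of the underlying idea of alternating spatial- and transform-domain constraints, with crude hard-thresholding of Fourier coefficients standing in for the sparsity constraint, might look like:

```python
import numpy as np

def restore_block(img, mask, n_iters=50, keep_frac=0.1):
    """Alternate between a crude transform-domain sparsity constraint
    (keep only the largest-magnitude FFT coefficients) and the
    spatial-domain constraint that known pixels must match."""
    x = img * mask                        # lost block zeroed out
    for _ in range(n_iters):
        X = np.fft.fft2(x)
        thresh = np.quantile(np.abs(X), 1 - keep_frac)
        X[np.abs(X) < thresh] = 0         # sparsity projection
        x = np.real(np.fft.ifft2(X))
        x[mask] = img[mask]               # re-impose known pixels
    return x

# Demo on a synthetic image that is sparse in the Fourier domain.
n = 32
yy, xx = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
img = np.cos(2 * np.pi * xx / n) + np.cos(2 * np.pi * yy / n)
mask = np.ones((n, n), dtype=bool)
mask[12:16, 12:16] = False                # a 4x4 lost block
rec = restore_block(img, mask)
err_zero_fill = np.linalg.norm(img[~mask])         # error of doing nothing
err_restored = np.linalg.norm((rec - img)[~mask])  # error after restoration
print(err_zero_fill, err_restored)
```

On this synthetic example the restored block error falls well below the zero-fill error; natural images are only approximately sparse, which is why the paper adds the side-information machinery.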