From Stars to Dogs – Identifying “Out-Of-Favor” Products on E-Commerce Platforms: A Data Analytic Approach to System Design

By Anton Ivanov, Abhijeet Ghoshal, and Akhil Kumar📧 

In Production and Operations Management, 2024. https://doi.org/10.1177/10591478241282327

Online retail platforms are increasingly challenged by the proliferation of low-quality products, which can damage their reputation and sales. To address this problem, we propose a system architecture that proactively identifies products likely to go “out of favor”. Our approach uses historical data to extract useful information from customer ratings and textual reviews. Available data are fed into a state-of-the-art deep learning sequence model to forecast future ratings. We then analyze rating trends, extracting hyperparameters that a binary classifier uses to label products as “out of favor” or not. We tested this system on an Amazon dataset comprising nearly 800,000 observations across 2,826 electronics products. Our results show that the Long Short-Term Memory (LSTM) model outperforms other benchmarks in forecasting future product ratings. Ablation analysis shows that sentiment-related features improve rating forecasts by up to 40%, with review topics adding 10% and other review characteristics 4%. Counterintuitively, topic extraction from reviews does not provide substantial benefits, despite the heavy computational resources it requires. Finally, the two-stage classification process, which leverages time-series data and rating trends, offers more stable and robust performance than conventional single-stage methods. We provide considerations for system architecture development through robustness checks that ensure its resilience to stressors. Our experiments indicate that rating trends can change in subtle ways over time, turning a promising “star” product into a liability (a “dog”). E-commerce platforms can use the proposed system architecture to proactively identify and remove potentially dubious products instead of waiting to take reactive action.
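
For readers who want to experiment with the idea, here is a minimal sketch of the two-stage pipeline: an LSTM that forecasts a product's future rating from its review history, followed by a trend-based rule that flags decaying products. It assumes PyTorch, and the feature set, window length, and thresholds are illustrative stand-ins, not the authors' tuned design.

```python
# Sketch of the two-stage idea: (1) forecast next-period mean rating with an
# LSTM over review-history features, (2) label "out of favor" if the rating
# trend drifts down. Feature names and the slope/level cutoffs are
# illustrative assumptions, not the paper's hyperparameters.
import numpy as np
import torch
import torch.nn as nn

class RatingLSTM(nn.Module):
    def __init__(self, n_features: int, hidden: int = 32):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)   # next-period mean star rating

    def forward(self, x):                  # x: (batch, time, n_features)
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :]).squeeze(-1)

def label_out_of_favor(rating_path: np.ndarray,
                       slope_cut: float = -0.05,
                       level_cut: float = 4.0) -> bool:
    """Stage 2: simple trend rule on the (forecast) rating path -- a
    stand-in for the paper's trained binary classifier."""
    t = np.arange(len(rating_path))
    slope = np.polyfit(t, rating_path, 1)[0]
    return slope < slope_cut and rating_path[-1] < level_cut

# Toy usage: 24 months of (mean rating, review sentiment, review volume).
model = RatingLSTM(n_features=3)
history = torch.randn(1, 24, 3)            # placeholder for real features
next_rating = model(history)               # would be trained with MSE loss
print(label_out_of_favor(np.linspace(4.4, 3.6, 12)))  # True: a decaying "dog"
```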

Timely Quality Problem Resolution in Peer-Production Systems: The Impact of Bots, Policy Citations, and Contributor Experience

By Vitali Mindel, Aleksi Aaltonen, Arun Rai, Lars Mathiassen, and Wael Jabr📧

In Information Systems Research, 2024, 0(0). https://doi.org/10.1287/isre.2020.0485

Although online peer-production systems have proven to be effective in producing high-quality content, their open call for participation makes them susceptible to ongoing quality problems. A key concern is that the problems should be addressed quickly to prevent low-quality content from remaining in place for extended periods. We examine the impacts of two control mechanisms, bots and policy citations, and the number of contributors, with and without prior experience in editing an article, on the cleanup time of 4,473 quality problem events in Wikipedia. We define cleanup time as the time it takes to resolve a quality problem once it has been detected in an article. Using an accelerated failure time model, we find that the number of bots editing an article during a quality problem event has no effect on cleanup time; that citing policies to justify edits during the event is associated with a longer cleanup time; and that more contributors, with or without prior experience in editing the article, are associated with a shorter cleanup time. We also find important interactions between each of the two control mechanisms and the number of different types of contributors. There is a marginal increase in cleanup time that is larger when an increase in the number of contributors is accompanied by fewer bots editing the article during a quality problem event. This interaction effect is more pronounced when increasing the number of contributors without prior experience in editing the article. Further, there is a marginal decrease in cleanup time that is larger when an increase in the number of contributors, with or without prior experience in editing the article, is accompanied by fewer policy citations. Taken together, our results show that the use of bots and policy citations as control mechanisms must be considered in conjunction with the number of contributors with and without prior experience in editing an article. Accordingly, the number of contributors and their experience alone may not explain important outcomes in peer production; it is also important to find an appropriate mix of different control mechanisms and types of contributors to address quality problems quickly.

Keywords: Bot; Contributor experience; Control mechanism; Linus’s law; Peer production; Policy; Quality control
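
To make the modeling setup concrete, the sketch below fits an accelerated failure time regression of cleanup time on the study's covariates. It assumes the lifelines library; the column names, toy data, penalizer, and Weibull choice are illustrative, not the paper's exact specification.

```python
# Minimal AFT sketch: cleanup time regressed on bots, policy citations, and
# contributor counts, with right-censoring for unresolved events. Data and
# covariate names are hypothetical.
import pandas as pd
from lifelines import WeibullAFTFitter

events = pd.DataFrame({
    "cleanup_days":       [3.0, 40.0, 7.5, 120.0, 1.5, 15.0, 60.0, 9.0],
    "resolved":           [1, 1, 1, 0, 1, 1, 1, 1],  # 0 = still open (censored)
    "n_bots":             [2, 0, 1, 0, 3, 1, 0, 2],
    "n_policy_citations": [0, 4, 1, 6, 0, 2, 5, 1],
    "n_experienced":      [5, 1, 3, 0, 6, 2, 1, 4],
    "n_inexperienced":    [2, 3, 1, 4, 1, 2, 3, 1],
})

aft = WeibullAFTFitter(penalizer=0.5)     # penalizer stabilizes the toy fit
aft.fit(events, duration_col="cleanup_days", event_col="resolved")
aft.print_summary()   # coefficients > 0 stretch cleanup time, < 0 shorten it
```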

Model-Based Co-Clustering in Customer Targeting Utilizing Large-Scale Online Product Rating Networks

By Qian Chen📧, Amal Agarwal, Duncan K.H. Fong, Wayne S. DeSarbo, and Lingzhou Xue

In Journal of Business & Economic Statistics, 2024. https://doi.org/10.1080/07350015.2024.2395423

Given the widely available online customer ratings on products, the individual-level rating prediction and clustering of customers and products are increasingly important for sellers to create targeting strategies for expanding the customer base and improving product ratings. However, the massive missing data problem is a significant challenge for modeling online product ratings. To address this issue, we propose a new co-clustering methodology based on a bipartite network modeling of large-scale ordinal product ratings. Our method extends existing co-clustering methods by incorporating covariates and ordinal ratings in the model-based co-clustering of a weighted bipartite network. We devise an efficient variational EM algorithm for model estimation. A simulation study demonstrates that our methodology is scalable for modeling large datasets and provides accurate estimation and clustering results. We further show that our model can successfully identify different groups of customers and products with meaningful interpretations and achieve promising predictive performance in a real application for customer targeting.

Keywords: Co-clustering; Bipartite network; Online ratings; Customer targeting; Variational EM
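
The authors' estimator is a covariate-assisted variational EM for ordinal ratings on a weighted bipartite network; as a lightweight stand-in that illustrates the co-clustering idea itself, the sketch below applies scikit-learn's spectral co-clustering to a sparse customer × product rating matrix with planted segments.

```python
# Co-clustering illustration (NOT the paper's variational EM): spectral
# co-clustering of a toy rating matrix with two planted customer/product
# co-clusters and ~70% missing ratings stored as zeros.
import numpy as np
from sklearn.cluster import SpectralCoclustering

rng = np.random.default_rng(0)
R = rng.integers(1, 3, size=(60, 40)).astype(float)    # low baseline ratings
R[:30, :20] = rng.integers(4, 6, size=(30, 20))        # segment 1 loves group 1
R[30:, 20:] = rng.integers(4, 6, size=(30, 20))        # segment 2 loves group 2
R[rng.random(R.shape) < 0.7] = 0                       # heavy missingness

model = SpectralCoclustering(n_clusters=2, random_state=0)
model.fit(R)
print(model.row_labels_[:10])      # customer segment assignments
print(model.column_labels_[:10])   # product group assignments
```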

Determinants of Parcel Locker Adoption for Last-Mile Deliveries in Urban and Suburban Areas

By Trilce Encarnación and Johanna Amaya📧

In Transportation Journal, 2024. https://doi.org/10.1002/tjo3.12031

This study investigates the factors influencing consumers’ adoption of parcel lockers for last-mile deliveries. Attitudinal data on parcel locker adoption were collected from individuals in the United States, and structural equation models were estimated to assess consumers’ intention to use the lockers as a delivery method. The study is grounded in the Technology Acceptance Model, evaluating perceived ease of use, usefulness, risks, and location preferences. Consumer attitudes indicate that parcel lockers are perceived as easy to use and useful, offering efficient and fast delivery times. Moreover, consumers generally regard these facilities as safe. However, perceived risks include concerns about privacy and the potential loss of packages. The results show that urban residents are less likely to use parcel lockers than suburban residents. Additionally, consumers show a preference for parcel lockers located near their workplaces. The modeling results provide critical implications for service providers, highlighting the features that users value when considering this delivery alternative, thereby aiding in evaluating potential parcel locker implementations.

Keywords: Delivery lockers; Freight transportation; Last-mile deliveries; Parcel lockers
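
A TAM-style structural equation model of this kind can be sketched in a few lines. The example below uses the semopy library with hypothetical survey items and simulated Likert-style responses; the study's actual instrument, constructs, and paths differ.

```python
# SEM sketch: intention to use parcel lockers driven by perceived ease of
# use, usefulness, and risk. Indicator names and the data generator are
# illustrative assumptions.
import numpy as np
import pandas as pd
import semopy

rng = np.random.default_rng(1)
n = 400
lat = {k: rng.normal(size=n) for k in ("ease", "use", "risk")}
lat["int"] = (0.5 * lat["ease"] + 0.6 * lat["use"]
              - 0.4 * lat["risk"] + rng.normal(scale=0.6, size=n))
cols = {}
for factor, n_items in [("ease", 3), ("use", 3), ("risk", 2), ("int", 2)]:
    for i in range(1, n_items + 1):   # noisy indicators of each latent factor
        cols[f"{factor}{i}"] = lat[factor] + rng.normal(scale=0.5, size=n)
df = pd.DataFrame(cols)

DESC = """
ease       =~ ease1 + ease2 + ease3
usefulness =~ use1 + use2 + use3
risk       =~ risk1 + risk2
intention  =~ int1 + int2
intention ~ ease + usefulness + risk
"""
model = semopy.Model(DESC)
model.fit(df)
print(model.inspect())   # factor loadings, path estimates, p-values
```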

Case – Three Mountain Communications: Fairness Considerations for Workplace Task Allocation

By Wei Wu, Rashmi Sharma📧, and Saurabh Bansal📧

In INFORMS Transactions on Education, 2024, 0(0). https://doi.org/10.1287/ited.2023.0044ca

The case study has two primary objectives: incorporating fairness considerations into managerial decision making and introducing task allocation using an optimization model. The case focuses on a call center that faces the issue of daily task allocation: the current self-selection-based allocation protocol has created friction among employees. To improve employee satisfaction and boost team morale, the call center needs to develop an allocation model that employees perceive as fair. The case tends to elicit enthusiastic participation from students, especially on the theme of workplace fairness and the role of gender and individual constraints in employees’ ability to excel in the workplace. Overall, students found the case challenging and felt it gave them skills to combine (i) optimization modeling with (ii) the qualitative considerations of workplace fairness they had studied in management and leadership courses.

Keywords: Task allocation; Fairness; Integer programs
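
As a flavor of the optimization component, the sketch below formulates a small task-allocation integer program in PuLP with a min-max fairness objective, so that no employee's daily workload exceeds the minimized cap. The agents, task durations, and fairness criterion are illustrative assumptions about the case, not its exact formulation.

```python
# Min-max fair task allocation sketch (hypothetical agents and durations).
import pulp

agents = ["Asha", "Ben", "Carla"]
tasks = {"billing": 3.0, "escalations": 5.0, "renewals": 2.0,
         "onboarding": 4.0, "surveys": 1.0}   # hours per task

prob = pulp.LpProblem("fair_task_allocation", pulp.LpMinimize)
x = pulp.LpVariable.dicts("assign", (agents, list(tasks)), cat="Binary")
load_cap = pulp.LpVariable("max_load", lowBound=0)

prob += load_cap                                  # objective: min-max workload
for t in tasks:                                   # each task done exactly once
    prob += pulp.lpSum(x[a][t] for a in agents) == 1
for a in agents:                                  # every agent's load <= cap
    prob += pulp.lpSum(tasks[t] * x[a][t] for t in tasks) <= load_cap

prob.solve(pulp.PULP_CBC_CMD(msg=False))
for a in agents:
    print(a, [t for t in tasks if x[a][t].value() == 1])
```

A min-max cap is one common way to encode fairness in assignment models; richer formulations can add employee preferences, rotation, or individual constraints of the kind the case discusses.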

Bioeconomy Bright Spots, Challenges, and Key Factors Going Forward: Perceptions of Bioeconomy Stakeholders

By Evelyn Thomchick📧, Michael Jacobson, and Kusumal Ruamsook📧

In EFB Bioeconomy Journal, Volume 4, November 2024, 100068 (available online June 8, 2024). https://doi.org/10.1016/j.bioeco.2024.100068

The bioeconomy is a complex system involving a plethora of bioproducts and several groups of stakeholders—such as government institutions, industry, environmental organizations, and civil society—across the bioeconomy supply chain. Successful bioeconomy activities hinge on collective effort and coordinated development across all involved. This study seeks to understand different stakeholder groups’ perceptions, expectations, needs, issues, goals, and constraints related to the development of the U.S. bioeconomy, with biochar as the bioproduct of focus. Focus groups were held with a representative sample of stakeholders involved in the bioeconomy. Results show encouraging trends in growing interest, awareness, and carbon market development, while regulatory structure, production capacity and commercialization, and the development of industry standards remain the key challenges. Going forward, large-scale real-world research, commercial viability, and education are perceived to be imperative.

Keywords: Bioeconomy; Biochar; Stakeholders; Focus group study; Business development; The United States

Introduction to the Special Issue for INFORMS Simulation Society (I-Sim) Workshop, 2021

By Russell R. Barton📧, Marvin Nakayama, Uday V. Shanbhag, and Eunhye Song

In ACM Transactions on Modeling and Computer Simulation, 2024, 34(2): 1–3. https://doi.org/10.1145/3655711

In summer 2021, the INFORMS Simulation Society Workshop was held virtually and was jointly hosted by the Department of Industrial and Manufacturing Engineering (IME) and the Supply Chain and Information Systems (SCIS) Department at the Pennsylvania State University. The workshop was chaired by Dr. Uday V. Shanbhag (IME) and co-chaired by Dr. Russell R. Barton (SCIS), who organized the workshop together with the following committee members: Drs. Saurabh Bansal (Pennsylvania State University), Güzin Bayraksan (Ohio State University), Henry Lam (Columbia University), Eunhye Song (Pennsylvania State University), Farzad Yousefian (Rutgers University), and Enlu Zhou (Georgia Tech). Held June 21–23, 2021, the workshop centered on the theme of “From Data to Decision-making: Contending with Uncertainty and Non-stationarity in Simulation Theory”. This special issue was handled by an editorial team comprising Drs. Russell R. Barton, Marvin K. Nakayama, Uday V. Shanbhag, and Eunhye Song. The articles in this issue span a diverse and timely set of topics, including the application of neural networks to simulation, novel confidence interval procedures, the analysis of contextual ranking and selection, constrained Bayesian optimization, misspecified simulation optimization frameworks, and the role of stochastic approximation (SA) in analyzing the efficiency of Nash equilibria in uncertain environments.

Keywords: Computer systems organization; Network; Network properties; Network reliability

Can Broadband Help Curb Pollution? Implications for Marginalized Communities

By Wael Jabr📧, and Suvrat Dhanorkar📧

In International Conference on Information Systems (ICIS) 2023 Proceedings, 2023, 13. https://aisel.aisnet.org/icis2023/soc_impactIS/soc_impactIS/13

When stakeholders are unaware of information that is relevant to them, and when this information is costly to obtain (due to costs associated with information awareness, acquisition, and integration), those stakeholders suffer from information asymmetry vis-à-vis the producer and owner of this information. To overcome these conditions, mandated disclosure has been used in a number of settings, requiring the owner of information, such as a chemical manufacturer, to disclose the types and levels of toxic releases it produces. However, without easy information acquisition and subsequent dissemination, the value of such disclosure remains limited. In this paper, we study broadband penetration in the United States as an enabler that lets the public become better informed about manufacturers’ pollution, and we examine its implications for curbing toxic releases. Because some marginalized communities have been documented to suffer more from such toxic releases, we also study the disproportionate impact of broadband on Black communities.

A Shrinkage Approach to Improve Direct Bootstrap Resampling under Input Uncertainty

By Eunhye Song, Henry Lam, and Russell R. Barton📧

In INFORMS Journal on Computing, 2024, ahead of print, published online: February 2. https://doi.org/10.1287/ijoc.2022.0044

Discrete-event simulation models generate random variates from input distributions and compute outputs according to the simulation logic. The input distributions are typically fitted to finite real-world data and thus are subject to estimation errors that can propagate to the simulation outputs: an issue commonly known as input uncertainty (IU). This paper investigates quantifying IU using the output confidence intervals (CIs) computed from bootstrap quantile estimators. The standard direct bootstrap method has overcoverage due to convolution of the simulation error and IU; however, the brute-force way of washing away the former is computationally demanding. We present two new bootstrap methods that enhance direct resampling in both statistical and computational efficiency, using shrinkage strategies to down-scale the variability encapsulated in the CIs. Our asymptotic analysis shows how both approaches produce tight CIs accounting for IU under limited input data and simulation effort, along with the simulation sample-size requirements relative to the input data size. We demonstrate the performance of the shrinkage strategies with several numerical experiments and investigate the conditions under which each method performs well. We also show the advantages of nonparametric approaches over the parametric bootstrap when the distribution family is misspecified, and over metamodel approaches when the dimension of the distribution parameters is high.

Keywords: Bootstrap resampling; Input uncertainty; Nonparametric; Simulation; Shrinkage
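
The gist of the approach can be illustrated with a toy example: direct bootstrap resampling of input data feeding a single-server queue simulation, followed by a generic shrinkage of the bootstrap outputs toward their grand mean to deflate the simulation noise that inflates the interval. The queueing model, shrinkage factor, and sample sizes below are illustrative; the paper's shrinkage estimators are more refined.

```python
# Direct bootstrap under input uncertainty, plus a generic shrinkage step.
import numpy as np

rng = np.random.default_rng(42)

def simulate_mean_wait(service_times, n_customers=500, arrival_rate=0.9):
    """Lindley recursion for a single-server queue; returns mean wait."""
    inter = rng.exponential(1 / arrival_rate, n_customers)
    svc = rng.choice(service_times, n_customers, replace=True)
    w, total = 0.0, 0.0
    for a, s in zip(inter, svc):
        w = max(0.0, w + s - a)          # next customer's waiting time
        total += w
    return total / n_customers

observed_service = rng.exponential(1.0, 80)   # finite real-world input data

B = 200
boot = np.array([
    simulate_mean_wait(rng.choice(observed_service, 80, replace=True))
    for _ in range(B)                         # resample inputs, re-simulate
])

shrink = 0.7                                  # illustrative down-scaling factor
boot_shrunk = boot.mean() + shrink * (boot - boot.mean())

for name, b in [("direct", boot), ("shrunk", boot_shrunk)]:
    lo, hi = np.percentile(b, [2.5, 97.5])
    print(f"{name:7s} 95% CI: [{lo:.3f}, {hi:.3f}]")
```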

Bootstrap Confidence Intervals for Simulation Output Parameters

By Russell R. Barton📧, and Luke A. Rhodes-Leader

In Proceedings of the 2023 Winter Simulation Conference (WSC), 2024, 421–432, published online: January 31. https://doi.org/10.1109/WSC60868.2023.10407467

Bootstrapping has been used for thirty years to characterize the impact of input model uncertainty on discrete-event simulation output. The distribution of simulation output statistics can be very non-normal, especially in simulations of heavily loaded queueing systems and of systems operating at a near-optimal value of the output measure. This paper presents issues facing simulationists in using bootstrapping to provide confidence intervals for parameters related to the distribution of simulation output statistics, and identifies appropriate alternatives to the basic and percentile bootstrap methods. Both input uncertainty and ordinary output analysis settings are included.

Keywords: Uncertainty; Load modeling
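
The distinction among bootstrap CI variants is easy to demonstrate with SciPy: the snippet below compares percentile, basic, and BCa intervals for the mean of a skewed sample standing in for non-normal simulation outputs. The data and statistic are illustrative; the paper's guidance concerns which variants behave well in the simulation settings it studies.

```python
# Comparing bootstrap CI variants on a skewed statistic.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
outputs = rng.lognormal(mean=0.0, sigma=1.0, size=50)  # skewed, non-normal

for method in ("percentile", "basic", "BCa"):
    res = stats.bootstrap((outputs,), np.mean, confidence_level=0.95,
                          method=method, random_state=rng)
    ci = res.confidence_interval
    print(f"{method:10s} [{ci.low:.3f}, {ci.high:.3f}]")
```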