Person: Rana, Arpit
Name
Arpit Rana
Job Title
Faculty
Email Address
Telephone
079-68261687
Specialization
Applied Machine Learning, Recommendation Systems, Multimodality, and their applications in Digital Innovation and Transformation
Biography
Dr. Arpit Rana received his Ph.D. from University College Cork, Ireland, in 2020. Before joining DA-IICT, he worked as a Postdoctoral Researcher in the Department of Industrial Engineering, University of Toronto (U of T), Canada.
Publications (2 results)
User Experience and the Role of Personalization in Critiquing-Based Conversational Recommendation (ACM, 08-10-2024)
Sanner, Scott; Bouadjenek, Mohamed Reda; Carlantonio, Ronald Di; Farmaner, Gary; Rana, Arpit; DA-IICT, Gandhinagar
Critiquing, where users propose directional preferences over attribute values, has historically been a highly popular method for conversational recommendation. However, with the growing size of catalogs and item attributes, it becomes increasingly difficult and time-consuming to express all of one's constraints and preferences in the form of critiques. Critiquing is even more confusing in the case of critiquing failures: when the system returns no matching items in response to user critiques. To this end, it seems important to combine a critiquing-based conversational system with a personalized recommendation component that captures implicit user preferences and thus reduces the user's burden of providing explicit critiques. To examine the impact of such personalization on critiquing, this article reports on a user study with 228 participants to understand user critiquing behavior for two different recommendation algorithms: (i) non-personalized, which recommends any item consistent with the user's critiques; and (ii) personalized, which leverages a user's past preferences on top of their critiques. In the study, we ask users to find the restaurant they think is most suitable for a given scenario by critiquing the recommended restaurants at each round of the conversation on the dimensions of price, cuisine, category, and distance. We observe that the non-personalized recommender leads to more critiquing interactions, more severe critiquing failures, more time overall for users to express their preferences, and longer dialogs to find the item of interest. We also observe that non-personalized users were less satisfied with the system's performance. They found its recommendations less relevant, more unexpected, and about equally diverse and surprising compared with those of the personalized recommender. The results of our user study highlight an imperative for further research on integrating the two complementary components, personalization and critiquing, to achieve the best overall user experience in future critiquing-based conversational recommender systems.

Extended recommendation-by-explanation (Springer, 01-04-2022)
D'Addio, Rafael M; Manzato, Marcelo G; Bridge, Derek; Rana, Arpit; DA-IICT, Gandhinagar
Studies have shown that there is an intimate connection between the process of computing recommendations and the process of generating corresponding explanations, and that this close relationship may lead to better recommendations for the user. However, to date, most recommendation explanations are post hoc rationalizations; in other words, computing recommendations and generating corresponding explanations are two separate and sequential processes. There is, however, recent work that unifies recommendation and explanation, using an approach called Recommendation-by-Explanation (r-by-e). In r-by-e, the system constructs an explanation, a chain of items from the user's profile, for each candidate item; it then recommends the candidate items that have the best explanations. However, the way r-by-e constructs and selects chains is relatively simple, and it considers only one way of representing an item's elements: in terms of their features. In this article, we extend r-by-e. We present a number of different ways of generating chains from a user's profile. These methods differ mainly in their item representations (i.e., whether item elements are represented as features or as neighbours) and in the weighting schemes they use to generate the chains. We also explore r-by-e's approach to chain selection, allowing the system to choose whether to cover more aspects of the candidate item or of the user profile. We compare the extended versions with corresponding classic content-based methods on two datasets that differ mainly in their item feature sets. We find that the versions of r-by-e that make explicit use of item features have several advantages over the ones that use neighbours, and the empirical comparison shows that one of these versions, the one that assigns weights to item features based on their importance to the item, is also the best in terms of recommendation accuracy, diversity, and surprise, while still generating chains whose lengths are manageable enough to be interpretable by users. It also obtains the best survey responses for its recommendations and corresponding explanations in a trial with real users.
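The r-by-e idea described in the second abstract, building an explanation chain of profile items for each candidate and recommending the candidates with the best chains, can be sketched minimally. This is an illustrative toy, not the paper's actual algorithm: the greedy feature-coverage chain construction, the coverage-fraction score, and all item and feature names below are assumptions made for the example.

```python
# Toy sketch of Recommendation-by-Explanation (r-by-e): for each candidate item,
# greedily assemble a chain of profile items that cover the candidate's features,
# score the chain by how much of the candidate it explains, and recommend the
# candidates with the best-scoring chains. Chain construction and scoring here
# are simplified assumptions, not the method from the paper.

def chain_for(candidate_features, profile):
    """Greedily pick profile items whose features cover the candidate's features."""
    uncovered = set(candidate_features)
    chain = []
    for item, feats in profile.items():
        overlap = uncovered & set(feats)
        if overlap:
            chain.append(item)      # this profile item explains part of the candidate
            uncovered -= overlap
        if not uncovered:
            break
    # Score: fraction of the candidate's features the chain explains.
    score = 1 - len(uncovered) / len(candidate_features)
    return chain, score

def recommend(candidates, profile, k=1):
    """Rank candidates by the quality (coverage score) of their explanation chains."""
    scored = [(cand, *chain_for(feats, profile)) for cand, feats in candidates.items()]
    scored.sort(key=lambda t: t[2], reverse=True)
    return scored[:k]

# Hypothetical restaurant data: profile items the user liked, and two candidates.
profile = {"bistro_a": ["french", "cheap"], "cafe_b": ["cozy", "cheap"]}
candidates = {"rest_x": ["french", "cheap", "cozy"], "rest_y": ["sushi", "pricey"]}
top = recommend(candidates, profile)
```

Here `rest_x` wins because the chain `["bistro_a", "cafe_b"]` covers all three of its features, while nothing in the profile explains `rest_y`; the chain itself doubles as the recommendation's explanation, which is the unification the abstract describes.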
