Adaptive Purchase Tasks in the Operant Demand Framework
Gilroy, Shawn P., Rzeszutek, Mark J., Koffarnus, Mikhail N., Reed, Derek D., and Hursh, Steven R. (2024)
Abstract:
Various avenues exist for quantifying the effects of reinforcers on behavior. Numerous nonlinear models derived from the framework of Hursh and Silberberg (2008) are often applied to elucidate key metrics in the operant demand framework (e.g., Q0, PMAX), each approach having its own strengths and tradeoffs. This work introduces and demonstrates an adaptive task capable of elucidating key features of operant demand without relying on nonlinear regression (i.e., a targeted form of empirical PMAX). An adaptive algorithm based on reinforcement learning is used to systematically guide questioning in the search for participant-level estimates related to peak work (e.g., PMAX), and this algorithm was evaluated across four iteration lengths (i.e., five, ten, fifteen, and twenty sequentially updated questions). Equivalence testing with simulated agent responses revealed that tasks with five or more sequentially updated questions recovered PMAX values statistically equivalent to seeded PMAX values, providing evidence that quantitative modeling (i.e., nonlinear regression) may not be necessary to reveal valuable features of reinforcer consumption and of how consumption scales as a function of price. Discussions are presented regarding extensions of contemporary hypothetical purchase tasks and strategies for extracting and comparing critical aspects of consumer demand.
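The adaptive approach summarized above can be illustrated with a simulated agent. The sketch below is not the authors' reinforcement-learning procedure; it substitutes a simple bracket-narrowing (ternary) search, with a hypothetical demand curve in the exponential form of Hursh and Silberberg (2008) and illustrative parameter values, to show how a small number of sequentially updated price probes can recover an empirical PMAX without nonlinear regression.

```python
import math

# Hypothetical simulated agent whose consumption follows the exponential
# demand equation of Hursh and Silberberg (2008):
#   log10 Q(p) = log10 Q0 + k * (exp(-alpha * Q0 * p) - 1)
# Parameter values below are illustrative, not taken from the article.
Q0, alpha, k = 10.0, 0.01, 2.0

def consumption(price):
    return Q0 * 10 ** (k * (math.exp(-alpha * Q0 * price) - 1))

def expenditure(price):
    # Expenditure (work output) is price times consumption; PMAX is the
    # price at which expenditure peaks.
    return price * consumption(price)

# "Seeded" PMAX located by brute force on a fine price grid. The grid is
# bounded because this demand form has a nonzero consumption floor, so
# expenditure rises again at extreme prices; [0.01, 20] brackets the
# behaviorally meaningful local peak for these parameters.
grid = [i * 0.01 for i in range(1, 2001)]
seeded_pmax = max(grid, key=expenditure)

def adaptive_pmax(n_questions, lo=0.01, hi=20.0):
    """Each iteration probes two candidate prices and narrows the
    bracket toward peak expenditure (a ternary-search stand-in for the
    article's reinforcement-learning-guided questioning)."""
    for _ in range(n_questions):
        m1 = lo + (hi - lo) / 3
        m2 = hi - (hi - lo) / 3
        if expenditure(m1) < expenditure(m2):
            lo = m1  # peak lies above m1; discard the lower third
        else:
            hi = m2  # peak lies below m2; discard the upper third
    return (lo + hi) / 2

for n in (5, 10, 15, 20):
    print(n, "iterations ->", round(adaptive_pmax(n), 3),
          "| seeded PMAX:", round(seeded_pmax, 3))
```

Because expenditure is unimodal over the bracketed price range, each additional probe pair shrinks the bracket by a constant factor, so even short tasks land near the seeded PMAX, mirroring the abstract's finding that five or more updated questions sufficed for equivalence.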
Citation: Gilroy, Shawn P., Rzeszutek, Mark J., Koffarnus, Mikhail N., Reed, Derek D., and Hursh, Steven R. (2024). Adaptive Purchase Tasks in the Operant Demand Framework.