Research Articles
Received: May 31, 2022
Accepted: September 16, 2022
DOI: https://doi.org/10.5514/rmac.v48.i2.84462
Abstract: The present study examined the effects of different levels of instruction completeness on the behavioral persistence of humans engaged in a computer task. Five undergraduate students responded in a three-component multiple schedule of reinforcement during baseline. In two components, responses produced points according to a fixed-interval (FI) 5 s schedule of reinforcement. In the third component, extinction was programmed, and no instruction was available (No-Instruction Component, NI). The complete instruction "Press once every 5 seconds to earn points" appeared on the computer screen during one FI component (Complete-Instruction Component, CI), and the minimal instruction "Press to earn points" appeared on the computer screen during the other FI component (Minimal-Instruction Component, MI). The reinforcement rate was equal between the FI components. Increases in response force disrupted responding during the test relative to baseline. Overall, greater persistence occurred in the component correlated with the complete instruction (i.e., the CI component), suggesting that different levels of instruction completeness can differentially affect behavioral persistence when the reinforcement rate is held constant.
Keywords: instructions, within-subject design, behavioral momentum, response-effort, humans.
Resumen:
The effects of different levels of instruction completeness on behavioral persistence in humans were evaluated. Using a computerized task, five undergraduate students responded on a three-component multiple schedule of reinforcement during baseline. In two components, with equated reinforcement rates, participants' responses produced points according to a fixed-interval 5 s (FI 5 s) schedule, whereas in a third component responses had no programmed consequences and no instruction was provided (No-Instruction Component, NI). During the components correlated with point delivery, two types of instructions (the Minimal- and Complete-Instruction Components, respectively) were presented on the computer monitor. The complete instruction told the participant: "Press once every 5 seconds to earn points," whereas the minimal instruction read: "Press to earn points." Increases in response force disrupted responding during the test relative to baseline. Overall, greater persistence of responding was observed during the complete-instruction component. Holding reinforcement rates constant while varying the level of instruction completeness thus appears to affect the persistence of responding differentially.
Palabras clave (keywords): instructions, within-subject design, behavioral momentum, response effort, humans.
Introduction
The concept of response strength has been extensively discussed in the behavior-analytic literature. For instance, Skinner (1938) argued that response strength was directly proportional to response rate. A few decades later, Nevin (1974, 1979) stated that the strength of operants might be better understood by examining the tendency for an operant to continue to occur following the addition of disruptive events, also referred to as persistence and resistance to change. Recently, resistance to change has been investigated under the concept of Behavioral Momentum (Luiz et al., 2019; Nevin, 2015; Nevin et al., 1983). Under a Behavioral Momentum framework, the resistance to change of several species (e.g., pigeons, rats, fish, and humans) is often studied as a function of the reinforcement rate or magnitude.
Most studies on resistance to change have both their experimental designs and results linked to the Behavioral Momentum framework (Nevin, 2015; Podlesnik & DeLeon, 2015). In this framework, persistence is often examined under multiple variable-interval (VI) schedules of reinforcement and often measured as proportional changes from baseline (BL) in the face of disruptive events (Nevin & Wacker, 2013; Podlesnik & DeLeon, 2015) such as extinction (EXT; e.g., Cohen et al., 1993; Craig et al., 2019), prefeeding (e.g., Aló et al., 2015; Cohen et al., 1993), and concurrent tasks (e.g., Cohen, 1996; Mace et al., 1990; Podlesnik & Chase, 2006). When the VI components of the multiple schedule arrange different reinforcement rates or magnitudes, greater persistence is often observed in the richer VI component (Igaki & Sakagami, 2004; Mace et al., 1990, Experiment 1; Nevin, 1974, Experiments 1 to 3). For instance, Nevin (1974, Experiment 1) exposed pigeons to a multiple VI 60 s VI 180 s schedule of reinforcement during BL. Response-independent food was used as a disruptive event during the test. Greater persistence occurred in the component correlated with the higher reinforcement rate (i.e., VI 60 s). Similar results were obtained by Igaki and Sakagami (2004) with goldfish as subjects, as well as by Cohen (1996) and Mace et al. (1990) with humans with typical and atypical development, respectively.
When the VI schedules have the same reinforcement rate and magnitude, other factors seem to affect persistence, such as the immediacy of reinforcement (Bell, 1999; Grace et al., 1998a), relative differences in response-rate requirements (Kuroda et al., 2018; Lattal, 1989; Nevin, 1974, Experiment 5), and response force (Luiz et al., 2020, 2021). For instance, Luiz and colleagues (2020, 2021) examined behavioral persistence as a function of different response force requirements. In both studies, college students were exposed to a multiple VI VI schedule with the same reinforcement rate and magnitude, and two levels of response force were required to respond: 10 N in the low-force component and 50 N in the high-force component. In the first study, extinction alone or extinction plus anagrams served as disruptive events during test. In the second study, only extinction was used during test. With few exceptions, both studies showed that greater persistence occurred in the component correlated with the lower response-force requirement (i.e., 10 N).
In sum, multiple aspects can affect the extent to which an organism will persist in a specific course of action. When there are differences in reinforcement rate or magnitude, greater persistence is often associated with the richer condition. When there are no differences in reinforcer parameters, other contingencies may take place and affect behavioral persistence differentially. Data from behavioral persistence studies designed according to the Behavioral Momentum framework are diverse but do not exhaust all contingencies that organisms may be exposed to, especially when it comes to humans. For instance, little attention has been given to the role of verbal control on behavioral persistence (see Trump et al., 2021). To our knowledge, Podlesnik and Chase (2006) is the only study that investigated the influence of verbal control on resistance to change through a Behavioral Momentum framework (for a review of how instructions can affect behavior sensitivity under other frameworks, see Kissi et al., 2020).
In Podlesnik and Chase (2006), two groups of participants responded on a VI 30 s schedule of reinforcement during BL. One group received minimal instructions, and the other group received complete instructions regarding the contingency that was in effect. A video presentation disrupted responding during the test. Proportional changes in response rates during the test relative to BL conditions measured behavioral persistence. Their results showed that response rates in the complete-instruction group were more persistent during the test than response rates in the minimal-instruction group.
Several methodological modifications of the Podlesnik and Chase (2006) study may further contribute to a better understanding of the role of instructional control on behavioral persistence. For example, Cohen (1998) argued for the importance of investigating behavioral persistence within the same session or across sessions separated by a relatively short time. Nevin (1979) stated that using multiple schedules of reinforcement is a convenient way to compare two performances under identical conditions and then examine persistence when contingencies are changed. In contrast, Podlesnik and Chase (2006) adopted a between-subjects design and used a single schedule of reinforcement. Podlesnik and Chase's method could be refined by using a within-subjects design, thereby increasing the findings' reliability and generality (Perone & Hursh, 2013; Sidman, 1960) and evaluating the effects of disruptive operations on multiple contingencies in the same experimental session (Craig et al., 2019).
The present study aimed at bridging this gap by examining the effects of different levels of instruction completeness on behavioral persistence, expanding the findings of Podlesnik and Chase (2006). Participants responded on a multiple schedule of reinforcement during a BL, followed by a test condition in a within-subjects design. Instructions were manipulated in their completeness, and increases in response force were used as the disruptive event.
Method
Participants
Undergraduate students (four women, one man), 19 to 24 years of age, participated in the study. The invitation informed participants that they would take part in a study about human behavior and spend approximately 24 min in each laboratory visit, 3 to 5 times per week. Participants were debriefed about the experiment's goals at the end of the last experimental session. The Local Committee for Ethical Research approved all procedures that were performed with the participants (protocol no. 1.518.399).
Apparatus
Sessions were conducted in a 3 m² room that contained a desk, two chairs, a desktop computer, a 17-inch color monitor, a keyboard, and a mouse. White noise was reproduced through headphones connected to the computer to mask extraneous sounds. ProgRef v4 software (Becker, 2011) executed the experimental task and recorded the participants’ responses. Stability Check software (Costa & Cançado, 2012) was used to calculate response-rate stability. A spring-loaded button (Figure 1), consisting of a 13 cm² nylon box, was placed on the table in front of the participant, serving as the operandum in the test condition (Lacerda et al., 2022; Luiz et al., 2020, 2021). Spring force was measured using Hooke’s law (see Aranha et al., 2016).
Figure 1. AutoCAD drawings

Note. Side view (A), top view (B), and AutoCAD drawing of the inside of the spring button (C).
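The force required to depress the spring-loaded button was estimated from Hooke's law, as noted above. A minimal sketch of that calculation follows; the function name and the spring constant and compression values are illustrative assumptions, not the apparatus's actual specifications (see Aranha et al., 2016, for treatment of non-linear springs).

```python
# Hypothetical sketch of a Hooke's-law force estimate: F = k * x,
# where k is the spring constant (N/m) and x the compression (m).
# Values below are illustrative only.

def spring_force(k_n_per_m: float, compression_m: float) -> float:
    """Return the restoring force (N) of an ideal linear spring."""
    return k_n_per_m * compression_m

# e.g., a 5600 N/m spring compressed 10 mm requires about 56 N
print(spring_force(5600.0, 0.010))  # 56.0
```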
Procedure
Before the first session, the participants read and signed an informed consent form that described the number and duration of sessions and that every 100 points gained would be exchanged for R$0.05 (Brazilian Real; approximately US$0.01) at the end of each session. The participants were then asked to deposit personal belongings (e.g., watches and cell phones) outside the experimental room and read the following instructions (translated from Portuguese):
“This research is not about intelligence or personality. Your goal is to earn as many points as you can using only the mouse. You should click the button at the bottom center of the screen to start the session. An instruction describing how you should respond to gain as many points as possible will appear in the upper left corner. A smiling face will appear in the upper right corner of the screen when points are available. You should click on the consummatory response button above the smiling face to add points to the counter in the center of the screen and make the smiling face disappear. Points will not be added if you do not click on the consummatory response button. The experimenter is not allowed to give you any additional information. If you have any questions, please read this text again and continue the experiment. Good job!”
Figure 2 shows the computer screen layout in each component during the experimental sessions, which consisted of a gray background with a 10.0 cm × 2.0 cm response button in the screen's lower center and a 4.5 cm × 0.5 cm consummatory response button in the upper right corner of the screen. The color of the response button changed depending on the component of the multiple schedule of reinforcement. A rectangle of the same color as the response button in the upper left corner of the screen presented the instructions.
Figure 2. Screen layout for each component

Note. Computer screen layout in each component during the experimental sessions, showing the (A) Minimal Instruction Component (MI), (B) No Instruction Component (NI), and (C) Complete Instruction Component (CI).
The instructions varied according to the component. Above the response button, an 8.0 cm × 2.9 cm point counter (white on a black background) displayed the number of points earned in each session. Responses were defined as pressing the left mouse button or the spring-loaded button during the BL and test conditions, respectively. Once a response met the contingency, an image of a smiling face appeared below the consummatory response button, signaling the availability of one reinforcer (100 points). The smiling face remained visible until the participant clicked the consummatory response button (during the BL) or pressed the ESC key on the keyboard (during the test) to add 100 points to the counter. At the end of each session, the screen displayed the total number of points gained and the message "Call the Experimenter." The participants were paid for their performance at the end of each session based on the value displayed on the counter.
Phase 1: BL. All participants were exposed to 23-min sessions consisting of a three-component multiple schedule of reinforcement separated by a 20-s inter-component interval (ICI). Each component was presented for 3-min periods. The entire screen was black during the ICI, while "WAIT!" was printed in red in the center of the screen. While waiting for the next component, the participants were asked to answer the following multiple-choice question that was written on paper (cf. Madden & Perone, 1999): "What instruction appeared on the previous screen?" The three response options were the following: (1) "Press once every 5 seconds to earn points," (2) "Press to earn points," and (3) "_______." This multiple-choice question was used to ensure the participants had read the instructions presented in each component. All of the participants answered this question correctly. Table 1 shows the instructions and schedules of reinforcement in each component during the BL and Test.
Table 1. Components, instructions, and schedules of reinforcement in each component during all phases.

Note. FI = Fixed Interval; EXT = Extinction.
Responses produced reinforcers according to a fixed-interval (FI) 5 s schedule of reinforcement during both the complete-instruction (CI) and minimal-instruction (MI) components. However, although the instruction in the CI component fully described the contingency, the instruction in the MI component did not specify when or how participants should respond. The no-instruction (NI) component displayed no instructions, and responses did not produce points. An NI component always separated the CI and MI components to minimize possible interactions between them. Responses occurring under the control of the FI 5 s schedule, like any other responses, produced no reinforcers in the NI component (i.e., extinction, EXT). Thus, it created a condition in which behavior under the control of one instruction level could be (hypothetically) extinguished before the participant made contact with the other instruction level. As shown in Table A1, the participants made no or few responses during the NI component, and only when they were exposed to the CI or MI components did the response rate return to high levels. The components were distributed in the following order: CI - NI - MI - NI - CI - NI - MI or MI - NI - CI - NI - MI - NI - CI. The first component was always MI for Participant 1 (P1), P3, and P5, and CI for P2 and P4.
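The FI 5 s contingency in the CI and MI components can be sketched as follows. This is an illustrative reconstruction under stated assumptions, not the ProgRef v4 implementation; it omits the separate consummatory response and simply returns the timestamps of responses that produced a reinforcer (the first response at least 5 s after the previous reinforcer).

```python
# Illustrative FI (fixed-interval) logic: the first response emitted
# after the interval has elapsed since the last reinforcer produces
# the next reinforcer; earlier responses have no programmed effect.

FI_SECONDS = 5.0

def fi_contingency(response_times, interval=FI_SECONDS):
    """response_times: response timestamps (s), ascending.
    Returns timestamps of responses that produced a reinforcer."""
    reinforcer_times = []
    interval_start = 0.0
    for t in response_times:
        if t - interval_start >= interval:
            reinforcer_times.append(t)
            interval_start = t  # the next interval starts at reinforcement
    return reinforcer_times

print(fi_contingency([1.0, 4.0, 5.5, 6.0, 11.0]))  # [5.5, 11.0]
```

Responses at 1.0 s and 4.0 s come too early; the responses at 5.5 s and 11.0 s each follow a full 5-s interval and are reinforced.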
The BL was run for a maximum of 10 sessions or until the difference between the mean response rates of the last two and the previous two sessions, divided by the mean of these four sessions, was ≤ 15% in both components (cf. Costa & Cançado, 2012; Cumming & Schoenfeld, 1960). Stability Check software was used to calculate differences in mean response rates. Table 2 shows the number of sessions of each participant in BL and Test sessions.
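The stability criterion described above can be sketched in a few lines. This is a minimal illustration of the rule as stated in the text, not the Stability Check software's actual code or API; the function and variable names are our own, and the example rates are invented.

```python
# Sketch of the BL stability criterion: the absolute difference between
# the mean response rate of the last two sessions and that of the two
# sessions before them, divided by the mean of all four sessions,
# must not exceed 15%.

def is_stable(rates, threshold=0.15):
    """rates: response rates (per min) of the last four sessions, oldest first."""
    first_pair_mean = sum(rates[:2]) / 2
    last_pair_mean = sum(rates[2:]) / 2
    overall_mean = sum(rates) / 4
    return abs(last_pair_mean - first_pair_mean) / overall_mean <= threshold

# |42 - 41| / 41.5 ≈ 0.024, well under 0.15, so responding is stable
print(is_stable([40.0, 42.0, 41.0, 43.0]))  # True
```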
Phase 2: Test. The Test was like the BL sessions, except that the spring-loaded button was substituted for the mouse in all components simultaneously, increasing the physical force required to respond. The mouse was ineffective, and the consummatory response consisted of pressing the ESC key on the keyboard. For all participants but P5, the spring-loaded button required two forces to respond: 56 N in the first two test sessions and 76 N in the second two. Participant P5 responded only with the 56 N force during all test sessions due to a problem with the 76 N spring. The Test lasted four sessions, except for P5, for whom it lasted three sessions.
Results
Table 2 shows the mean reinforcement rate (per min) of all BL and Test sessions during the CI and MI components and the number of sessions in each phase (Table A1 in the Appendix shows reinforcement and response rates in each session). Figure 3 shows mean response rates on a logarithmic scale and standard deviation during the last four BL sessions and all test sessions for each participant in both the CI component (gray bars) and MI component (black bars; Table A1 in the Appendix shows response rates in each session during all components).
Table 2. Mean reinforcement rate (per min) of all BL and Test sessions during the CI and MI components and number of sessions in each phase

Note. BL = Baseline; CI = Complete Instruction Component; MI = Minimal Instruction Component; FI = Fixed Interval; EXT = Extinction.
Response rates were similar between the CI and MI components, except for P5 during BL. During the test, response rates were lower than in BL for three of five participants. The results in Table 2 suggest that the addition of the spring-loaded button (and the increase in physical force that was required to respond) did not reliably affect the mean reinforcement rate during the test relative to BL. The results in Figure 3 suggest that the addition of the spring-loaded button similarly disrupted the participants’ performance in both components by reducing the mean response rate, with few exceptions (P4 and P5 during MI and CI components, respectively).
Figure 3. Mean response rates during BL and Test sessions

Note. Mean response rates on a logarithmic scale and standard deviations during the last four BL sessions and all test sessions for each participant in both the CI component (gray bars) and MI component (black bars).
Figure 4 shows response rates during the test sessions as a proportion of mean response rates during the last four BL sessions for each participant in both the CI and MI components and the mean relative response rates of all participants during the test sessions in both components. The horizontal line at 1.0 indicates mean BL response rates. Greater deviations from 1.0 indicate greater changes in responding during the test relative to BL, suggesting a lower persistence of responding.
Figure 4. Proportion of mean response rates during test sessions

Note. Response rates during the test sessions as a proportion of mean response rates during the last four BL sessions for each participant in both the CI and MI components and mean response rates of all participants during the test session in both components. The force that was required to press the spring button is indicated in each graph. Note the different Y-axis scales among the graphs.
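The persistence measure plotted in Figure 4 can be sketched as follows. The helper name and the example rates are illustrative assumptions; only the computation (each test-session rate divided by the mean of the last four BL sessions) comes from the text.

```python
# Sketch of the proportion-of-baseline persistence measure: each test
# session's response rate is divided by the mean response rate of the
# last four BL sessions. Values near 1.0 indicate greater persistence;
# larger deviations indicate greater disruption.

def proportion_of_baseline(test_rates, baseline_rates):
    """Return each test-session rate as a proportion of mean BL rate."""
    bl_mean = sum(baseline_rates) / len(baseline_rates)
    return [rate / bl_mean for rate in test_rates]

# Illustrative numbers: BL mean of 40/min; test rates of 30 and 20/min
print(proportion_of_baseline([30.0, 20.0], [40.0, 40.0, 40.0, 40.0]))
# [0.75, 0.5]
```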
Considering proximity to the horizontal line as the measure of persistence, resistance to change was equal between the CI and MI components for P1 and P3 during the first test session. For P2, responding changed less in the MI component during the first test session. Responding for P1 and P2 changed less in the CI component than in the MI component during the second, third, and fourth sessions. For P3, responding changed less in the CI component during the second and fourth test sessions and was similar between components during the first and third test sessions. For P4, responding decreased during the CI component and increased during the MI component; during the first, third, and fourth test sessions, responses in the CI component changed less than in the MI component (i.e., greater deviations from 1.0 indicate greater changes in responding during the test relative to BL). For P5, responses in the CI component changed less than in the MI component during all three test sessions. Overall, four of five participants (P1, P2, P3, and P5) persisted more in the CI component than in the MI component, and one participant (P4) showed mixed results. The mean response rates across participants indicated that responding during the CI component was more persistent than in the MI component during the first three test sessions; it is worth noting that persistence was greater in the MI than in the CI component in only one of 19 test sessions.
Discussion
The present experiment was designed to examine the effects of different levels of instruction completeness on the behavioral persistence of humans engaged in a computer task. To accomplish that, we refined Podlesnik and Chase's (2006) procedure by making it more similar to studies conducted within a Behavioral Momentum framework. Thus, our participants were exposed to a multiple schedule of reinforcement with different levels of instruction completeness in the components, and their performances were compared in a within-subjects design. Overall, greater persistence occurred in the component correlated with the complete instruction (i.e., the CI component), corroborating the results obtained by Podlesnik and Chase.
The present experiment belongs to a second group of behavioral persistence studies in which reinforcer rate and magnitude are held constant, and other variables are manipulated (e.g., Grace et al., 1998b; Kuroda & Lattal, 2018; Luiz et al., 2021). In such studies, differential behavioral persistence occurs by manipulating the immediacy of reinforcement, response rate requirements, and response force requirements. Combined with those obtained by Podlesnik and Chase (2006), our results shed light on the effects of different levels of instruction completeness on behavioral persistence while reinforcer rate and magnitude are held constant. In an extension of the Podlesnik and Chase experiment, the present study examined these effects using a within-subjects design and exposed the participants to multiple sources of control in the same session using multiple schedules of reinforcement.
Extending the findings' reliability and generality (Perone & Hursh, 2013; Sidman, 1960) is important to the behavior-analytic field and advances knowledge about an important variable that controls human behavior in several contexts and yet has not received much attention under the Behavioral Momentum framework. Galizio (1979) stated that reinforcement history determines how instructions will control human behavior. However, in the present study, the reinforcement rate was very similar between the components and, thus, might not be responsible for the differential resistance to change. Nevertheless, verbal stimuli, such as instructions (see DeGrandpre & Buskist, 1991), play an important role in stimulus control and are part of the contingencies to which humans are exposed. Different contingencies (including verbal stimuli) can produce different behaviors. Thus, instruction completeness may affect control by different antecedent stimuli, thereby affecting human behavior differently.
Although our results extend those Podlesnik and Chase (2006) obtained, some limitations should be considered. For instance, using an AB design did not allow us to examine how the participants would respond if they returned to the BL conditions. Additionally, our experiment did not evaluate how the participants would respond on an FI schedule with no instructions, which might distinguish the effects of FI contingencies from those of the instructions. In an attempt to increase the knowledge about the effects of instructions on behavioral persistence, future studies could use more than two levels of instruction completeness and examine the effects of these instructions in the presence and absence of the instruction giver (e.g., Donadeli & Strapasson, 2015; Kroger-Costa & Abreu-Rodrigues, 2012; Ramos et al., 2015), which could approximate the experimental condition to everyday situations in which people are faced with multiple verbal prompts and can have their behavior observed by others. Furthermore, as occurs with the immediacy of reinforcement (Bell, 1999; Grace et al., 1998a), relative differences in response-rate requirements (Kuroda et al., 2018; Lattal, 1989; Nevin, 1974, Experiment 5), and response force (Luiz et al., 2020, 2021), our results demonstrate that different levels of instruction completeness can affect behavioral persistence when the reinforcement rate is the same between the components.
Acknowledgments
The present work was conducted by the first author in partial fulfillment of the requirements of a master's degree in Behavior Analysis at the State University of Londrina.
Compliance with Ethical Standards
Conflict of Interest. The authors reported no potential conflict of interest.
Funding. The third author held a CNPq Research Productivity Fellowship during this work (PQ2, Process: 311170/2016-1).
Ethical Approval. The Committee for Ethical Human Research of the Universidade Estadual de Londrina, Londrina-PR, Brazil, approved all procedures performed with the participants.
Availability of data and material. All data and materials are available from the corresponding author upon request.
References
Aló, R. M., Abreu-Rodrigues, J., Souza, A. S., & Cançado, C. R. X. (2015). The persistence of fixed-ratio and differential-reinforcement-of-low-rate schedule performances. Mexican Journal of Behavior Analysis, 41(1), 3–31.
Aranha, N., Oliveira Jr, J. M. de, Bellio, L. O., & Bonventi Jr, W. (2016). A lei de Hooke e as molas não-lineares, um estudo de caso. Revista Brasileira de Ensino de Física, 38(4). https://doi.org/10.1590/1806-9126-RBEF-2016-0102
Becker, R. M. (2011). ProgRef V4: Um software para coleta de dados em programas de reforço com humanos [Universidade Estadual de Londrina]. http://www.bibliotecadigital.uel.br/document/?code=vtls000167971
Bell, M. C. (1999). Pavlovian Contingencies and Resistance to Change in a Multiple Schedule. Journal of the Experimental Analysis of Behavior, 72(1), 81–96. https://doi.org/10.1901/jeab.1999.72-81
Cohen, S. L. (1996). Behavioral momentum of typing behavior in college students. Journal of Behavior Analysis and Therapy, 1, 36–51.
Cohen, S. L. (1998). Behavioral momentum: the effects of the temporal separation of rates of reinforcement. Journal of the Experimental Analysis of Behavior, 69(1), 29–47. https://doi.org/10.1901/jeab.1998.69-29
Cohen, S. L., Riley, D. S., & Weigle, P. A. (1993). Tests of behavior momentum in simple and multiple schedules with rats and pigeons. Journal of the Experimental Analysis of Behavior, 60(2), 255–291. https://doi.org/10.1901/jeab.1993.60-255
Costa, C., & Cançado, C. (2012). Stability Check: a program for calculating the stability of behavior. Mexican Journal of Behavior Analysis, 38(1), 61–71. https://www.redalyc.org/pdf/593/59335804005.pdf
Craig, A. R., Sweeney, M. M., & Shahan, T. A. (2019). Behavioral momentum and resistance to extinction across repeated extinction tests. Journal of the Experimental Analysis of Behavior, 112(3), 290–309. https://doi.org/10.1002/jeab.557
Cumming, W. W., & Schoenfeld, W. N. (1960). Behavior stability under extended exposure to a time-correlated reinforcement contingency. Journal of the Experimental Analysis of Behavior, 3(1), 71–82. https://doi.org/10.1901/jeab.1960.3-71
DeGrandpre, R. J., & Buskist, W. F. (1991). Effects of Accuracy of Instructions on Human Behavior: Correspondence with Reinforcement Contingencies Matters. The Psychological Record, 41(3), 371–384. https://doi.org/10.1007/BF03395119
Donadeli, J. M., & Strapasson, B. A. (2015). Effects of Monitoring and Social Reprimands on Instruction-Following in Undergraduate Students. The Psychological Record, 65(1), 177–188. https://doi.org/10.1007/s40732-014-0099-7
Galizio, M. (1979). Contingency-shaped and rule-governed behavior: instructional control of human loss avoidance. Journal of the Experimental Analysis of Behavior, 31(1), 53–70. https://doi.org/10.1901/jeab.1979.31-53
Grace, R. C., Schwendiman, J. W., & Nevin, J. A. (1998a). Effects of unsignaled delay of reinforcement on preference and resistance to change. Journal of the Experimental Analysis of Behavior, 69(3), 247–261. https://doi.org/10.1901/jeab.1998.69-247
Grace, R. C., Schwendiman, J. W., & Nevin, J. A. (1998b). Effects of unsignaled delay of reinforcement on preference and resistance to change. Journal of the Experimental Analysis of Behavior, 69(3),
Igaki, T., & Sakagami, T. (2004). Resistance to change in goldfish. Behavioural Processes, 66(2), 139–152. https://doi.org/10.1016/j.beproc.2004.01.009
Kissi, A., Harte, C., Hughes, S., De Houwer, J., & Crombez, G. (2020). The rule-based insensitivity effect: a systematic review. PeerJ, 8, e9496. https://doi.org/10.7717/peerj.9496
Kroger-Costa, A., & Abreu-Rodrigues, J. (2012). Effects of historical and social variables on instruction following. The Psychological Record, 62(4), 691–706. https://doi.org/10.1007/BF03395829
Kuroda, T., Cook, J. E., & Lattal, K. A. (2018). Baseline response rates affect resistance to change. Journal of the Experimental Analysis of Behavior, 109(1), 164–175. https://doi.org/10.1002/jeab.285
Kuroda, T., & Lattal, K. A. (2018). Behavioral control by the responsereinforcer correlation. Journal of the Experimental Analysis of Behavior, 110(2), 185–200. https://doi.org/10.1002/jeab.461
Lacerda, R., Luiz, A., & Costa, C. (2022). Technical Report: a relatively low-cost equipment to investigate physical effort in humans. Experimental Analysis of Human Behavior Bulletin, 33(1), 1–7.
Lattal, K. A. (1989). Contingencies on response rate and resistance to change. Learning and Motivation, 20(2), 191–203. https://doi.org/10.1016/0023-9690(89)90017-9
Luiz, A., Costa, C. E., Banaco, R. A., & Tsutsumi, M. M. A. (2021). Effects of different physical-effort requirements on resistance to extinction in humans. European Journal of Behavior Analysis, 23(1), 30–41. https://doi.org/10.1080/15021149.2021.1932344
Luiz, A., Costa, C. E., & Cançado, C. R. X. (2019). Aspectos Históricos, Teóricos e Metodológicos da Teoria do Momentum Comportamental. Perspectivas Em Análise Do Comportamento, 10(1), 129–146. https://doi.org/10.18761/pac.tac.2019.007
Luiz, A., Costa, C. E., dos Santos, J. R., & Tsutsumi, M. M. A. (2020). Resistance to change as function of different physical-effort requirements in humans. Behavioural Processes, 176(February), 104123. https://doi.org/10.1016/j.beproc.2020.104123
Mace, F. C., Lalli, J. S., Shea, M. C., Lalli, E. P., West, B. J., Roberts, M., & Nevin, J. A. (1990). The momentum of human behavior in a natural setting. Journal of the Experimental Analysis of Behavior, 54(3), 163–172. https://doi.org/10.1901/jeab.1990.54-163
Madden, G. J., & Perone, M. (1999). Human sensitivity to concurrent schedules of reinforcement: effects of observing schedule-correlated stimuli. Journal of the Experimental Analysis of Behavior, 71(3), 303–318. https://doi.org/10.1901/jeab.1999.71-303
Nevin, J. A. (1974). Response strength in multiple schedules. Journal of the Experimental Analysis of Behavior, 21(3), 389–408. https://doi.org/10.1901/jeab.1974.21-389
Nevin, J. A. (1979). Reinforcement schedules and response strength. In M. D. Zeiler & P. Harzem (Eds.), Reinforcement and the organization of behavior (Vol. 1, pp. 117–158). John Wiley & Sons.
Nevin, J. A. (2015). Behavioral Momentum: a Scientific Metaphor. The Tisbury Printer.
Nevin, J. A., Mandell, C., & Atak, J. R. (1983). The analysis of behavioral momentum. Journal of the Experimental Analysis of Behavior, 39(1), 49–59. https://doi.org/10.1901/jeab.1983.39-49
Nevin, J. A., & Wacker, D. P. (2013). Response strength and persistence. In G. J. Madden, W. V. Dube, T. D. Hackenberg, G. P. Hanley, & K. A. Lattal (Eds.), APA handbook of behavior analysis, Vol. 2: Translating principles into practice. (Vol. 2, pp. 109–128). American Psychological Association. https://doi.org/10.1037/13938-005
Perone, M., & Hursh, D. E. (2013). Single-case experimental designs. In APA handbook of behavior analysis, Vol. 1: Methods and principles. (pp. 107–126). American Psychological Association. https://doi.org/10.1037/13937-005
Podlesnik, C. A., & Chase, P. N. (2006). Sensitivity and strength: effects of instructions on resistance to change. The Psychological Record, 56(2), 303–320. https://doi.org/10.1007/BF03395552
Podlesnik, C. A., & DeLeon, I. G. (2015). Behavioral Momentum Theory: Understanding Persistence and Improving Treatment. In F. D. DiGennaro Reed & D. D. Reed (Eds.), Bridging the Gap Between Science and Practice in Autism Service (pp. 327–351). Springer. https://doi.org/10.1007/978-1-4939-2656-5_12
Ramos, M. N., Costa, C. E., Benvenuti, M. F., & Andrade, C. C. F. (2015). Efeito de Regras Inacuradas e Monitoramento sobre Desempenhos em Programas de Reforços. Psicologia: Reflexão e Crítica, 28(4), 813–822. https://doi.org/10.1590/1678-7153.201528420
Sidman, M. (1960). Tactics of scientific research: Evaluating experimental data in psychology. Basic Books.
Skinner, B. F. (1938). The behavior of organisms: An experimental analysis. Appleton-Century-Crofts.
Trump, C. E., Herrod, J. L., Ayres, K. M., Ringdahl, J. E., & Best, L. (2021). Behavior Momentum Theory and Humans: A Review of the Literature. Psychological Record, 71(1), 71–83. https://doi.org/10.1007/s40732-020-00430-1
Table A1
Response rate (per min) in each component during BL and Test sessions.

