The usual partial reinforcement extinction effect, demonstrating greater resistance to extinction after intermittent than after continuous reinforcement, seems to contradict this generalization. The above strategies do not use full information for estimating the number o… On each trial the jays could choose between two patches: (1) a non-depleting patch with a low, uniform prey density, and (2) a depleting patch that had a high initial prey density but depleted in a single step later in the session.
Greater persistence in initially lower quality patches is consistent with the following two hypotheses about patch-leaving decision rules: (1) a bird leaves a patch when the estimated interval since the last prey capture reaches some value relative to the estimated mean inter-capture interval in the environment (a flexible giving-up-time hypothesis), or (2) a bird leaves a patch when the estimated probability of capturing a prey on the next capture attempt falls below that estimated for the whole environment (a capture-probability hypothesis). Mean inter-capture interval and mean number of inter-capture pecks accounted for a significant amount of the variance in giving-up time in three of four and four of four birds, respectively. This generalization does not, however, hold for discrete-trial performance. The paper ends with a discussion of future research on the assembly and wider application of a foraging-ecology model of consumer behaviour.
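The flexible giving-up-time rule in hypothesis (1) can be written as a one-line decision function. This is a minimal sketch, not the fitted model from the study: the threshold factor k and both running estimates are illustrative assumptions.

```python
def should_leave_patch(time_since_last_capture, mean_intercapture_interval, k=1.5):
    """Flexible giving-up-time rule: leave the patch when the interval
    since the last capture exceeds k times the estimated mean
    inter-capture interval for the whole environment.
    k = 1.5 is a hypothetical threshold factor, not a value from the study."""
    return time_since_last_capture > k * mean_intercapture_interval
```

Under this sketch, a forager that has waited 12 s in an environment whose mean inter-capture interval it estimates at 6 s would leave, while one that has waited only 5 s would stay.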
We emphasise the distinction between the mathematical procedure that can be used to find optimal solutions and the mechanism an animal might use to implement such solutions. Such behaviour does not maximize the rate of energy intake in this environment. Models that explicitly account for uncertainty in decision-making and learning will potentially be important tools. Sensitivity was similar across conditions, for the basic shorter-longer rule and for the more complex rule of one duration as two or four times the other.
In the first experiment, either 256 or 1024 pecks were allowed per session. Subjects often left directly after a capture, perhaps an example of the Concorde fallacy. Efficiency declined when conditions changed, then recovered. Each condition began with equal probabilities of reinforcement on two response keys and switched to unequal probabilities. Two areas of research, the behavioural ecology of consumption and information foraging, have made strides in the application of foraging theories to consumption and related behaviours.
Relative time allocation followed changes in reinforcer distribution more closely when there were fewer changes within a session, with longer components, and at higher overall rates of reinforcement. They delayed this switch much too long. Moreover, for substitutes, brand selection is price sensitive, suggesting both melioration and maximization; for nonsubstitutes, choice is not price sensitive but still appears consistent with maximization of price- and nonprice-related sources of value. The data agree with the predictions of a statistical decision theory model that compares the probability of capturing a prey in the current patch with the probability of capturing prey in alternative patches (the capture-probability model). All models in the class share a number of ecologically important properties, but there are important differences as well.
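The tracking of reinforcer ratios described above is conventionally summarized by the generalized matching law, B1/B2 = b·(R1/R2)^s. A minimal sketch of the predicted allocation, with illustrative sensitivity and bias parameters (strict matching corresponds to s = 1, b = 1):

```python
def generalized_matching(r1, r2, sensitivity=1.0, bias=1.0):
    """Generalized matching law: B1/B2 = bias * (R1/R2) ** sensitivity.
    Returns the predicted proportion of behaviour allocated to
    alternative 1.  sensitivity = 1 and bias = 1 reduce to strict matching."""
    ratio = bias * (r1 / r2) ** sensitivity
    return ratio / (1.0 + ratio)

# Strict matching: a 3:1 reinforcer ratio predicts 75% of behaviour
# allocated to alternative 1.
```

Undermatching, commonly observed in such experiments, corresponds to sensitivity below 1, which pulls the predicted allocation toward indifference.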
We show that the Bayesian algorithm can reproduce naturalistic patterns of probabilistic foraging in simulations of an experiment in bumblebees. At the same time, optimal foraging theory has shown that the ability of animals to respond adaptively to the characteristics of their food supply can have considerable effects on foraging efficiency and therefore, presumably, on fitness. Our purpose in this chapter is to show how the natural history and behavioral ecology of foraging animals can provide an excellent starting point for research on learning and memory. Responses on a left key were reinforced at variable intervals for the first 25 s since the beginning of a 50-s trial, and right-key responses were reinforced at variable intervals during the second 25 s. Such correspondence both attests to the correct identification of this source of variance and suggests ways to remove it, both from behavior and from our models of behavior. A static law, matching, has been established, but there is no consensus on the underlying dynamic process. These failures highlight our limited understanding of the role of memory in timing and hint at additional mechanisms.
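The Bayesian algorithm itself is not specified in this excerpt; as a generic illustration of the kind of update such an algorithm performs, a conjugate Beta-Bernoulli estimate of a patch's reward probability looks like this (prior and outcomes are hypothetical):

```python
def update_beta(alpha, beta, rewarded):
    """Conjugate Beta-Bernoulli update of the estimated probability that
    a patch visit yields reward.  alpha and beta are the Beta-distribution
    pseudo-counts of rewarded and unrewarded visits."""
    return (alpha + 1, beta) if rewarded else (alpha, beta + 1)

def posterior_mean(alpha, beta):
    """Point estimate of the reward probability under Beta(alpha, beta)."""
    return alpha / (alpha + beta)

# Starting from a uniform prior Beta(1, 1), two rewarded visits and one
# unrewarded visit shift the estimate from 0.5 to 3/5 = 0.6.
a, b = 1, 1
for outcome in (True, True, False):
    a, b = update_beta(a, b, outcome)
```

This is only a sketch of Bayesian belief updating under the simplest reward model; the paper's algorithm may track richer patch statistics.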
As relative reinforcement rates were varied, psychometric functions showed shifts in green-key responses at all durations. These and previous results pose difficulties for some well-known models of acquisition, but the results are well described by a simple model that states that the strength of each response is independently increased by reinforcement and decreased by nonreinforcement. It is the thesis of the present analysis that the discrimination of temporal intervals is based on the discrimination of behaviors that are elicited by the direct effects of reinforcement, and that the latter comprise behavior's time. The model can also duplicate results from several other experiments on extinction after complex discrimination training. When items were encountered on variable-interval schedules, birds were more likely to accept a poor item (long delay to food) the longer they had just searched, as if they were averaging prey density over a short memory window (Experiment 1). The importance of several measures of the precise sequence of events in individual sessions was assessed with selected averaging algorithms. The dependent variable is some change in the stance of the organism: around the chamber, toward a switch, between two switches.
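The acquisition model described above, in which each response's strength is independently incremented by reinforcement and decremented by nonreinforcement, can be sketched as a linear-operator update. The learning rates here are illustrative assumptions, not fitted values:

```python
def update_strength(v, reinforced, up=0.1, down=0.05):
    """Linear-operator sketch of the acquisition model: response strength v
    moves up toward 1 after reinforcement and decays toward 0 after
    nonreinforcement, independently of any other response.
    up = 0.1 and down = 0.05 are hypothetical learning rates."""
    return v + up * (1.0 - v) if reinforced else v - down * v
```

Because each response is updated on its own, choice proportions under this scheme emerge from the independent strength trajectories rather than from any direct comparison between alternatives.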
For instance, acquisition of preference was faster in conditions with reinforcement probabilities of… Caught fish were not replaced, and ponds varied in how many fish they initially contained according to three different distributions. The results are consistent with the predictions of the model. A signal-detection analysis showed that sensitivity remained roughly constant across conditions while response bias changed as a function of changes in relative reinforcement rate. Each of these fixed quantities has the value that optimizes the strategy concerned, given certain conditions. An unstable patch offered a higher initial reinforcement probability, which then declined unpredictably to a zero reinforcement probability in each session. However, aggregate studies of consumer choice indicate two modes of consumer brand purchase within a product category: either exclusive purchase of one brand or multibrand purchasing.
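The separation of sensitivity from response bias in the signal-detection analysis rests on the standard indices d' and criterion c, computed from hit and false-alarm rates via the inverse normal CDF. A minimal sketch:

```python
from statistics import NormalDist

def dprime_and_bias(hit_rate, false_alarm_rate):
    """Standard signal-detection indices: sensitivity d' = z(H) - z(F)
    and criterion c = -(z(H) + z(F)) / 2, where z is the inverse
    standard-normal CDF.  c = 0 indicates no response bias."""
    z = NormalDist().inv_cdf
    zh, zf = z(hit_rate), z(false_alarm_rate)
    return zh - zf, -0.5 * (zh + zf)

# Symmetric performance (hit rate 0.8, false-alarm rate 0.2) gives an
# unbiased criterion near 0; shifting reinforcement toward one
# alternative moves c while leaving d' unchanged.
```

A constant d' with a shifting c is exactly the pattern described above: discriminability stable, bias tracking relative reinforcement rate.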
A trial-by-trial analysis of individual responses and reinforcers suggested that reinforcement had both short-term and long-term effects on choice. Incentive theory is extended to address the phenomenon of autoshaping. In three birds, recent information had a strong influence on giving-up time. Pigeons were trained on operant schedules simulating successive encounters with prey items. In the second experiment, long sessions occurred in one environment and short sessions in another.
Increases and decreases in reinforcement probability produced both transient and longer-lasting changes in timing behavior, once again in accord with predictions of BeT. Time available for foraging (the time horizon) should influence optimal behaviour according to some foraging models. The augmented version of the behavioral theory further improved the correspondence between the theory and the correlational data reported by Gibbon and Church. These observations agree with the results of controlled experiments showing more patch persistence in the face of unsuccessful search when initial patch quality is low than when it is high. We present a simple rule which asymptotically learns about and optimally exploits this environment.
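The recency weighting that runs through these results (short memory windows, strong influence of recent information on giving-up time) can be sketched as an exponentially weighted moving average of the environmental estimate. The weight w is an illustrative assumption, not a parameter reported above:

```python
def ewma_update(estimate, observation, w=0.3):
    """Exponentially weighted moving average of, e.g., the inter-capture
    interval: recent observations dominate, older ones decay geometrically,
    producing the short 'memory window' suggested by the data.
    w = 0.3 is a hypothetical recency weight."""
    return (1 - w) * estimate + w * observation
```

With w = 0.5, for example, a current estimate of 10 s and a new observation of 20 s yield an updated estimate of 15 s; larger w makes giving-up decisions track recent capture history more closely.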