Week #5 - Sec 2.5 Readings Comment (Due Thursday).


Please use this to comment on your reading for sec 2.5. I'll leave it up to you what and how you would like to comment; however, I would ask that you attempt to write using behavioral terms. I will also use this as a way to 'time stamp' that you read the section on or before Thursday.

Let me know if you have any questions,

--Dr. M

24 Comments

This section was very confusing to me, and it was hard to keep the schedules of reinforcement straight. However, I also think that this section related to more real-world examples, such as the fixed interval schedule of receiving a paycheck every two weeks. That is something I can relate to, so it makes fixed intervals easier to understand. It was hard to come up with examples for FI, FR, VI, and VR because it is hard to keep them straight, but once you find ones that work they make a lot more sense. If we could just go over these concepts in class, it would be a lot more helpful for clarifying the terms and coming up with better real-life examples.

I agree with Wesely. I thought this reading was the most confusing one so far. I had a lot of trouble thinking of examples, and I couldn't keep the concepts straight. I think it would definitely help if we could talk about these again in class; hopefully then I will be able to remember the differences between the concepts. I did, however, think that the examples on page 18 were helpful. I will definitely have to read this part of the chapter again to fully understand it.

For section 2.5 I learned about schedules of reinforcement and what is known as the operant chamber. An operant chamber is basically a chamber that contains rats or pigeons and reinforces the animal's behavior every so often. I also learned about biological continuity, which applies not only to animals but also to humans; biological continuity is the belief that all living things are arranged by their complexity, as stated in the reading.

This section also talks about continuous reinforcement, which happens a lot in the real world. I never really thought about a TV remote or my keyboard as being a continuous reinforcer. The reading also talked about ratio schedules, which involve emitting an action a certain number of times before the action is reinforced. I also learned about interval schedules, which I had learned before, but this time they were used in terms of reinforcement.

In addition, this section talked about fixed ratios, variable ratios, and fixed and variable intervals. With fixed ratios, it takes a certain number of responses before there is reinforcement; for example, a person has to pull the handle of a slot machine 30 times before he or she will win something. With variable ratios, an average number of responses is required for reinforcement. With fixed and variable intervals, the only difference is that the reinforcement comes after a certain amount of time; for example, the bird is reinforced for pecking at the bird feeder after 15 seconds, or, on average, every 15 seconds.

I really liked the exercises in this section as well. They helped me get used to thinking about these terms, when to use them, and how to recognize which was which. I really feel like I understand this information.
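To tie the slot machine example to the notation, here is a minimal Python sketch of the two ratio rules. This is illustrative only; the class names and the uniform sampling of the variable requirement are assumptions of mine, not anything from the reading:

```python
import random

class FixedRatio:
    """FR n: exactly every nth response is reinforced (e.g., an FR 30 slot machine)."""
    def __init__(self, n):
        self.n = n
        self.count = 0

    def respond(self):
        self.count += 1
        if self.count == self.n:
            self.count = 0
            return True   # reinforcer (the payout) is delivered
        return False

class VariableRatio:
    """VR n: reinforced after a varying number of responses that averages n."""
    def __init__(self, n):
        self.n = n
        self.count = 0
        self.need = random.randint(1, 2 * n - 1)  # uniform draw whose mean is n

    def respond(self):
        self.count += 1
        if self.count >= self.need:
            self.count = 0
            self.need = random.randint(1, 2 * self.n - 1)
            return True
        return False

# Like a machine that pays out on average once every 30 pulls:
slots = VariableRatio(30)
pulls = 1
while not slots.respond():
    pulls += 1
print(pulls)  # varies from run to run, but averages about 30
```

The one-line difference between the two classes is the whole fixed-versus-variable distinction: FR resets to the same requirement every time, while VR draws a fresh requirement around the same average.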

I agree that this chapter was confusing for me and hard to keep straight. The examples helped, but I still struggled with coming up with examples on my own; further clarification in class would be helpful. I may just be overthinking the concepts. I did like that some of the new concepts I could relate to my everyday life, such as interval schedules. Also, many concepts that I knew existed were given a name, for example fixed ratios. You gave examples of reinforcers that I never would have thought of as reinforcers, for example the TV remote. I think it makes you look outside the box to see why we behave the way we do.

I also thought this section was relatively challenging. While the examples you used seemed to make sense, especially the gambling ones, it was somewhat difficult to think of unique ones off the top of my head. Still, I liked the idea of the operant chamber and how learning mechanisms can be applied from lower animal species to higher animal species such as humans. This biological continuity refers to the belief that all living things in the world can be arranged according to their complexity; the idea of the great chain of being was a helpful illustration of this concept for me. Regarding ratios and intervals, the example of the slot machine was helpful: the number of times we pull the handle (ratio) and the passage of time (interval), all before reinforcement. Regarding fixed and variable, if the slots are fixed at, say, 24 pulls, then on the 24th attempt you will be reinforced. Within a variable schedule of reinforcement, the slot may be manipulated so that reinforcement occurs on average once every 30 attempts. This is why, I believe, many slot players watch other players and their machines: if someone has a long stretch of losing and leaves their machine, oftentimes someone will swoop in there quickly because they think the reinforcer is likely due.

I also found this section to be somewhat challenging at times. It was hard for me to come up with three examples of FR, FI, VR, and VI. I could usually come up with one fairly easily but it was difficult to come up with three. Even though I have learned about these schedules of reinforcement before, it's difficult to come up with examples, even if you understand the differences between them. It was nice to talk about everyday activities as being continuously reinforced, such as turning on the faucet and water coming out. We never think of something like that as reinforcement, but I guess that's part of learning about behavior modification. Everything we do can definitely be put into behavioral terms, and this definitely proved that.

I guess this section was pretty much review for me; I've learned about Skinner boxes and schedules of reinforcement before. I know that to learn something the fastest you use fixed ratio schedules, because if you know exactly how many times you have to do something to get reinforced, you will do that behavior as much as possible to get as many rewards as possible. What I thought was interesting was thinking of real-life examples for the intervals and how they affected me. I thought of studying as a variable interval schedule: I study all the time but only get reinforced for it once in a while, which makes me less motivated to do so. On the other hand, when I play Call of Duty, I get so many frags and get reinforced on a fixed ratio schedule, with a new rank and new stuff to play with, making this behavior very addictive, if a bit predictable and boring at times. Also, I go to the casino and play blackjack, which reinforces me on a variable ratio schedule; I don't know how many hands it will take, but eventually I will get reinforced. I think this aspect contributes greatly to people becoming gambling addicts: they keep thinking the next hand will be the one they win. Fixed interval made me think of payday; I get paid every two weeks on Tuesday, usually making it the best day of that week. I hope my examples help.

What I took from Section 2.5:

Wow! It was crazy trying to keep VI, VR, FI, and FR straight. The concepts make sense to me, which is a relief. I will admit that I had a heck of a time coming up with real-life examples for VI. I don't think I'm thinking creatively enough, or at least I'm not thinking like a behaviorist when it comes to these concepts. What delivers reinforcement at varying periods of time? Like you mentioned in the section, possibly lottery machines. Other than that, I can only think of varied-number examples, i.e., VR examples. I will say that I like how you made us provide examples; it helps me apply the newly learned knowledge. I just don't like it when I can't think of anything. I'm interested to hear from others in class what kinds of examples they have.

VI, VR, FI, BI, MI, AHHH! Section 2.5 was a little intense ;)
At least I understood the beginning: the operant chamber of animal-testing goodness. I found the first few pages on that and on continuous reinforcement easily understandable, and I actually really liked the keyboard example.
Then we got into the fixed and variable schedules of reinforcement. As for the upcoming book, like I said, you have really good examples that are really helping my poor soul try to understand; I just don't really know if I did the examples right. Also, the graphs in this section seemed to help me a little; visuals are always my thing :)
In the end, I really hope we will be discussing these further in class, maybe with a 5-minute review as a class before splitting off into groups, since after looking at previous comments this section seemed to confuse a lot of people. It would be slightly futile to have groups of people discuss how much they don't understand the concepts.

Guess I'm hopping on the confused train. I had to reread this section several times to even begin to understand it, and I'm still confused! I learned the basics of Skinner's findings and theories in Intro to Psych, but I didn't realize it was this intense! I wish the section where we were supposed to answer FI, VI, VR, and FR had answers to go with it; I wasn't sure if I was doing it right at all. I have to agree with most everyone else, too: thinking of real-life examples was a challenge. I'm anxious to get to class so that we might be able to talk some of this out and give all of us a better understanding of what's going on.

This section was definitely challenging. In retrospect, I don't think there was a ton of information, nor do I think the concepts were that hard to grasp in and of themselves. On top of that, it is completely logical to have all of these in the same section; it wouldn't make much sense to break them up into separate sections, as they are all closely related. But still, after reading and completing the questions (which took a good amount of time), I feel like I bit off a giant chunk of information. Nearly everything in the chapter revolved around two ideas: fixed vs. variable, and ratio vs. interval. Simple enough. The difficulty really comes about in the details and nuances of each idea, which I think led to difficulty with establishing my own examples. Just like the other sections, once I read the definitions and examples in the text, I felt like I had the gist of everything, but when it came to formulating my own examples it was tough to come up with things I felt matched the criteria.

One of those nuances that was really giving me problems was variable intervals and variable ratios. After getting through the entire section, I think that "variable" just means anything that can be pinned down to an average time frame or range of frequencies, but not to any number in particular. Early in my reading, though, I was under the impression that a variable had to be more concrete: that it needed a distinct range, or if not a distinct range, at least a predetermined target it had to reach at some point (I think this misunderstanding came from the example of the lever pulls under "Fixed versus Variable Schedule of Reinforcement"; I adhered to that example too closely). The sort of example I had in my head was from my old place of employment. My boss was very strict about not letting people get over 40 hours a week. This meant that if you were scheduled for 40 hours and you stayed an extra 3 minutes on Tuesday, you had to "lose" those 3 minutes somewhere before the end of the week (by clocking in 3 minutes late or clocking out 3 minutes early one day). So if you were working 5 days in the week, you had your "predetermined target" of 40 hours, and by the end of the week the different days would balance out to exactly 40 hours. I know that probably seems complicated, and it might be difficult to follow (read: not make any sense at all), but that's the way I had variable reinforcement conceptualized in my head, and it was making it extremely difficult to come up with examples that matched that sort of strict criteria.

Another thing that confused me was the question "What would the notation be for the following? 'On the average... every time Juanita throws a ball of paper in the trash she makes it in.'" If Juanita, on average (mean, not median or mode), made it in every time she threw it, then wouldn't she make it every time? And wouldn't that be FR1 (not VR1), because you could never make a ball of paper in the trash with 0 attempts, and if she ever needed 2 throws there would be no 0-throw makes to bring the average back down to 1? Maybe stuff like this is why it took me way too long to finish the reading; I swear I'm not nitpicking, that's just what went through my mind.
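Working through that Juanita intuition with made-up numbers convinces me it holds: every make needs a whole number of throws, at least one, so if the mean is exactly one throw per make, then every make took exactly one throw, and "VR 1" collapses into FR 1, i.e., continuous reinforcement. A quick illustrative check in Python (the data are invented):

```python
# Hypothetical record of how many throws each "make" took.
# Every entry must be a whole number >= 1 (no 0-throw makes exist).
throws_per_make = [1, 1, 1, 1, 1]

assert all(t >= 1 for t in throws_per_make)              # minimum of 1 throw
assert sum(throws_per_make) / len(throws_per_make) == 1  # mean of exactly 1
# Together, those two facts force every entry to be exactly 1,
# so the schedule is FR 1 (continuous reinforcement), not a true VR:
assert all(t == 1 for t in throws_per_make)
```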

Another question I had was whether this only works with reinforcement. With my already screwy conceptualization of variable, my VI example was a light bulb left on continuously, but I don't see any reinforcement when it burns out, so is that really VI?

I also forgot about continuous reinforcement for a while during the reading and just considered it fixed ratio; that threw me off for a while.

I'm rambling, so I'm going to cut this comment off. I don't think I would read this entire thing haha. I look forward to class on Tuesday to get a better grasp of these ideas, though.

I've studied the schedules of reinforcement before but generally just as an overview. I thought it was a little confusing so I really don't have much to say about it. I think if I read it a few more times I'll be able to understand (it may also help if I don't read it after working so many hours). I'm going to be going over it a few more times before class to see if I can get a grasp on it. Otherwise, hopefully we'll be able to talk about it.

I think I'm going to go over this section again; I'm having trouble thinking up examples for the exercises.

I will agree that this is the most challenging section of the book thus far. The part that stuck out to me was the difference between ratio and interval reinforcement schedules. An example of an interval schedule would be waiting for food to cook: it doesn't matter how many times you check to see if it is done, it will still require a certain amount of time to cook. For an example of a ratio schedule, I am going to use the door code for my sorority house, because it normally takes right around 3 tries to get the code right, either because of operator error or because the code box gets too cold.

I am also going to have to review this chapter again. Maybe rereading it will help me get a solid idea.

This was by far my favorite reading in the class so far. The relative strengths of the various schedules of reinforcement (i.e., highest to lowest: VR, FR, VI, FI) are intuitive once you first figure out the terminology. The final section explaining the use of these various schedules by a bartender really helped pull it all together for me.

Again, lots of advice I've gotten about performing good magic, I now see, follows in line with good behavior modification practices. Basically, if you want people to do what you want them to (that is, if you want to streamline the modification of other people's behavior toward desired target behaviors, or minimize the time and difficulty with which other people learn new things), you ought to rely most frequently on positive reinforcement and variable ratio schedules of reinforcement, and, when necessary, on disruption of operant behavior rather than punishment.

An example of a variable ratio (VR) schedule of reinforcement in a magic show is giving a round of applause to the helper. Basically, the audience has been conditioned to know that if they are helpful they'll get applause, but they do not know how many helping behaviors they'll have to emit during the effect they have been selected to participate in, in order to get that applause. Thus, it is a ratio schedule because their reinforcement comes from their behavioral output, not from the passage of time alone, and it is variable because sometimes they have to simply name a card, other times follow numerous directions, in order to get the applause (it would be difficult to estimate how many helping behaviors on average each person performs, maybe VR10). If I reinforced them after every helping behavior they emitted, it would be a type of continuous reinforcement (FR1 = CR).

Often performers giving a longer show offer a fixed interval (FI) schedule of reinforcement by letting their audience know that if they sit quietly and watch the show (emit the appropriate target behaviors), there will be a 20-minute intermission with free refreshments (a form of positive reinforcement) after the first hour of performance (an FI 1-hour schedule).

An example of a variable interval schedule of reinforcement would be reinforcing audience members after different amounts of time have passed. Two magic principles come to mind that give an example of how this is used in magic: variability and increasing impossibility. To stop people from getting bored, after certain amounts of time have gone by you vary the type of effect you're performing, yet always keep making the "impossibility factor" increase. For example: after a 5-minute demonstration, reinforce the audience with an effect higher on the "impossibility scale," then perform 8 minutes of material lower on the scale (but higher than the first 5 minutes), followed by another reinforcer that is higher on the impossibility scale than the previous one (the one performed after your first 5-minute set). It may appear to the audience that they are being reinforced on a VR type of schedule, since first 2 effects were done followed by a miracle, then 5 followed by a miracle, etc. But the performer is doing this based exclusively on the length of time that has passed since the last moment of amazement. Since the reinforcement is based on time, not behaviors emitted, it is an interval and not a ratio schedule.

This section was very long, but I learned some new information. Continuous reinforcement (CR) refers to getting reinforced every time a behavior is emitted. Ratio refers to the number of behaviors emitted, while interval refers to the amount of time passing between each reinforcement period. The graphs throughout the reading were also helpful. VR will produce a high, steady rate of responding with a marginal pause. FR will produce a high, steady rate of responding with some pause. VI will produce a moderately steady rate of responding with a marginal pause. FI will produce responding that increases gradually until the reinforcement, with a considerable pause afterwards. I will need help with other examples of VR, VI, FI, and FR.
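Those response patterns can also be motivated with a toy simulation. The sketch below is not from the reading: the parameters, the one-second time steps, and the steady-responder assumption are all invented for illustration. It shows one reason ratio schedules sustain higher response rates than interval schedules: on FR and VR, responding faster earns proportionally more reinforcers, while on FI and VI it barely matters.

```python
import random

def reinforcers_earned(kind, value, rate_per_min, minutes=60):
    """Count reinforcers a steady responder earns under one schedule.

    kind: 'FR'/'VR' (value = responses) or 'FI'/'VI' (value = seconds).
    The responder emits a response each second with probability rate/60.
    """
    p = rate_per_min / 60.0
    earned = 0
    count = 0                                  # responses toward a ratio
    since = 0.0                                # seconds since last reinforcer
    need = value if kind == 'FR' else random.randint(1, 2 * value - 1)
    wait = value if kind == 'FI' else random.uniform(0, 2 * value)
    for _ in range(minutes * 60):
        since += 1
        if random.random() >= p:
            continue                           # no response this second
        if kind in ('FR', 'VR'):
            count += 1
            if count >= need:
                earned, count = earned + 1, 0
                need = value if kind == 'FR' else random.randint(1, 2 * value - 1)
        else:                                  # FI / VI: the first response
            if since >= wait:                  # after the interval collects it
                earned, since = earned + 1, 0.0
                wait = value if kind == 'FI' else random.uniform(0, 2 * value)
    return earned

for kind, value in [('FR', 10), ('VR', 10), ('FI', 30), ('VI', 30)]:
    slow = reinforcers_earned(kind, value, rate_per_min=10)
    fast = reinforcers_earned(kind, value, rate_per_min=60)
    print(f"{kind} {value}: slow responder earned {slow}, fast responder earned {fast}")
```

Running it, the FR and VR counts grow roughly sixfold when the response rate does, while the FI and VI counts stay nearly flat.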

I also agree that this section was a bit challenging for me to understand, but this could just be because the subject matter is more difficult than what we have been discussing in past weeks. I think I do understand the differences between variable and fixed reinforcement, and also between interval and ratio reinforcement. It makes sense that interval is about time and ratio is about the number of times. Also, I understand that variable means the reinforcement can happen at random around an average, while fixed means it always happens every Nth time. I have learned about these concepts in previous classes, and I think I understood them then. For some reason, though, I had difficulty coming up with examples, especially at first; it became easier as I got further along in the chapter. I really did appreciate all the examples and the opportunities to think of my own. I think that really helped with this more challenging topic.

I think continuous reinforcement is so important, and so important to do correctly. The desired behavior needs to be reinforced every single time it occurs, and it is used best in the initial stages of learning, when it is important to create a strong association between the behavior and the reinforcer. It seems to me that continuous reinforcement would be more effective than any ratio reinforcement. Gambling is an example of ratio reinforcement, and I guess it is effective in parenting also; you can't always reward a kid every time for a good behavior. I like reading Skinner's work on reinforcement.

I thought that this section of reading for BMOD was very interesting. It seems that examples are hard to come up with, in this case I mean more than just one example for each case (maybe because you took all the good, obvious ones, Otto!). But in the end, I was able to come up with examples for each. I guess the one thing that seemed confusing was coming up with specific examples for the variable and fixed schedules of reinforcement. It seems that whichever one you choose, whether it be variable or fixed, depends on how one interprets the present situation: whether it's the time we're talking about while one is expressing a certain behavior, or something related to the number of behaviors one would exhibit in such a situation. To me, most things seem to be variable in nature but are actually fixed schedules of reinforcement. I didn't really have trouble getting through the reading and understanding what was going on, but upon having to think of examples for each situation I came to a roadblock and probably was thinking harder than necessary. I really liked the section about Skinner and his Skinner boxes; they were always something that I found very interesting. Overall this was a really good section, and something that might take a little more time to fully grasp, but I feel as though I've got a good understanding of what is going on in this chapter and how to apply fixed, variable, interval, and ratio to different schedules of reinforcement.

Learning about the schedules of reinforcement in section 2.5 took reading it over a few times for the information to finally click! While it was somewhat easy to understand the differences between FI, FR, VI, and VR, when it came down to it, it was much harder for me to come up with real-life examples of each type of reinforcement. What really made the difference was realizing that a variable interval is basically an average amount of time before the reinforcer occurs, given a minimum and maximum amount of time that may pass. And while fixed ratio is when a person or animal is reinforced after a constant number of responses, variable ratio is where reinforcement is based on an average number of responses, with a minimum and maximum range within which the person or animal can be reinforced at random.

The scenario examples given in this text were very helpful in distinguishing the different types of reinforcement and making the terms that much more concrete!
I also found the graph showing the different response rates under each type of reinforcement to be very interesting. It put a different perspective on learning this material because it measured out the differences in responding under each schedule. The section on how response patterns differ explained that an organism reinforced on a variable ratio is likely to emit the behavior more frequently because it never really knows when it will be reinforced, while an organism reinforced on a fixed ratio responds at a steadier pace because its reinforcement is constant. Although this made sense, I am still a little unclear about the term ratio strain; more clarification or examples would be great :)

Like many of the previous posters, I too had to read this section more than once. Keeping FI, FR, VI, and VR separate is difficult. For the most part I can narrow it down to one or two right off the bat, but then I have to think hard for a second or go back and read the examples to answer the question. Like anything, with more repetition I think it will come easier. Coming up with my own examples is tough; there are such good examples in the readings that it seems like they take all the easy ones to think of. If there was any section so far in this class that I would have trouble with on a test, it would probably be this one. One of the best examples was a graph showing the cumulative number of responses over time for VR, FR, VI, and FI. It is helpful for seeing which schedule would be more reinforcing in different situations.

This is a good section. It makes me wish UNI still had an animal lab so we could test these principles on our own rats. VR is the best way to generate behaviors from test subjects; I keep that in mind when I try to modify other people's behavior. I have been over schedules of reinforcement in several of the classes I have taken, but this chapter was a very solid review.

I did not find this section as confusing as many of the previous posters did. I have covered the concepts of VI, FI, VR, and FR in other psychology classes, and it all made sense to me. It was a bit difficult to think of examples for some of the reinforcement schedules, though. Personally, I believe variable interval reinforcement works the best because the individual being reinforced does not have any idea when exactly they will receive the reinforcer. I did find some of the charts included in the section a bit confusing, simply because there were lines and labels everywhere!

I agree with everyone else that this section was a little confusing. I had to read it over a few times, and I still had trouble trying to come up with examples for the questions. The schedules of reinforcement were the hardest thing for me to grasp because I had never really been taught them before; I had trouble keeping FI, FR, VI, and VR straight. Going over some of these concepts in class helped with my confusion, and now I can say that I understand the schedules of reinforcement!
