Reading Activity 2.5 Week #7 (Due Tuesday)

Please read section 2.5.

After reading the section, think of all the terms and concepts used in sections 1.1, 1.2, 1.3, 1.4, 1.5, 2.1, 2.2, 2.3, 2.4, and 2.5, and try to use those terms and concepts as you respond to the following questions.

What did you like about the section? How does it relate to the sections you have covered so far? What are three things you will remember from these sections? What if anything would you like me to be sure and go over in class when we meet?

Do you think schedules of reinforcement can be applied to the real world? How so, or why not?

Include a list of the terms and concepts you used in your post. (example - Terms: positive reinforcer, extinction, reinforcer, discriminative stimulus...)

23 Comments

Schedules of reinforcement have always been confusing to me. Something I learned before tests but never really understood. While I think I have a solid grasp on it now, I would like it to be something we cover in class, even just generally, to make sure I am thinking the right thing.
This section relates to things we have already learned by putting a name on them. Continuous reinforcement is when we are reinforced every time the behavior occurs. These schedules are used when first using behavior modification in order to get a behavior under control. The rest of the chapter also related to reinforcement and schedules of reinforcement, which are a part of biological continuity. Biological continuity is the belief that all living things can be arranged according to their complexity. From this we formed reinforcement schedules, to help in learning.

There are four main categories that I will remember after this chapter. They fall under one big category called intermittent reinforcement. This is when reinforcement happens, but not all the time. The first of these categories is the ratio schedule; this refers to being reinforced after a specific number of tries. A ratio is also known as a fraction, such as 1/10: for one out of every ten tries (or an average of one out of ten), reinforcement is achieved. The category paired with ratio is interval, which allows for a specified amount of time to pass before reinforcement occurs.

The other two connected categories are fixed and variable schedules. Fixed schedules mean that something happens every so many times or minutes. Every fifteen minutes would mean fixed interval; every fifteen tries would be fixed ratio. Variable schedules mean averages. An average of every fifteen minutes refers to variable interval, and an average of fifteen tries refers to variable ratio.

These two pairs combine into variable interval, variable ratio, fixed interval, and fixed ratio, indicated by VI, VR, FI, and FR, with CR for continuous reinforcement.
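As a side note for anyone who thinks in code, the four intermittent schedules can be sketched as a tiny simulation. This is purely illustrative Python of my own (the function and its names are not from the reading), treating each call as one response (for ratio schedules) or one elapsed time unit (for interval schedules):

```python
import random

def make_schedule(kind, n):
    """Return a function that is called once per response (ratio) or once per
    time unit (interval) and reports whether a reinforcer is delivered.
    `kind` is one of "FR", "VR", "FI", "VI"; `n` is the count of responses or
    time units (for variable schedules, the average count)."""
    state = {"count": 0,
             "target": n if kind in ("FR", "FI") else random.randint(1, 2 * n - 1)}

    def respond():
        state["count"] += 1                      # one more response / time unit
        if state["count"] >= state["target"]:
            state["count"] = 0
            # fixed schedules keep the same target; variable schedules draw a
            # new one that averages out to n over many reinforcements
            if kind in ("VR", "VI"):
                state["target"] = random.randint(1, 2 * n - 1)
            return True                          # reinforcer delivered
        return False
    return respond

# FR5: every 5th response is reinforced, no exceptions
fr5 = make_schedule("FR", 5)
print([fr5() for _ in range(10)])
# -> [False, False, False, False, True, False, False, False, False, True]
```

A VR5 schedule built the same way would still average one reinforcer per five responses, but the exact response that pays off would be unpredictable, which is the whole difference between the fixed and variable columns.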

I think these can all be applied to the real world. For example, our readings are checked arbitrarily, not every class, so this is variable ratio reinforcement, because on average every so many class periods our attempts to fill out our chapter are reinforced. An example of variable interval would be when I used to practice my piano lessons: for an average of every two or three hours of practicing, I would fully learn a song. My roommate has all sorts of trouble trying to lock her door; it takes four tries, and (she believes superstitiously) her putting her foot on the door, before it will lock from the inside. This is a fixed ratio example. Every week I make enchiladas for my roommates and me. I get the recipe from the family cookbook, and it says that it has to stay in the oven for 30 minutes before it is done (and by then it smells pretty good, if I do say so myself!), so my roommates and I are reinforced with my favorite dinner.

Terms used: Superstitious, reinforcement, continuous reinforcement, variable interval, variable ratio, fixed ratio, fixed interval, intermittent reinforcement, biological continuity

This section appealed to me because it explained the schedules of reinforcement clearly. These terms tend to confuse me, mainly because of how similar they are and how easily they can be mixed up. The 2x2 contingency table used to demonstrate the terms fixed ratio, fixed interval, variable ratio, and variable interval was also useful.

This section relates to the sections we have already done because it extends our previous knowledge of reinforcement by relating it to the real world, since we are seldom reinforced every time we emit a certain behavior. Also, this chapter relates to our previous sections because it is easier to manipulate a person’s behavior by using schedules of reinforcement rather than just reinforcement. If a person is reinforced every time after a certain behavior is emitted, it is called continuous reinforcement. After presenting the term continuous reinforcement, the terms fixed and variable reinforcement were introduced, which I feel explains the idea in a more precise way.

It is clear that the main things I will remember are fixed and variable schedules, and ratio and interval schedules. A fixed schedule is a fixed amount held constant, as the name suggests; a paycheck every two weeks would be an example. An interval schedule is based on the amount of time that passes before the reinforcement. A ratio schedule is based on numbers, such as how many times it takes you to do something. A variable schedule is basically an average, such as the average number of pulls it takes a slot machine to pay out.

After reading this section, I began thinking to myself of how these terms can be used in the real world. It then came to my realization that it is easy to think of examples because they occur more often than I initially thought. For example, going to the gym and then seeing myself lose weight is an example of a variable ratio, because the number of workouts it takes before I see results varies. Also, the number of times I have to flick a lighter to light a candle is a variable ratio as well, because it takes a varying few flicks to actually get a flame.

Terms used: fixed ratio, fixed interval, variable ratio, variable interval, continuous reinforcement, emit

This section really took the old reinforcement concept and broke it down into different schedules. Obviously we are reinforced differently depending on the situation, and it is important to know how reinforcement works in order to modify our behavior or others' to get that desired reinforcer. I liked this section a lot because it forced us to relate every new concept to the real world. This helps me really think about how the different reinforcements work and why they fall into the separate schedules.
One thing I will take away with me from reading this chapter is the idea of a variable ratio schedule. An example of this would be slot machines. Although people are reinforced essentially at random, for some reason this causes them to want to continue the behavior even more. It seems as though, since people aren't always reinforced, they would get frustrated and want to stop the behavior, but instead it keeps them interested and continuing the behavior at the possible chance they will receive a reinforcer.
Another part of the section that seemed to stick out to me was the difference between ratio and interval. They both have to do with the point at which a behavior is reinforced. They are very close, and the difference is merely "time" vs. "times." Intervals deal with amounts of time, such as minutes, seconds, etc., while ratios deal with times in the sense of number of attempts. They are slight differences, but they make a huge difference in the outcome and definition of certain schedules.
The final thing I will remember from section 2.5 is the fact that, when you really think about it, every behavior we do is the result of some schedule of reinforcement. We are always altering our behavior if part of a schedule is changed. We put new batteries in our remote if the continuous reinforcement of the TV turning on when we press power no longer occurs. If we weren't trying to achieve reinforcement of some sort, nobody would care about doing anything.

Terms: reinforcement, schedules of reinforcement, variable ratio, ratio, continuous reinforcement

In section 2.5 the thing that confused me the most was the graphs showing VI, FI, etc. One thing that I will remember from the reading is the basic breakdown of schedules of reinforcement. There are two main pairs: fixed ratio and fixed interval, and then variable ratio and variable interval. How I see it, fixed ratio means that after a certain number of times the action is emitted, the behavior will be reinforced. Variable ratio is also about the number of times, only "on average" rather than a set count. Fixed interval shows payout after a set period of time, and variable interval pays out after an average period of time. Again, after saying and typing it out... I am still getting a little confused.
Another thing I will remember, but not fully understand, is all of the info with the lines and graphs. Less of a slope means responding happens at a slower rate, so learning takes longer. When the schedule is fixed, the graph shows a steady rate of reinforcement. Got that much!
The third thing I will remember is the schedule of reinforcement presented at the beginning of the section. I really understand the difference between continuous reinforcement, and intermittent reinforcement. The things that could be considered continuous reinforcement could at any time be intermittent. I am not sure if anything could be considered fully dependable. There is always something that could either be off, or undependable.
This section overall relates to the previous sections by going further into the restraints and qualities of reinforcement.

Terms: VI, FI, schedule of reinforcement, fixed ratio, fixed interval, variable ratio, variable interval, reinforcement, continuous reinforcement, intermittent reinforcement.

Reinforcing schedules like ratio/interval have kind of been confusing for me in the past, just because of how they were explained to me. This section did a good job of explaining them; I understood it very well and have a good handle on what the terms mean and how they are different.

It relates to the sections we've covered so far by showing that there's more than just continuous reinforcement. Oftentimes in psychology classes when we talk about behavior, the primary focus is on continuous reinforcement, but this section shows that that's not always the case. It explained how we have reinforcement that is based on intervals and ratios, and how those schedules are just as important to forming behaviors as continuous reinforcement. It explained all of the terms well and applied them to everyday life to make them easier to understand.

One thing I'll remember is how the section did an excellent job of explaining the difference between ratio and interval. I also found the table useful that helped you figure out whether something was fixed interval, fixed ratio, etc. Another thing that I'll remember is the very helpful examples it gives and how it applied them to help with understanding.

terms: ratio, interval, reinforcement, fixed, variable

What I liked about this section is that it breaks down schedules of reinforcement and makes us think of examples to differentiate between them. It relates to other sections because we have been learning about reinforcement and what elicits a person's behavior; this section introduces the specific contexts we have to look at.

Three things that I will remember from this section are:

1. Ratio Schedule - you do something X number of times to get the reinforcement.
Interval Schedule - a certain amount of time passes before you get reinforced.

2. Fixed Schedule - the amount of time (or number of responses) it takes to get the reinforcement is held constant, for example a paycheck from work (every 2 weeks on a Thursday).
Variable Schedule - the amount of emitted behavior required varies around an average for the same reinforcement; for example, a hamster needs to push a lever for food, and sometimes he gets the food after pushing it 2 times, sometimes 4, sometimes only once.

3. The importance of time -
Constant responding - if you are under the control of some variable, you are responding all the time.
Post-reinforcement pause - after the reinforcement has been delivered, there is a break or a pause.


Schedules of reinforcement can be applied to the real world. When we look at a certain behavior occurring either with ourselves, with others that we associate with or organisms in nature, schedules of reinforcement gives us the ability to break down the occurrences and analyze what, when, and to whom something is happening.

Terms Used: Ratio, Interval, Fixed, Variable, Emit, Reinforcement,Constant, Post-reinforcement pause

This section covered a pretty large amount of new information, which was nice in a way, to hear something other than the usual, but it did take some more effort to think about the material and break it down. It relates to the reinforcement schedule and the effectiveness of the reinforcer. The three things that I will remember from this particular section are: 1) continuous reinforcement schedules are necessary when reinforcing the target behavior in the beginning, and they constitute a majority of everyday behaviors, such as pressing an elevator button to use the elevator; 2) intermittent reinforcement does not always lead to extinction, and there are several types based on time (interval), the number of times (ratio), and either the same rate every time (fixed) or variation around an average (variable); 3) different types of reinforcement schedules produce different response rates.

I think that schedules of reinforcement can be applied to the real world in some situations; intermittent reinforcement has a slower extinction rate and is more practical. Think about a production-line worker who gets paid a bonus for every X parts completed: that is more reinforcing than other forms.

Terms used: continuous reinforcement, interval, elicit, target behavior, reinforcement procedure, reinforcer, variable, fixed rate

What I liked about this section was learning about continuous reinforcement and intermittent reinforcement. You don’t really think about it, but there’s quite a big difference between how you react to the two kinds of reinforcers.

It relates to the previous sections because we’ve continuously been learning about reinforcement and the different types. This section just adds on to that information. There are actually four things I’ll remember from this section and that is fixed interval, fixed ratio, variable interval, and variable ratio. I’ll remember them so well not only because that’s what the whole section was about but because it’s just another piece of the puzzle of behavior modification. To understand these four variables, I found the table to be really helpful.

I would like to go over more real world examples in class to help better understand these concepts. I did have some trouble applying these concepts but the examples at the end of the section helped so yes, I think they can be applied to the real world. They can be applied because it makes receiving a reinforcer easier by knowing what type of schedule to emit a certain behavior in.

Terms: continuous reinforcement, intermittent reinforcement, reinforcers, fixed/variable interval, fixed/variable ratio, behavior modification, emit

This section had a lot of new information. The graphs were really helpful and so were the paragraphs devoted to examples of the new terms. The real world examples at the end showed that Schedules of Reinforcement are present in our lives quite often; for example, most people are reinforced for working by earning a paycheck every 2 weeks or so.

From this section, I will remember the difference between Fixed Ratio and Fixed Interval because of the examples given. I’ll also be able to remember what the term operant chamber refers to because it is exactly what it sounds like. The third thing from this section I will be able to easily remember is the real world example of a Schedule of Reinforcement pertaining to the bartender. When I first started reading the section, I was really wondering what an example of “continuous reinforcement” would be, because it seems unlikely that a behavior can be reinforced every single time it is performed. But the section gave the example of pressing the “a” key on the keyboard. All of the examples given in the reading seemed to pertain to technology, like computers or TVs.

Terms used: fixed ratio, fixed interval, continuous reinforcement, schedule of reinforcement, operant chamber

Upon reading this section, I skimmed through it to see what sort of journey I was about to embark on. Needless to say, I was intimidated. It looked like a lot of confusing behaviorist code that I wasn't ready for. However, this chapter was well organized and delivered at a slow enough pace for us to understand. I liked the fact that I was able to translate all of that behaviorist jargon by the end of the chapter!

This section relates mostly in particular to the sections about reinforcement. The authors described individual schedules in which reinforcement can be administered. Some of these schedules such as Fixed Ratio or Fixed Interval were subject to extinction, another section we've looked at.

I think I will remember what I am supposed to remember in this section. A variable ratio is a schedule on which a reinforcer is delivered after an uneven number of responses that averages out to a standard number. A fixed ratio schedule is when a reinforcer is administered after the behavior is emitted the same number of times. For example, if Bobby emits a room-cleaning behavior five times, his mom will reinforce him with a slice of pie. Interval schedules of reinforcement are time oriented: no matter how many times the behavior is emitted, the reinforcement won't arrive any faster; the time is either fixed or varies around an average.

I feel as though I understand this section pretty well. As always, one thing that gave me trouble was coming up with examples for all of the schedules.

Schedules of reinforcement can undoubtedly be utilized in the real world. As mentioned, we are not trapped in operant chambers our entire lives, and our every move isn't being monitored by anyone. Introducing the schedules to our knowledge of reinforcement, in a sense, makes these terms MORE applicable to the real world.

Terms: emit, variable ratio, fixed ratio, variable interval, fixed interval, extinction, reinforcement, schedule of reinforcement

This section contained a lot of new information. Even though I read the material twice, I think it might take me awhile to get a thorough understanding of it. This section went into further detail of reinforcement. I like learning about reinforcement, since it typically is the most effective method of behavior modification. I like being able to take what I am learning in class and apply it to everyday life.

The three things I will remember (with practice and review) are:

1. The differences between continuous and intermittent reinforcement. Continuous reinforcement occurs when we are reinforced each and every time a behavior occurs. This isn't always the most effective method. Intermittent reinforcement occurs when a behavior is reinforced only once in a while.

2. The different types of intermittent reinforcement. Fixed ratio (FR) delivers reinforcement after a fixed (set) number of responses. Fixed interval (FI) occurs when reinforcement is given after a constant period of time. Variable ratio (VR) schedules occur when reinforcement is given after approximately a certain number of responses; there is less predictability of behavior with this schedule. Variable interval (VI) occurs when reinforcement is given on average around a certain time (for example 1 hour - it could be 45 minutes or it could be 1 hour and 5 minutes).

3. Variable Ratio (VR) schedules elicit the fastest learning, whereas Fixed Interval (FI) schedules elicit slower learning.

I had trouble understanding ratio strain, so it would be beneficial to me if you went over it in class. I think that schedules of reinforcement can be applied to everyday life. Once I am more comfortable with these concepts, I will be able to come up with examples more easily. I related to the bartender example though, because I work in a restaurant. It became more interesting to me when I could relate it to my own life.

Terms: continuous reinforcement, intermittent reinforcement, reinforced, fixed ratio schedule, fixed interval schedule, variable ratio schedule, variable interval schedule, elicit, ratio strain

In this section I liked that the examples the text asked of us didn’t leave us hanging. This section was followed in the next one by clear examples and a key. I really liked this because I was trying hard to actually understand the material and not just fly through the answers.

This relates heavily to the other sections of the text in that it dives much deeper into the schedules on which reinforcement occurs. Things continually get broken down to more specifically explain behaviors and to find target behaviors that we can then strengthen through reinforcement.
From this section I will remember the differences in reinforcement schedules. I remember them best through the examples in the text. A fixed interval can be described by someone baking muffins for 17 minutes, because any longer and they would burn. A variable interval can be described by the starting of an old car: sometimes it would take several minutes, and other times it would only take one try. A fixed ratio can be described as needing to press a lawn mower's button 5 times before it would start; it always had to be pushed exactly 5 times. Finally, a variable ratio was described by someone with no internet access at home: because he uses a dial-up connection, it takes a varying number of tries before he gets on the internet. Second, I will remember the explanation that continuous reinforcement refers to getting reinforced every time a behavior is emitted. Thirdly, I will remember that the ratio tells us the number of behaviors that are emitted, while the interval refers to the amount of time that has passed. These are all very specific things that are good to have a solid knowledge of in order to further break down behaviors.

I think schedules of reinforcement can definitely be used in the real world. They help key in on a behavior and really break it down. The examples above also show how they are related to very real life events. Reinforcement is a powerful tool for us to use when modifying behavior, and I like learning about it more than I do learning about punishment, because it brings about positive, not aversive, ways of dealing with people in our everyday lives.

Terms: reinforcement, emit, behavior, target behavior, variable interval, fixed interval, variable ratio, fixed ratio, continuous reinforcement, intervals, punishment, positive, aversive.

This section related to the others we have read because it was all about reinforcement. We have already studied a lot about reinforcement and its effects, but now we are getting into the schedules of reinforcement and how to properly proceed with reinforcing someone. I liked this section because I enjoy the idea of reinforcement and learning more about it. I also find it interesting that a reinforcement schedule works like this. It is interesting that organisms enjoy being reinforced enough that they will continue a behavior that is not being reinforced in the hope that it will be reinforced again sometime.

The three things that I will remember are:
Continuous reinforcement, which is when we are reinforced every time we emit a certain behavior, such as typing on a computer: every time we hit a key, a letter pops up.
Then the intermittent reinforcement schedules:
Ratio, which refers to a certain number of times you must do something before you are reinforced. I must hit the button on my old TV 10 times before it will turn on; that is an example of a ratio schedule.
Interval, which refers to the amount of time between reinforcements, such as the amount of time it takes your popcorn to pop in the microwave.

I think that reinforcement can absolutely be used in the real world and is whether we realize it or not. If you do a good job at work you might get a raise. Or if you do well in college you may get a scholarship. If everyone used reinforcement it would give people incentive to work towards something and do the right thing rather than to avoid punishment.

I liked the new information in this section. It was nice to add some newer concepts into the readings that aren’t extremely similar to what we have previously studied. It relates to the sections we have previously covered because we are still dealing with reinforcement concepts. Also, it follows the same style in which we have been learning new material and concepts.

The things I will remember from this section are the different types of reinforcement schedules. I’ll remember this because they were the focus of the reading. Also, there was a fair amount of repetition in this section and I think this was because the concepts of reinforcement schedules can get somewhat confusing and difficult at first. These reinforcement schedules show that reinforcement is not always administered every time one emits the target behavior. I think these are used somewhat more often in everyday life, because I definitely do not see people being reinforced every time that something is done correctly.

I would just like to continue going over the different reinforcement schedules. I am still having a difficult time learning them and coming up with examples and explanations as to why a specific example is or isn’t one of the four we covered.

terms: reinforcement schedule, reinforcement, emit, target behavior, reinforced.

I felt that the section clearly defined each new term. I appreciate the opportunity to apply my learning one step at a time. I think that the other chapters caused me to observe the world around me much more behaviorally, and these new terms allow me to challenge concepts of time and number in terms of behavior.

First of all, I feel that it was interesting to see the schedules in real life. Some examples were easy, especially fixed interval: the sun always rises and sets every 24 hours. On the other hand, a variable interval would be like the time waited in traffic each day; sometimes traffic is fine, and other times it is backed up really badly. In thinking about child rearing, I find that it would be important to be consistent in my reinforcement of desirable behaviors. This is something I thought about due to the beginning of the section. Finally, I found it worth noting that there can be several schedules occurring at once: the number of cars in traffic (VR) affects the time spent in traffic (VI).

Terms: schedules, fixed interval, variable interval, constant reinforcement, variable ratio.

This section not only had a lot of new information about reinforcement but also expanded my knowledge of it. I especially liked the concept of continuous reinforcement. I could find some examples of continuous reinforcement easily. I did not recognize my own behavior related to continuous reinforcement before reading this chapter.

This chapter relates to our previous chapters on reinforcement and its diverse types. There are three things I will remember. The first is the difference between intermittent reinforcement and continuous reinforcement, and the fact that using intermittent reinforcement is more effective than continuous reinforcement. The second is the difference between variable ratio and fixed interval. These were unfamiliar concepts to me, so I was confused; I have to remember that variable ratio elicits faster learning, while fixed interval elicits slower learning. Lastly, I will remember the many examples in this section. These examples will help me understand the new concepts.

I would like to go over how schedules of reinforcement can be applied to the real world.

Terms: continuous reinforcement, intermittent reinforcement, elicit, variable ratio, fixed interval

This section is a doozy. It is definitely more difficult to take in and understand than previous sections. The terms are a little bit confusing (variable ratio vs. variable interval, etc.). It will take another read and some high-class lecturing from you, Dr. O, to fully grasp these terms and concepts.

This section obviously relates to previous ones because it discusses reinforcement. Only this time, we are talking about schedules of reinforcement. Does a behavior receive reinforcement every time it is emitted, every once in a while, after a certain number of times, or after a period of time? This is the idea behind continuous reinforcement or intermittent reinforcement. There are also terms used to describe the types of intermittent reinforcement. They are: fixed, variable, ratio, and interval.

I will remember that it will take some work to know these like you want me to know them. But for now, I know that reinforcing behaviors takes on a schedule, and that schedule differs depending on how many times, or when, the behavior is reinforced. I also know that when a behavior is reinforced after a set number of times of being emitted, that is called a fixed ratio schedule of reinforcement. Lastly, I understood the notation used to represent the schedules. I now know that FR20 means the reinforcement will occur on every 20th emission of the behavior, and that it is a fixed ratio.

I believe that these can be applied to the real world. You said it in the book. The examples prove that this occurs in real life, and many of us are not aware.

Terms Used: variable ratio, variable interval, reinforcement, schedule of reinforcement, emit, intermittent reinforcement, continuous reinforcement, fixed ratio, FR20, behavior

Well, we're getting into the thick of it. I said bring it on. Then I read the chapter. Touché, good sir. Touché. This chapter definitely builds on previous themes, and it is a tad more difficult to grasp, though I say this with full confidence that the lecture will help me to grasp the new terms and usage.

This section continues with the reinforcing theme of reinforcement that was previously delved into, this time focusing on alternatives to continuous reinforcement. Schedules of reinforcement distinguish between continuous reinforcement and intermittent reinforcement. Intermittent reinforcement, unlike continuous reinforcement, can happen in a few different ways. These variations are fixed, variable, ratio, and interval schedules.

One thing that I will remember is that reinforcement functions on different cycles based on the number of times that reinforcement is given for a given behavior. A fixed ratio schedule of reinforcement depends on the number of times a behavior must happen before it is reinforced. A particular notation goes along with the fixed ratio schedule: if a behavior is reinforced every 7 times it happens, this is written as FR7.

I believe that all behavior modification can be utilized in the real world. Some subjects will respond better to certain schedules of reinforcement as opposed to others, just as some subjects will respond better to punishment rather than reinforcement.

Terms Used: Reinforcement, Schedules of Reinforcement, Intermittent Reinforcement, Continuous Reinforcement, Fixed Ratio, Fixed Interval, Variable Ratio, Variable Interval, FR7

I liked the abundance of examples in this section and the opportunities to come up with examples of my own – hopefully I did them correctly! This section relates most obviously to the reinforcement section, considering it is all about reinforcement. Schedules of reinforcement are used all the time, often without our knowledge or realization.

The first thing that caught my attention was that behaviors do not need to be continually reinforced. Obviously, if reinforcement ceases altogether, the behavior will undergo extinction, but constant reinforcement is not always necessary. For example, I don’t tell my roommate thank you every time she does something for me, but she continually emits the behavior regardless because I do intermittently reinforce her.

I have heard/learned about these schedules many times, but I have never been taught to think of ratio in terms of the number of behaviors and interval as the amount of time passing between reinforcements. To me, these definitions are easier to remember, and I’m hoping they will help me, because no matter how many times I have been taught these schedules, I never internalize them! In a lot of my classes, I learned them just well enough to get by on the test and barely brushed over them because they were never the central information on the test, making them less important to me. In this class, though, they are very important and will probably be seen over and over again.

Lastly, I thought that the response rates of each type of schedule were interesting. Variable ratio schedules elicit higher response rates than all the other schedules, whereas fixed interval schedules elicit slower responding but nonetheless eventually lead to reinforcement. Fixed ratio schedules also have a good response rate, followed by variable interval schedules, which have a decent response rate. To my understanding, the effectiveness of the schedules goes like this:

1. Variable ratio
2. Fixed ratio
3. Variable interval
4. Fixed interval

It is obvious that these schedules of reinforcement are very applicable to the real world because they are used all the time (and we wouldn’t have to learn about them if they weren’t important!). Casinos are a classic example of how these schedules are used in the real world because their business relies on people being reinforced, whether on a ratio/interval or fixed/variable schedule. Without being reinforced, gamblers won’t come back; however, the business cannot thrive if it is continually doling out reinforcers!

Honestly, I wouldn’t object to going over all of the examples in the text in class to double check my work. To me, these schedules are confusing and I would not be too surprised if I made mistakes, not to mention I had a hard time coming up with some examples!

Terms used: schedules of reinforcement, reinforcement, reinforced, extinct, emit, intermittently, ratio, interval, variable ratio, elicit, fixed interval, variable interval

I liked that this section brought a lot of new ideas to our behavior modification vocab. It dug a lot deeper into reinforcement theories, which is something I really like the idea of learning more about. Continuous reinforcement is a very easy concept to grasp, so being able to learn about ratios and intervals was nice. It gives me more of an understanding of how to describe what goes on within every day reinforcing occurrences.

This section related to previous sections in that it ties together a lot of ideas involving reinforcement. It digs deeper into those ideas, like I said above, and gives further knowledge on how schedules of reinforcement tend to work in the real world.

Three things I will remember:
1. Continuous reinforcement refers to getting reinforced every time a behavior is emitted.
2. Ratio = number of times a behavior is emitted; interval = amount of time between each reinforcement period.
3. Fixed = set amount of behaviors/time; variable = average amount.

I definitely would like to go over quite a few examples of FR, FI, VR, and VI. I understand them all fully well, but the more real life examples I can wrap my mind around, the better.

I definitely believe schedules of reinforcement can be applied to the real world. It's pretty obvious just from reading this section that it has a ton of real-world ties, all the way down to work and pay and how we earn money, the quintessential secondary reinforcer. So many things in life revolve around schedules of reinforcement.

terms- schedules of reinforcement, reinforcer, reinforcement, FR, FI, VR, VI, fixed/variable, ratio/interval, continuous reinforcement, emitted, behavior modification

In this particular section I liked how the answers to the reading activities were given at the end. It was difficult for me in the previous chapters to know whether or not I was actually understanding what was going on because I had no feedback on my responses. This time I was given an idea of what I did and didn't quite understand. This section is related to what we have already learned in that it adds to what we previously knew about reinforcement. Rather than just having a general idea of what reinforcement is, we now know details about its different types.

Three things I will remember from this chapter are the different types of reinforcement schedules.
1. Continuous reinforcement, which is when you are reinforced every time you emit a behavior, such as when you are typing: you are reinforced every time you push a key.
2. Fixed or variable reinforcement. Fixed is when the requirement for reinforcement is held constant; for example, for every 15 times a particular behavior is emitted, you will be reinforced. Variable schedules typically have a set minimum and a set maximum, and reinforcement is given on an average. Variable schedules provide less predictability.
3. Ratio or interval. Ratio is contingent on the number of responses required, meaning you must emit a certain behavior a certain number of times in order to receive the reinforcer. Interval is contingent on a certain amount of time: reinforcement depends on the amount of time passed.
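To make the ratio and fixed/variable distinctions above concrete, here is a minimal Python sketch (purely illustrative; the function names and numbers are my own, not from the text). A fixed ratio reinforces on exactly every n-th response, while a variable ratio reinforces after a number of responses that only averages n:

```python
import random

def fixed_ratio(n):
    """FRn: deliver a reinforcer on every n-th emitted response."""
    count = 0
    def respond():
        nonlocal count
        count += 1
        if count == n:
            count = 0
            return True   # reinforced
        return False      # not reinforced
    return respond

def variable_ratio(avg, spread=2):
    """VRavg: reinforce after a random number of responses,
    uniform between avg-spread and avg+spread, averaging avg."""
    state = {"target": random.randint(avg - spread, avg + spread), "count": 0}
    def respond():
        state["count"] += 1
        if state["count"] >= state["target"]:
            state["count"] = 0
            state["target"] = random.randint(avg - spread, avg + spread)
            return True
        return False
    return respond

# An FR3 schedule reinforces exactly every third response:
press = fixed_ratio(3)
print([press() for _ in range(6)])   # [False, False, True, False, False, True]
```

The same idea extends to interval schedules by tracking elapsed time instead of counting responses.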

I do think that schedules of reinforcement can be applied to the real world because we are influenced by reinforcement all the time. After reading the section and actually understanding what the schedules of reinforcement actually are I'm not sure how someone could say that it isn't applicable.

Terms: Schedules of reinforcement, reinforcement, reinforcer, reinforced, Ratio, Interval, Fixed, Variable, Continuous reinforcement, emit

There are two main things I really enjoyed about this section. The first is that under some of the places where we need to provide a response, answers were given. This is positive reinforcement because answers are added contingent on the target behavior of students responding to the text example boxes; even if we get it wrong, we still learn what the correct response is at that time and don't have to continue reading while confused. The other thing I really liked was the great detail with which the schedules of reinforcement were broken down. I really enjoyed that a definition was given and examples were explained, and then you went deeper by giving graphs and explaining the symbols (letters) associated with each type. These two stand out; however, there are other things that were different, I believe, for the better. This section allowed for more reflection, which was nice because it gives us a chance to see what we need to work on. Also, asking for similar example subjects but written in different ways was very helpful.

It related to the sections we read previously by following up on reinforcement, the increased frequency of an emitted behavior. We also learned that depending on the schedule of reinforcement, different response patterns are elicited. For example, after an organism is reinforced, there will be a break period in responding until it gets closer to the next reinforcement. This occurs in a fixed interval schedule, in which a reinforcer is given after a specific amount of time passes.
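The fixed interval pattern just described (a post-reinforcement break, then reinforcement for the first response after the interval elapses) can be sketched as a small, purely illustrative Python function; the name and the 10-second interval are my own assumptions, not from the text:

```python
def fixed_interval(seconds):
    """FI schedule: the first response emitted after `seconds` have elapsed
    since the last reinforcer is reinforced; earlier responses are not."""
    last = [0.0]                      # time of the last reinforcer
    def respond(now):
        if now - last[0] >= seconds:  # interval has elapsed
            last[0] = now             # start the next interval
            return True               # reinforced
        return False                  # still in the break period
    return respond

fi = fixed_interval(10)
print([fi(t) for t in (2, 5, 11, 12, 25)])
# → [False, False, True, False, True]
# Responses at t=2 and t=5 fall inside the interval; t=11 is reinforced,
# t=12 is not (a new interval started at t=11), and t=25 is reinforced again.
```

Passing the clock in as an argument (rather than reading real time) keeps the sketch deterministic and easy to check.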

Three things I will remember from this chapter are the difference between ratio and interval schedules of reinforcement, the difference between variable and fixed schedules of reinforcement, and that there are continuous reinforcers. Ratio schedules are based on the number of target behaviors that must be emitted before a reinforcer is given, whereas interval schedules are those where an amount of time must pass before the next behavior that is emitted is reinforced. Variable, contrary to what I previously thought, refers to the average amount of time or number of behaviors that passes before a behavior is reinforced. Fixed is when a set number of behaviors or amount of time must pass before a reinforcer is provided. I knew continuous reinforcers occurred; however, I never knew how apparent they were. Every time one walks outside on a nice day, the sun reinforces the way they are dressed; or when someone is tickled, it makes them laugh.

I do think there are many instances where the schedules of reinforcement can be applied to the real world. For example, buying and scratching lottery tickets in hopes of winning is actually a variable ratio schedule of reinforcement, because only after an average number of tickets bought (responses) is there a winning ticket, yet people still buy them even though it's not continuous reinforcement. An example of a fixed interval would be a worker receiving a paycheck every two weeks: a specific amount of time passes and a reinforcer is administered. A variable interval example would be checking your phone for a reply that arrives after an unpredictable amount of time. Switching gears to an example of a fixed ratio: getting a free coffee after buying 5 cups of coffee. After a specific number of emitted behaviors (buying coffee), a reinforcer is given (a free coffee). Lastly, a variable ratio is playing a slot machine in the casino. Obviously they can't set it so that after every 30 pulls there will be a winner, because people would catch on. However, one time it may be 20 pulls, the next it may be 45, then 40, then 31, and so on; on average, a reinforcer is given after a certain number of emitted behaviors.

The only thing I have a question on is whether schedules of reinforcement could work as schedules of punishment. I know Skinner believed people learn better when they are reinforced rather than punished, but I feel that if variable interval punishment is delivered contingent on a behavior, the behavior will quickly stop.

Terms: reinforcement, punishment, variable, fixed, ratio, interval, target behavior, emit, elicit, reinforcer
