Reading Activity Week #6 (Due Monday)


Please read sections 2.5 and the handout.

Please respond to the following questions, using the terms and concepts from the current sections as well as those you have learned so far.

Which section did you like the most? Why? Which section did you like the least? Why? What do you think is the most useful piece of information from section 2.5? Why? Most useful from the handout? Why?

Prior to reading these sections, what did you think about behavior modification?  Why? What are three things you will remember from what you read in the sections? Why? How has reading the sections changed what you originally thought about behavior modification? How so?

Finally, indicate two topics or concepts that you would like me to cover in more depth in class.

Include a list of the terms and concepts you used in your post. (example - Terms: positive reinforcer, extinction, reinforcer, discriminative stimulus...)

26 Comments

Since we only had one section to read, 2.5 would have to be the most interesting section we had to read. There was a lot of useful information in this section and lots of new terms that are quite useful. I believe that the most important information to take from this section is the concept of continuous reinforcement. Continuous reinforcement is a ratio schedule where each response is reinforced. With continuous reinforcement, each time we emit a behavior we get reinforced with something. Continuous reinforcement helps to get behaviors under stimulus control. It is also important to remember that reinforcement can also be intermittent.
I actually have four things that I will remember from this chapter, and those are the classifications of intermittent reinforcement. Those four types of reinforcement are fixed or variable and ratio or interval. A ratio schedule is one where a certain number of responses is required for reinforcement. When we think of ratio, we should think of numbers. Interval reinforcement is when the reinforcement comes after an interval of time. A certain amount of time has to pass before the reinforcement is given. In a fixed schedule of reinforcement, there is a certain number or amount that is held constant before the reinforcement is given. Variable schedules have a set minimum and a set maximum (range) until the subject is reinforced. These classifications create four types of schedules: fixed ratio, variable ratio, fixed interval, and finally variable interval. Fixed ratio reinforces after a fixed number of responses. Variable ratio reinforces after a varying number of responses. Fixed interval schedules pay out after a fixed or constant period of time. Variable interval schedules pay out within a range of time; there is a minimum and maximum time range.
This section has really helped me to understand the many different types of ways a person can reinforce. All these schedules are very effective and can help control a behavior.
Terms: continuous reinforcement, fixed, ratio, interval, variable, fixed ratio, variable ratio, fixed interval, variable interval, reinforcement
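The ratio schedules described above can be sketched as a short simulation. This is just a hypothetical illustration, not anything from the reading; the function names and the FR-5 value are my own assumptions:

```python
import random

def fixed_ratio(num_responses, n):
    """FR-n: every n-th response is reinforced."""
    return [(i + 1) % n == 0 for i in range(num_responses)]

def variable_ratio(num_responses, n, rng=random):
    """VR-n: reinforcement after a varying number of responses averaging n."""
    reinforced = []
    until_next = rng.randint(1, 2 * n - 1)  # draw the next payout gap
    for _ in range(num_responses):
        until_next -= 1
        reinforced.append(until_next == 0)
        if until_next == 0:
            until_next = rng.randint(1, 2 * n - 1)  # re-draw after each payout
    return reinforced

# On an FR-5 schedule, exactly every fifth response pays out:
fr = fixed_ratio(20, 5)
print([i + 1 for i, r in enumerate(fr) if r])  # -> [5, 10, 15, 20]
```

The VR version only fixes the average gap, which matches the "keeps you guessing" quality of slot machines described in several comments below.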

I really enjoyed reading section 2.5 on schedules of reinforcement. Before reading this chapter I wondered about intermittent reinforcement, because I know that not every behavior is reinforced each and every time. I encourage my daughter to do a lot of things and try to verbally praise her when she is doing well, but there are times I don’t and she still continues to do it. I do, however, at bedtime tell her what I was proud of for that day. This is a good example of reinforcing, at the end of the day, behaviors she emitted that day, even though they were only intermittently reinforced throughout the day. I tried to come up with examples of each one of the schedules in my daily life. For a fixed ratio schedule (reinforcement happens every time after a certain number of responses) I came up with the scoop system we have for my daughter. Pleasurable behaviors lead to her getting scoops on a cotton-ball ice cream cone on the fridge, and when she gets 5 scoops, she gets a quarter. This happens for every 5 scoops she gets, and it doesn’t matter how long it takes her to get 5 scoops; it could be every day or every 3 days. For a fixed interval schedule I came up with bedtime during the week. Every night at 8:45 my daughter is in bed. She knows when it is getting closer to her bedtime and knows what the clock looks like at 8:30 and at bedtime. This is something that we do every night at the same time: crawl into bed, read a story, and talk about the day we had and what is coming up. This happens each night throughout the week and is expected from everyone in our house. For a variable interval schedule I thought about ordering a pizza: when they tell you it will be there in 20-30 minutes, it may be there in 19 minutes sometimes, or 26 minutes, but it is usually between 20 and 30 minutes from the time we call to when it gets to our door. For a variable ratio schedule I will use the same example, ordering food.
Almost every time we order from a fast food restaurant drive-thru (we are sort of picky and have a lot of “hold the…” and “instead of…”) our order is right, but sometimes we get home, or pull away, and realize we didn’t get something or they messed up on what we asked, and we have to go back or call. This works like a variable ratio schedule because we never know which time it will happen: it may be twice in a row, and then it may not happen for ten orders after that. I would like to hear more examples in class of variable ratios.
Terms used: Intermittent reinforcement, Continuous reinforcement, Ratio schedule, interval schedule, fixed ratio, variable ratio.

I really liked section 2.5. It contained a lot of information I had not previously considered in my thinking about behavior modification so far in this class.
It is easy to understand what ratio (#), interval (time), fixed (constant), and variable (average) mean, but putting them together and seeing how they affect behavior I found interesting. The most useful information came from the chart that mapped out the schedules of reinforcement, their response rates, post-reinforcement pauses, and other features. This really helped sum up all the reading material. I found it useful to know that VR (variable ratio) will produce the greatest number of responses, meaning it elicits faster learning.
Like I said previously, I had not really pondered these things. I was just looking at which form of punishment or reinforcement worked best in eliciting the behaviors I do or do not want to see. So this helps me know that sometimes it takes a schedule to achieve desired outcomes.
I will probably remember, or rather recognize, what a graph will look like for each schedule of reinforcement. I found this to be a useful tool if I am ever evaluating a graph dealing with these concepts. I will also remember the idea that VR works best because the subject will continually be emitting the behavior, since they know it takes around a certain number of tries to get what they want, and time doesn't really matter. Also, that no matter which schedule of reinforcement you are using, they all have access to the same amount of reinforcers. Definitely the idea of intermittent reinforcement is important to remember because it is more desirable and effective. If reinforcement only has to happen once or once in a while in order to modify a behavior, it is of course ideal: faster learning!

Post-reinforcement pause and ratio strain seem very similar to me. Obviously, ratio strain deals with the number of times a behavior is emitted, but is ratio strain just a specific term for ratio schedules that go through a post-reinforcement pause?

Terms: Emit, elicit, fixed, interval, ratio, variable, VR, Ratio strain, post-reinforcement pause, reinforcement, punishment, intermittent reinforcement.

Since there really was only one section, 2.5 was the most interesting one. But even had there been another section, I still think I would have liked this section most. It was highly informative, with new knowledge that I wouldn't even have thought of, specifically the types of scheduled reinforcement: variable = average, fixed = constant, ratio = #, interval = time (VI, VR, FI, FR). Seeing how each schedule of reinforcement actually elicits a different learning response I found interesting and a key bit of information. It tells us as individual organisms how we could perform scheduled self-reinforcement.

Prior to this section, it had never occurred to me that there were different types of reinforcement scheduling. Looking at it now, I can definitely see why differing types of scheduling would result in differing behavioral responses, rather than just one type like I had first thought. The things I will remember from this reading are definitely the different types of scheduled reinforcement, how these different types of reinforcement are evident in our lives, and how these types of reinforcement have their own learning responses. Once more, as I read more and more about behavior modification, I am gradually learning just how deep the subject goes and how everyday behaviors can be broken down.

Terms: behavior modifications, reinforcement, emit, elicit, ratio, interval, fixed, variable, FR, FI, VR, VI, response, organism, self-reinforcement

Section 2.5 had a lot of new material that was interesting to read and learn about. What I enjoyed most about this section was learning about fixed interval, fixed ratio, variable interval, and variable ratio, and how we can use those techniques to change up reinforcement. It helped me understand more how we can use these techniques to do intermittent reinforcement rather than continuous reinforcement. We can’t always do continuous reinforcement; it would be next to impossible, so using those techniques is a good way to intermittently reinforce an animal or person.
The part I didn’t like that much about section 2.5 is the parts about the graph. I didn’t really get how the graphs worked and they didn’t make much sense when I tried to decipher them. I think if it was explained a little better on how each part of the graph worked I would have been able to understand it more.
Prior to reading this section I had no idea that we could use intermittent reinforcement rather than continuous reinforcement. I always thought we needed to reinforce constantly for an organism to emit a certain behavior we wanted. But through the techniques of CR, FI, FR, VR, and VI, I learned we can use these schedules to shape behavior. The example that helped me understand this was the bartender not getting tipped for every drink, but the more she pours, the more likely she will be tipped (VR).
Four things I will remember from the chapter are fixed interval, fixed ratio, variable interval, and variable ratio. I learned from these terms that we can use them for intermittent reinforcement so we don’t have to continuously reinforce an organism.
Reading these sections has changed my thoughts on behavior modification immensely. I know how to change an organism’s behavior by using reinforcement and punishment. I know how to use continuous reinforcement or intermittent reinforcement to shape an organism’s behavior. I now know how to use FR, FI, VI, and VR to shape behavior. If I want a behavior to go away I can stop reinforcing it so that extinction occurs. I know not to give in to extinction bursts, which would reinforce the behavior I don’t want. I basically understand the groundwork of how behavior modification works now.
I would like you to go over the graphs more and FI, FR, VI, VR.
Terms: FI, FR, VI, VR, reinforcement, punishment, emit, extinction, intermittent reinforcement, and continuous reinforcement.

Section 2.5 was really interesting. Everything we do is a behavior and we are constantly being reinforced, whether continuously or intermittently. Continuous reinforcement and intermittent reinforcement are easy for me to understand. With continuous reinforcement, the individual is being continuously reinforced to keep doing a certain behavior. This is important to use when trying to get a certain behavior started. Once this continuous reinforcement is stopped, extinction may occur. Intermittent reinforcement is reinforcing every now and again, such as playing a slot machine. You don't win every time, but by keeping on playing, you are likely to be reinforced after a few tries.

I think the most useful piece of information from this section was the four types of intermittent reinforcement. The four types are ratio, interval, fixed, and variable. Ratio is when a certain number of responses is required for reinforcement. An example of this is the number of times it takes to try to start your vehicle. I can easily apply this concept to my life. My truck does not usually start on the first try, so I have to try again; I never know when it will start. Interval requires a certain amount of time to pass before a reinforcement is given. An example of this is how long it takes your friend to answer your call. Fixed is exactly as it sounds: a fixed amount held constant. And variable is when it varies. I was confused on the difference between fixed ratio/fixed interval and variable ratio/variable interval, so that is something I would like to go over in more detail during class. Also, I would like to discuss the different graphs that were used in this section.

Three things that I will remember from this section are: 1. continuous reinforcement and intermittent reinforcement; 2. in order to emit a different response pattern, you need a different reinforcement schedule; 3. FR = fixed ratio, VR = variable ratio, FI = fixed interval, VI = variable interval.

Terms used: continuous reinforcement, intermittent reinforcement, extinction, ratio, interval, fixed, variable, emit

The part about section 2.5 that really struck my attention is that animals have biological continuity. This refers to the belief that all living things in the world can be arranged according to their complexity. The learning mechanisms in lower animal species are similar to the learning mechanisms in higher order animals. I found this most interesting because in my gender and anthropology class that is also directed towards psychology, we have learned about the similarities between monkeys and humans. It is interesting to know that we learn so much about our human behaviors just by studying animals that have common behaviors and mannerisms. We learn these traits about animals and then associate them with those of humans to teach us more about ourselves than what we knew.

The complexity of humans appears to be at the top of the pole, but because of this, we can easily learn from animals that may have the same thought processes as humans. Animals can’t exactly learn from humans in the sense that they get their behavior from us. I had never put any thought into this way of learning about behavior and the fact that we learn so much just from studying and observing animals. I find it interesting that monkeys can have different discriminative stimuli than humans, and it seems as though they automatically know that they should emit the appropriate behavior. I say this because in studies about monkeys, most monkeys of the same kind act in the same ways and are easy to observe based on their natural behaviors. Humans, however, have different discriminative stimuli and make different decisions based on whatever behavior they feel like emitting.

Some things I will specifically remember after reading this section are intermittent reinforcement, biological continuity, and continuous reinforcement. I would like to go over the topic I discussed today, biological continuity, because I am interested in learning more about it. The terms I used are emit and discriminative stimuli. (I will use some more next time, I promise; I just got a little sidetracked!)

So, I guess my favorite section is going to have to be 2.5. Unless I totally missed the handout? I’ve heard the fixed/variable ratios/intervals stuff before, but not in any great detail. I think we touched on it briefly in my intro class four semesters ago. I hadn’t seen the graphs of how effective each type was. I had previously thought that the effectiveness of a reinforcement schedule depended on the organism, instead of there being a general hierarchy of effectiveness. Variable ratios are the most effective form of reinforcement schedule because the organism has to actually emit the behavior for the reinforcer to occur, as opposed to sitting around waiting for any period of time for the reinforcer. This ensures that the target behavior is being emitted and learned. The variability of the reinforcer keeps the organism from showing post-reinforcement pauses before emitting the target behaviors again. Intervals simply wait for the passage of time before reinforcing, so the organism could just wait until the allotted amount of time had passed before emitting the behavior again, and would therefore learn a time element of the behavior and thus not emit it as much. With a fixed ratio or interval, the organism simply pauses before showing the behavior again in an attempt to get reinforced once a certain amount of time or number of behaviors has passed. I think those four things will stick out most, especially with the visuals to help me know the difference.

I would like to go over ratio strain in class. I would also like to talk about why the scalloped and stair-step patterns occur.

Terms: fixed, variable, ratio, interval, emit, organism, target behavior, reinforcement.

The section that I enjoyed the most was section 2.5, since it was the only reading assigned. I felt the most useful information was on intermittent reinforcement. The reading described intermittent reinforcement as when reinforcement is so powerful that the behavior emitted doesn’t need to be reinforced every time (continuous reinforcement) and will be just as effective when reinforcement occurs every once in a while. The reading mentioned that intermittent reinforcement is more desirable and effective. Prior to reading this section I believed that continuous reinforcement would be the best form of reinforcement. After completing the reading I now understand that initially, continuous reinforcement is desirable so the individual emitting the behavior can correctly identify what behavior you want them to emit, and it also establishes stimulus control. However, continuous reinforcement can become problematic when a behavior is not reinforced every time, which leads to extinction. I now understand that intermittent reinforcement is actually considered to be more effective and desirable. Four things that I will remember after reading the section are the types of intermittent reinforcement: fixed interval, variable interval, variable ratio, and fixed ratio. Ratio refers to a certain number of responses needed for reinforcement, and interval refers to a certain amount of time that needs to pass before reinforcement is given. Fixed refers to a constant amount, and variable is a varying amount. I would really like more examples of the types of intermittent reinforcement as well as explanations of the graphs.

Terms: intermittent reinforcement, continuous reinforcement, emit, fixed interval, variable interval, variable ratio, and fixed ratio.

Interval schedules require a certain amount of time to pass. The reading used an example of a mother who is always home when you call, but it's a matter of how much time passes before she answers the phone. Both the number of responses and the amount of time can be fixed or variable. FR = a fixed number of responses, VR = an average number of responses (a bartender doesn't get a tip every time he makes a drink), FI = pay out after a fixed amount of time (getting paid every two weeks), VI = pay out within a range of time (a bartender asking if you need another drink).

I like how you broke things up! I tend to get them confused on so many levels!!

I thought Section 2.5 had a lot of useful information in it. The information about the different types of reinforcement schedules was very intriguing. The different techniques that were explained on how to use the different types of reinforcement strategies proved to be valuable information for the class. We have learned the terms and processes of behavior modification, but now that we have been introduced to different techniques of reinforcement schedules and how to use them, we can actually put two and two together and form a broader concept of behavior modification as a whole. The thing that I will take away from this section is that intermittent reinforcement, depending on the schedule, can be fixed or variable on either a ratio or an interval basis, and that each schedule produces a different response pattern depending on what target behavior the researcher is looking for.

I guess I liked 2.5 the most! It had a lot of interesting information in it. I thought it was very informative to read about. There were a lot of new terms in this sections that I really found enjoyable to read about and was very helpful when learning about behavior modification. So this section helped a lot! The most useful information from this section would have to be the different terms used. Especially the terms that were stressed throughout the section, like continuous reinforcement, ratio, and interval schedules of reinforcement, and fixed, and variable schedules of reinforcement.
Prior to reading this section, I guess I never thought of all the different schedules of reinforcement. I guess I never really thought of all the different terms when relating them to behavior modification. There is just so much more than I thought that goes into behavior modification. I’ll remember the four different aspects of intermittent reinforcement, which are ratio, interval, fixed, and variable schedules of reinforcement. Ratio is when a certain number of responses is required for reinforcement, such as a door that always opens on the sixth try. Interval is when a certain amount of time passes before reinforcement is given. An example of this would be when you are waiting for your food at your favorite restaurant, and you know that at 20 minutes your food will come and you just have to wait those 20 minutes (it’s over a time period). The third one is a fixed schedule of reinforcement, which is a fixed amount held constant. Variable schedules of reinforcement are based on an average; they will keep you guessing.
Like I mentioned in another blog, I can’t believe how complicated bmod is! There is so much more to it than I originally thought, and it keeps evolving! I’m just really surprised.

The topics that I want to talk more about are the intermittent reinforcement schedules, especially the variable schedules of reinforcement!

Terms: continuous reinforcement/ratio/interval/fixed/variable/schedules of reinforcement/intermittent reinforcement

Obviously the best section was 2.5 because it was the only one we read ;) There was a lot of interesting and useful information. I've learned about the different types of schedules of reinforcement before but it was nice to go over them again to get a complete understanding.
Continuous reinforcement was a big concept within the section and it is basically when we are reinforced each time we emit a behavior. There are many real world applications for this but the one that helped me understand the most was using a specific letter on the keyboard. Every time we press the letter "L" for example, we are reinforced with the letter on the screen. If we are not continuously reinforced it becomes troublesome and usually ends up in extinction, and we'd buy a new keyboard!
As for schedules of reinforcement, there were two different types of schedules with two sub-types within them. A ratio schedule is one where there needs to be a certain number of responses before the reinforcement occurs. Within ratio schedules there may be a fixed ratio or a variable ratio. A fixed ratio means that the number of responses is set and fixed each time for the reinforcement to occur. Using the example of a slot machine, one that operated on a fixed ratio would pay out after exactly seven spins every time. The second is a variable ratio, meaning that reinforcement would occur after an average number of responses. If a slot machine worked on a variable ratio of ten, it would probably pay out anywhere from every five to fifteen spins, but on average about every ten.
The other schedule is known as an interval schedule: instead of requiring a certain number of responses to achieve a reinforcement, it requires a certain amount of time. Like the ratio schedule, it is also separated into fixed and variable intervals, meaning that reinforcement would occur after a fixed amount of time (fixed) or an average amount of time (variable).
After reading the last few sections it has opened my eyes to realizing how much there is to know about behavior modification and that it is much more complex than I realized it would be. I would probably want to go over the intermittent reinforcements just so I have them down.
Terms: intermittent reinforcement, continuous reinforcement, fixed ratio, variable ratio, fixed interval, variable interval, emit.
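A similar sketch can be made for the interval schedules described above. This is a hypothetical illustration only; the timestamps, the FI-10 value, and the rule that the clock restarts at each reinforcement are my own assumptions:

```python
import random

def fixed_interval(response_times, interval):
    """FI: the first response after `interval` time units have elapsed is
    reinforced; the clock then restarts from that reinforcement."""
    reinforced = []
    available_at = interval
    for t in sorted(response_times):
        if t >= available_at:
            reinforced.append(t)
            available_at = t + interval
    return reinforced

def variable_interval(response_times, low, high, rng=random):
    """VI: like FI, but each waiting interval is drawn from a range
    (low..high), so only the average payout time is predictable."""
    reinforced = []
    available_at = rng.uniform(low, high)
    for t in sorted(response_times):
        if t >= available_at:
            reinforced.append(t)
            available_at = t + rng.uniform(low, high)
    return reinforced

# Responses emitted at these times, on an FI-10 schedule:
times = [3, 7, 11, 14, 22, 25, 31, 40]
print(fixed_interval(times, 10))  # -> [11, 22, 40]
```

Notice that on the FI schedule, responding between payouts (at 3, 7, 14, 25, 31) earns nothing, which is why the reading describes a pause after each reinforcement on fixed schedules.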

My favorite section from the one section we had to read... definitely 2.5! I found all the information in this section to be very helpful and I now have a much better grasp on the schedules of reinforcement. I liked how this section broke the schedules of reinforcement down and went into a lot of detail with the terms and use of examples.
I liked the graph showing VR, FR, VI, and FI all together so we could easily see the differences in the patterns each schedule takes. I also learned about the post-reinforcement pause that occurs when using fixed ratio and fixed interval schedules.
Prior to reading this section I was ignorant as to what the schedules of reinforcement were. I had a basic idea, but this section taught me a lot of useful things that I will definitely use in the future. I hope to retain all the information I just learned, but three things that really stood out to me were: 1. reinforcement does not need to be administered each time a desirable behavior is emitted for that behavior to be under reinforcement control; 2. the post-reinforcement pause is associated with fixed schedules of reinforcement; 3. variable ratio (VR) schedules elicit high rates of response with barely any pause. After reading this section I am glad to have a better understanding of the schedules of reinforcement, and I am still amazed at how complicated behavior modification really is.

I would like to touch on problems associated with each schedule and ratio vs. interval in class!

Terms:Schedules of reinforcement, post-reinforcement pause, fixed ratio, fixed interval, variable ratio, elicit, emit

Since we only had to read 2.5 my writing will be shorter than normal. I enjoyed reading bits and pieces of the chapter. But I however did find this chapter boring and I felt like the chapter went on forever and ever. I really understand continuous reinforcement. To me this is where you are always being reinforced for emitting a behavior. Like turning on the television or typing on a computer and how the letter that you are typing pops up on the screen.
I did not like how long the chapter was only because I felt like it dragged on forever and was very repetitive and I got more confused the more that I read. I felt like I understood it but then I started reading more and I got lost and did not understand what the reading was even trying to say.
I did however like the examples that were used to help me understand what VI, VR, FI, and FR are. I felt that these examples helped me understand the concepts and put real-life situations into my head. I felt that these examples cleared up my confusion at times and still allowed me to grasp the concepts and try to use the terms together.
I would like to keep learning about all the terms in this chapter!

I enjoyed section 2.5 because I want to eventually use behavioral techniques to modify my own behaviors, and learning which reinforcement schedules are most effective is really encouraging. To that end, I think the most useful piece of information in this section was the graph comparing the reinforcement schedules, because I can select a reinforcement schedule to fit my goals. Do I want to emit the target behavior as much as possible (VR) or steadily but moderately (VI)? Either way there is a reinforcement schedule that will work.

Prior to reading this section, I knew from Intro to Psych that VR was the most effective schedule, but I never really knew why. I hope to retain the definitions of each reinforcement schedule, the relationship between continuous reinforcement and the other schedules, and the ability to recognize which reinforcement schedule is in use in a particular situation. Reading this section has changed my understanding of the role of continuous reinforcement. I had previously thought that CR was merely ineffective reinforcement, but now I realize that it is necessary to establish the reinforcement schedule.

I would like to go over examples of each reinforcement schedule and talk more about the role of continuous reinforcement.

Terms: reinforcement schedules, VR, VI, Continuous reinforcement/CR.

So far I have liked all the sections. Some seem to be a little repetitive. However, I think that repeating useful information is required for learning. That is what this whole class is about: finding different ways to modify behavior in a variety of different settings, such as a classroom, or even at home or work. Section 2.5 has been my favorite so far. Although I will probably have to read it a couple of times, I think it was the most useful section so far. What I liked the most about section 2.5 was the real-life examples after nearly every term, specifically variable ratio and variable interval. I have been working in the restaurant business since I was 16, and I can easily relate to these terms. The interesting part is that I have already experienced these situations; the only difference is that I never knew what I was doing had an actual term for it. Much of this material feels innate; it involves things we already do without even realizing we are doing them.
Using a pay period of every two weeks as an example of a fixed interval was perfect. Everyone can relate to this situation.
I will always remember the difference between intermittent, referring to occasional reinforcement, and continuous, referring to reinforcement that occurs every time. Also, that intermittent reinforcement does not always result in extinction. The best examples in this section were the bartender tips and the slot machine. These examples alone will help me remember the difference between ratio and interval.
I was curious about using CR for continuous reinforcement. Doesn't CR also apply to conditioned response?

I would like to cover ratio strain and biological continuity further in class.

Terms used: Variable ratio, variable interval, fixed ratio, intermittent reinforcement, continuous reinforcement, extinction, ratio, interval, conditioned response.

Section 2.5 was by default the most interesting section we had to read. Schedules of reinforcement have always been something of interest to me. I have known the difference between continuous and intermittent from previous classes, but the variable/fixed interval/ratio distinction has always been somewhat confusing to me. The section went over all the material, but I am having trouble grasping the difference between them; maybe more examples would be beneficial. It never dawned on me before reading this section that those four different types of reinforcement are only intermittent reinforcement; they do not apply to continuous reinforcement. Most reinforcement is intermittent, because it would be next to impossible to reinforce someone every time he/she does the wanted behavior.

Biological continuity was also an interesting concept that I had never heard of before taking this class. Biological continuity is the belief that all living things can be arranged according to their complexity, and that the learning mechanisms in lower animal species are similar to those in higher-order animals. I would like to cover biological continuity more in class, but it makes sense, then, why experimenters use animals like rats to test out theories that would be applied to humans.

I would still like to go over biological continuity more in class. I also wonder if there are any situations where it would be better to use continuous reinforcement than intermittent reinforcement. The things I will remember from this section are the difference between continuous and intermittent reinforcement, the four types of intermittent reinforcement, and biological continuity.

Terms used: continuous/intermittent reinforcement, biological continuity, variable, fixed, interval, ratio

I liked all the examples given in the reading; they helped connect the new terms to real-world settings. The most useful piece of information from the section would be the abbreviations: VR (variable ratio), VI (variable interval), FR (fixed ratio), FI (fixed interval), and CR (continuous reinforcement). They made the schedules easier to understand, and the situations used made it easier to make the connection. The part I didn't like was how many new terms were thrown at us. It will get easier to use and understand them; it is just a lot of information to take in when you have to explain, identify, and use all of it.
I've come to realize, with all the different sections we are covering, that we all work for something with our behavior, and that reinforcement is used in many different ways and settings; this section really points out just how much reinforcement is used. Reading it has made me realize there are so many ways to modify and understand behavior. As we go, we learn more, make connections, and learn how to identify certain behaviors, what works for some people, and what doesn't work for others.

Terms used: FI, FR, VI, VR, CR

Well, we only read one section, so 2.5 is the one I liked the most. Something I liked about it was learning that there are so many different ways to schedule reinforcement. I previously thought of reinforcement as something you would deliver directly after a behavior was emitted, but this section shows how many different instances and varied ways it can be used in.
There wasn't a part of it that I liked the least, but a lot of it seems like it can run together pretty easily. I feel it may be a struggle to keep FR, FI, VR, and VI apart from one another and to know when it is best to use each one.
The most useful information would be the point that continuous reinforcement schedules are important for initially getting a behavior under stimulus control. Once the behavior is established, you are able to continue emitting it without getting reinforced each and every time. This then leads into the different reinforcement schedules we use in order to elicit the desired pattern of responding.
Prior to reading this section, I was unaware of the different types of reinforcement that could be applied to altering behaviors, and of how to determine when the elicited responses will occur.
Three things I will remember are:
1. Ratio refers to the number of behaviors emitted, while interval refers to the amount of time passing between reinforcements.
2. Fixed refers to a set number of behaviors (ratio) or a set amount of time (interval), while variable refers to an average amount (it will often include a range of acceptable values).
3. Different reinforcement schedules will elicit different patterns of responding.
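Since ratio schedules depend only on counting responses, the fixed-vs.-variable distinction in the list above can be sketched in a few lines of Python. This is just an illustrative sketch; the names fr_schedule and vr_schedule are my own, not from the reading.

```python
import random

def fr_schedule(n):
    """Fixed ratio: reinforce on exactly every n-th response."""
    responses = 0
    def respond():
        nonlocal responses
        responses += 1
        if responses >= n:
            responses = 0   # the count resets after each reinforcement
            return True
        return False
    return respond

def vr_schedule(low, high):
    """Variable ratio: reinforce after a random number of responses
    drawn from the range [low, high], averaging around the midpoint."""
    responses = 0
    target = random.randint(low, high)
    def respond():
        nonlocal responses, target
        responses += 1
        if responses >= target:
            responses = 0
            target = random.randint(low, high)  # new unpredictable target
            return True
        return False
    return respond

# An FR-3 schedule pays off on exactly every third response:
fr3 = fr_schedule(3)
print([fr3() for _ in range(6)])  # [False, False, True, False, False, True]

# A VR 2-6 schedule pays off unpredictably, about every 4th response:
vr = vr_schedule(2, 6)
print(sum(vr() for _ in range(100)), "reinforcements in 100 responses")
```

The predictability of the fixed schedule is what produces the post-reinforcement pause, while the uncertainty of the variable schedule (like a slot machine) keeps responding steady.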
Reading this section has taught me that if you vary the type of reinforcement you deliver, you will elicit different rates at which behaviors occur.
I would like to go over ratio strain and biological continuity further in class.

Terms: elicit, emit, reinforcement, emitted, variable, behavior, fixed

By default I enjoyed reading section 2.5 the most because it's the only section we were assigned :) The most interesting piece of information in this section was the different types of reinforcement. There is continuous reinforcement (being reinforced for every behavior) and intermittent reinforcement (being reinforced only every now and then). At first these two seemed very similar to me, until the examples were given. The example for continuous reinforcement was extremely helpful because it discussed how extinction occurs when the continuous reinforcer stops occurring. This also applies to intermittent reinforcement, because some form of extinction eventually occurs when one is no longer reinforced for something they previously were. The example for intermittent reinforcement was extremely helpful, and I related it to an extinction burst: whenever I call someone in 'dire' need of something, I'll keep calling and calling until the behavior is reinforced (aka they answer). The behavior I am emitting when I do that is an extinction burst. However, I was wondering whether test taking is comparable to an intermittent reinforcer. Depending on how much you study, you could be positively or negatively reinforced, so you are not ALWAYS reinforced every time... just a thought.
However, I'm still slightly confused about ratio vs. interval and fixed vs. variable. I'm probably putting too much thought into these and making them more complex than they are. I understand that ratio refers to the number of responses and interval refers to the passage of time, but I'm still slightly confused about fixed and variable.
I learned a lot from this section, and the things I'll remember most are the differences between continuous and intermittent reinforcement (described above), operant chambers, and biological continuity.
Terms: fixed, variable, ratio, interval, extinction, extinction burst, emit, continuous reinforcement, intermittent reinforcement

The topic I am going to discuss from section 2.5 is continuous reinforcement (CR). The reason I have selected this is that I find it interesting to look at my daily life and discover what subconsciously gives me CR. Besides the computer use and TV remote mentioned in the reading, I thought about the toilet, and how frustrated I feel when it does not flush, or when the car won't unlock with my clicker until I get 2 ft away.

Some of the new terminology we were introduced to seemed intimidating at first, but upon further reading I became less intimidated. I can remember that ratio refers to how many times a behavior must be emitted, while interval refers to the time period between reinforcements. It is also important to remember that different reinforcement schedules will elicit different patterns of responding: a high, steady response rate with minimal pauses occurs with VR; a high response rate with more pauses occurs with FR; responding lowers to a moderate rate with minimal pauses in VI; and responding gradually comes to a considerable pause in FI.

I am still fairly confused about VR, FR, VI, and FI. I understand CR, and I think with a little more reading I will fully comprehend the rest.

Terms: Continuous reinforcement, emitted, elicited, VR, FR, VI, FI.

There was a good amount of information in section 2.5. The thing I found most interesting was the difference between what we had been reading about, continuous reinforcement, and intermittent reinforcement. The last chapter had us focus on a person always being punished for their behavior, but in most real-life circumstances that won't be likely. This is why I found intermittent reinforcement so interesting. By reading 2.5 I was able to get an understanding of fixed ratio, variable ratio, fixed interval, and variable interval schedules.
When it comes to my future in teaching, these will be important tactics I can use to better discipline my future students and get the best learning experience out of them. With a fixed ratio schedule, the punishment would come if the student speaks out, for example, five times during a class. This may not be the best way to use punishment, because the student would soon figure out that he can speak up four times before he would ever get yelled at. With a variable ratio schedule, the student would be punished for speaking out on an average of every fifth time. This would be a better form of punishment because it would keep the student guessing; it gives him a little leeway to get away with some talking but doesn't let him know when, or if, he will be punished for it. When it comes to a fixed interval schedule, a student might be given an early recess for doing homework in class for 10 minutes without distracting anyone around him. In this form of reinforcement the student knows he must at least be quiet and work on his schoolwork for that fixed amount of time. With a variable interval schedule, the same student would get the reward after 10 minutes on average. This would keep the students on their heels, not knowing when they will be let out, and could be more helpful because the students don't feel like they always have to wait a set amount of time to get away from the classroom.
This section definitely helped me understand reinforcement in a more well-rounded and complex way. The things I will take away are a better understanding of FR, VR, FI, and VI.
Terms: intermittent reinforcement, ratio, interval, fixed, variable
