Please read the chapter assigned for this week.
(Reading Schedule: http://www.uni.edu/~maclino/hybrid/sp_book_s11.pdf)
After reading the chapter, please respond to the following questions:
Of the various aspects of Sensation & Perception presented in the chapter, which did you find the most interesting? Why? Which did you find least interesting? Why? What are three things you read about in the chapter that you think will be the most useful for you in understanding Sensation & Perception? Why? What are some topics in earlier chapters that relate or fit in with this chapter? How so?
Please make sure you use the terms, terminology and concepts you have learned so far in the class. It should be apparent from reading your post that you are a college student well underway in a course in psychology.
Make a list of key terms and concepts you used in your post.
Let me know if you have any questions.
--Dr. M
From Chapter 10, "Hearing in the Environment," the topic I found most interesting was the interaural time difference (ITD), which is the difference in time between a sound arriving at one ear versus the other. Thus, if the sound is to the left, the sound will reach the left ear first, and vice versa. We can tell where a sound is coming from by knowing which ear receives it first. Azimuth is the angle of a sound source that describes locations on an imaginary circle extending around us - front, back, left, and right. The first place in the auditory system where input from both ears converges is the medial superior olive. There is also the ILD (interaural level difference), the difference in intensity between a sound arriving at one ear versus the other.
The least interesting topic in this chapter was auditory scene analysis, which describes how we segregate sounds spatially, spectrally, and temporally.
The most useful things from this chapter would be learning more about sound localization and auditory distance perception, as well as the physiology of the interaural level difference: where the sound is most intense, at what degree, and the pathway a sound takes from the environment to recognition (sound source, right cochlea, auditory nerve, cochlear nucleus in the brain stem, LSO in the pons). Some topics that relate to previous chapters are the physiology of hearing and the perception of hearing.
Terms:
Interaural time difference
Azimuth
Medial superior olive
Interaural level difference
Auditory distance perception
Of the various aspects of Sensation & Perception presented in the chapter, which did you find the most interesting? Why?
The topic I found most interesting in Ch. 11, Music and Speech Perception, was “Learning to Listen”. This section explained how listening experience is important in developing our speech perception. Even before we are born, we are tuned to hearing the speech sounds of our native language; late-term fetuses have been shown to discriminate between different vowel sounds. Studies show that infants “filter out” irrelevant acoustic differences long before they utter speech sounds (called babbling). This “filtering out” leads to individuals having more difficulty producing and perceiving sounds from a second language. However, if a second language is learned at the same time as the first, it is easier to learn, though it may take more time to master each.
Which did you find least interesting? Why?
The discussion I found the least interesting was music perception. It is difficult for me to understand music. I have never learned to play a musical instrument or taken musical lessons so tones, chords, melody, etc. are somewhat foreign concepts to me. The chapter started off by explaining how pitch is important to the perception of music. PITCH is the psychological aspect of sound related mainly to the fundamental frequency. Musical pitch is an important characteristic of musical notes. To understand musical pitch it is important to understand the octave. An OCTAVE is an interval between two sound frequencies having a ratio of 2:1. Musical pitch has two dimensions: tone height and tone chroma. TONE HEIGHT is a sound quality related to the level of pitch and monotonically related to frequency. TONE CHROMA is a sound quality shared by tones that have the same octave interval. These dimensions are often represented in a helix where frequency and tone height increase as the height of the helix increases. The circular laps of the helix represent tone chroma. However, there is more to music than octaves. A CHORD is a combination of 3 or more musical notes with different pitches played simultaneously. Chords can be consonant (ratios between note frequencies are simple) or dissonant (less elegant ratios). An arrangement of notes or chords in succession is called a MELODY. Melody is defined by the pattern of rises and declines in pitch not the absolute pitch. This shows how perception is particularly sensitive to change. A music piece’s tempo and rhythm are also important. TEMPO is the perceived speed of the presentation of sounds. Listeners are often predisposed to group sounds into rhythmic patterns and rhythm is largely psychological.
What are three things you read about in the chapter that you think will be the most useful for you in understanding Sensation & Perception? Why?
The things I read in this chapter that will be most useful for me in understanding Sensation and Perception are the basic components of speech production and speech perception. Speech production has three basic components: respiration, phonation, and articulation. To create a speech sound we must first use the diaphragm to push air out of the lungs, through the trachea, and up to the larynx. Air then passes through the two vocal folds, which are made up of muscle tissue. The muscle tissue can be adjusted to change how freely air passes through; PHONATION is the adjustment of those muscles. By varying the tension of the vocal folds and the pressure of the airflow we create different frequencies of voiced sounds. ARTICULATION is the act or manner of producing a speech sound using the vocal tract. The VOCAL TRACT is the airway above the larynx and includes the oral tract and nasal tract. We change the shape of the vocal tract by manipulating the jaw, lips, tongue body and tip, velum, and other vocal-tract structures. It is these manipulations that result in the articulation of speech sounds. Speech sounds are most often described in terms of articulation. Vowel sounds are made with a relatively open vocal tract. Consonants are made by obstructing the vocal tract and are classified via three dimensions: place of articulation (lips, teeth, or soft palate), manner of articulation (level of airflow obstruction), and voicing (vibration or lack of vibration in the vocal cords).
Speech perception is another important aspect. Speech perception needs to happen very quickly, and our articulators (tongue, lips, jaw, etc.) must do many things very quickly; however, it is often difficult for our articulators to work that fast. Experienced talkers therefore develop what is called coarticulation. COARTICULATION is the phenomenon in speech whereby attributes of successive speech units overlap in articulatory or acoustic patterns. Most listeners have no trouble following coarticulated speech. A signature property of speech, however, is context sensitivity: how a speech sound is realized depends on the sounds that precede and follow it. CATEGORICAL PERCEPTION is also important to our speech perception. Categorical perception is defined by three qualities: sharp labeling (identification), discontinuous discrimination performance, and prediction of discrimination performance based on labeling.
What are some topics in earlier chapters that relate or fit in with this chapter? How so?
The overarching theme of perception is that we are sensitive to change. Our visual system is constantly adjusting to change in light, shape, depth, etc. Just as our visual system is attuned to perceive change so is our auditory system in perceiving change in pitch, duration, rhythm, speech sounds, etc.
TERMS: pitch, octave, tone height, tone chroma, chord, melody, tempo, phonation, articulation, vocal tract, coarticulation, categorical perception
Of the various aspects of Sensation & Perception presented in the chapter, which did you find the most interesting? Why?
I found the section that discusses sound localization to be the most interesting. This section discusses the interaural time difference (ITD), the difference in time between a sound arriving at one ear versus the other. We can tell whether a sound is coming from our right or left by determining which ear receives the sound first. There is an imaginary circle all around our head where sounds can be located; this horizontal plane is described by the azimuth. The azimuth is described in the textbook as the angle of a sound source on the horizontal plane relative to a point in the center of the head between the ears. Azimuth is measured in degrees, with 0 degrees being straight ahead; the angle increases clockwise toward the right, with 180 degrees being directly behind. Sounds will be the loudest and easiest to locate when they are at -90 degrees, which is directly outside your left ear, or at 90 degrees, which is directly outside your right ear. The real question is how and where all this information gets processed. Well, the answer is surprisingly simple: the medial superior olives (MSOs) are the first places in the auditory system where inputs from both ears converge. They are relay stations of sorts in the brain stem where inputs from both ears contribute to detection of the ITD.
Another way to tell where a sound is located is by the interaural level difference (ILD). This is simply the difference in level (intensity) between a sound arriving at one ear versus the other. The text points out three things about using the ILD to determine the location of a sound: 1) sounds are more intense at the ear that is closer to the sound source, and less intense at the ear farther from the source; 2) the ILD is largest at 90 and -90 degrees, and it is nonexistent at 0 degrees and 180 degrees; 3) between these two extremes, the ILD generally correlates with the angle of the sound source, but because of the irregular shape of the head the correlation is not quite as strong as it is with ITDs.
Information from ILDs is processed in the lateral superior olives (LSOs). These are relay stations in the brain stem where inputs from both ears contribute to the detection of the ILD.
Which did you find least interesting? Why?
Something that I did not find interesting was the topic of the cone of confusion. According to the book, the cone of confusion is a region of positions in space where all sounds produce the same time and level (intensity) differences (ITDs and ILDs). Summing it all up, cones of confusion are areas in the azimuth that have similar ITDs and ILDs, thus making it difficult to locate a sound. I didn't find this topic very interesting because it seemed too easy to understand, and the book spent too much time explaining it.
What are three things you read about in the chapter that you think will be the most useful for you in understanding Sensation & Perception? Why?
Things that I read about in this chapter that I think will be useful in understanding sensation and perception include the discussion of source segregation and auditory stream segregation. Source segregation, also referred to as auditory scene analysis, is defined as processing an auditory scene consisting of multiple sound sources into separate sound images. Auditory stream segregation is the perceptual organization of a complex acoustic signal into separate auditory events, in which each stream is heard as a separate event. I think these are important concepts to know and understand because most of the sounds we hear are complex sounds, and it is important to know about complex sounds as well as pure sounds.
What are some topics in earlier chapters that relate or fit in with this chapter? How so?
One thing that I read in this chapter that relates to what I read in previous chapters is known as the perceptual restoration effect. This concept is similar to the good continuation effect that we learned about with the Gestalt principles and visual perception. The perceptual restoration effect refers to how, when you are having a conversation with someone and a sound briefly interrupts the speech, you still perceive the speech as continuous and know what the person is talking about.
Terms: good continuation effect, Perceptual restoration effect, source segregation, auditory scene analysis, auditory stream segregation, cone of confusion, interaural level difference, lateral superior olive, medial superior olive, interaural time difference, azimuth
One topic that I found interesting in this chapter was music. Music is a topic that I enjoy a lot, so this topic struck close to my interests. When looking at music, one of the most important characteristics is frequency. The psychological aspect of sound related mainly to frequency is pitch, so pitch is an important concept when discussing music. Another important term when discussing music is the octave: the interval between two sound frequencies having a ratio of 2:1. In music this allows us to have multiples of the same note, such as low C, middle C, and high C. Another important aspect of music is the chord, a combination of three or more musical notes with different pitches played simultaneously. Chords can be broken down into consonant relationships and dissonant intervals. Consonant intervals, such as the perfect fifth and the perfect fourth, have clean ratios; dissonant intervals, such as the minor second and the augmented fourth, have less appealing ratios.

Now on to making actual music. One key concept in all music is melody, an arrangement of notes or chords in succession. Melody is defined by the pattern of rises and declines in pitch rather than by the exact sequence of sound frequencies. Another important aspect of making music is tempo, the perceived speed of the presentation of sounds.

Another topic that I found interesting was speech production. The production of speech has three basic components: respiration, phonation, and articulation. Being able to speak fluently requires a large degree of coordination among the lungs, vocal cords, and vocal tract.

One topic that I didn't find very interesting was the section on learning to listen; I felt like this area did not have very much information. Another topic I didn't enjoy very much was learning words; I felt like this subject also didn't include a lot of good information.
The one topic I would like to be discussed in class is everything on music, as I am very interested in music.
Terms: Pitch, Octave, Chord , Melody, Tempo
I find the concept of perceptual restoration effects to be very interesting. This is when the auditory system fills in missing information when listening to sounds. An example the book discussed is having participants listen to a sentence where one of the speech sounds in a word was replaced by a noise. Listeners were unable to detect this and repeated back the sentence as if every sound were present. Even when the listeners were told that a sound would be missing in the sentence, they couldn't detect which one it was. This also happens in a simpler form when listening to pure tones interrupted by noise; both interruptions go undetected. I think this is interesting because it's amazing how our brain shapes our perception, even filling in things that aren't actually there.
Something I found least interesting are the lateral superior olive and medial superior olive. Both are relay stations in the brain stem where inputs from the ears help in the detection of the interaural level (lateral) and time (medial) difference. I think this is uninteresting because I don’t think physiology is exciting. Although I know that each part is important in the functioning of hearing, I’m more interested in the psychophysical aspects.
Three things useful in understanding sensation and perception are timbre, auditory stream segregation, and azimuth. Azimuth is the angle of a sound source on the horizontal plane around our heads; it tells us where a sound is coming from. Timbre is the psychological sensation by which a listener can judge that two sounds with the same pitch and loudness are dissimilar. An example is listening to instruments: two instruments may play the same pitch at the same loudness, but the listener can detect the difference between them. Auditory stream segregation is dividing a stream of sounds into separate events based on our perceptions. I think these three concepts are helpful in understanding sensation and perception because they are all important in understanding sound and how we hear, and sound is an important part of the whole topic of sensation and perception.
A couple of things that relate to other chapters are the Gestalt principles. Chapter 10 uses some of these principles in relation to sound rather than vision like described earlier. An example is similarity. This chapter explains that sounds that are similar tend to be grouped together. Another one is common fate. This is when you group sounds together that have common onsets. A third one is good continuation. This occurs when listening to a sound and failing to detect a small interruption because you automatically hear the longer, continuous sound.
Terms: Gestalt principles, similarity, common fate, onset, good continuation, timbre, auditory stream segregation, azimuth, pitch, loudness, lateral superior olive, medial superior olive, ILD, ITD, perceptual restoration effects, pure tones, noise
Chapter 11 focuses on hearing and making sound in music and speech, our complex communication as humans.
Music is powerful. It has a strong effect on mood and emotion. When we respond to music, we respond to it as a whole, like a melody, rather than to individual notes. The book breaks down the elements of music: single notes differ in pitch, octave, and tone, and there are also more complex sounds called chords. Chords are produced when three or more notes are played at the same time.
There are differences among cultures in both single notes and complex sounds, including differences in scales and in the use and structure of octaves. However, there are also many universal elements in music. Infants were found to respond to errors in music across cultures, while adults were “reliably better” at detecting deviations from Western scales. I think this finding has interesting implications.
Notes and chords can combine to form a melody, or “an arrangement of notes and chords in succession”. Within a melody, the pitch as well as the duration of notes and chords can vary; the perceived speed of the notes and chords is the tempo. Lastly there is rhythm. What I thought was most interesting about rhythm is that we seem naturally drawn to certain rhythms. As the book points out, it is interesting that we are drawn to different rhythms, or naturally put rhythms together; we all pick out the rhythm of going over bumps in the car.
The book also talks about speech. There are three basic components that make up our speech: our lungs, our vocal cords, and articulation through our vocal tract. To initiate a speech sound, the diaphragm pushes air out of the lungs, to the trachea, to the larynx. The air must pass through the vocal folds at the larynx. Interestingly, men have lower voices because testosterone during puberty increases the mass of the folds. Above the larynx, the oral tract and nasal tract make up our vocal tract; this is where articulation is manipulated.
The chapter then turns to how we perceive and understand speech. The book states that we (and some animals) can tell the difference between some words; however, true comprehension comes with experience. The book talks about the way we learn words: putting letters and sounds together to create meaning, and then putting words together to produce speech. What I thought was interesting was the book's comparison of music to speech. Unlike music, words in our native speech tend to stand out; we grab words to draw meaning. Listening to a foreign language, however, can be more like music, in that you hear it more as a whole.
I found the topic of music very interesting because music has played a big part in my life during high school and college. As the book states, music has been and is a big part of different cultures and a big part of life. It can affect people's moods and emotions, and it can have deep psychological effects as well. High levels of serotonin (a neurotransmitter) are responsible for negative aspects of mood and emotion, and serotonin levels actually rise when a person listens to disagreeable music. When people listen to pleasurable music, they can experience changes in muscle electrical activity, heart rate, and respiration. Also, blood flow increases in the brain regions that are involved with motivation. One of the most important characteristics of an acoustic signal is frequency. The psychological quality of perceived frequency is pitch.
The next topic I found interesting is related: tone height and tone chroma. The sounds that comprise melodies are called musical pitch. An octave is the interval between two sound frequencies with a ratio of 2:1. One thing I found interesting about the octave is that middle C has a fundamental frequency of 261.6 Hz; therefore, the octaves above and below it have fundamental frequencies of 523.2 and 130.8 Hz. Musical pitch is defined as having two dimensions. Tone height is a sound quality corresponding to the level of pitch and is monotonically related to frequency. Tone chroma is a sound quality shared by tones that have the same octave interval. All sounds along a vertical line of the helix used to demonstrate musical pitch have the same tone chroma, so all As, Bs, Cs, etc., share the same tone chroma. Frequency and tone height, however, increase with increasing height on the helix. A chord, then, is a combination of three or more musical notes with different pitches played simultaneously, which everyone who has ever played music would know. There are also other terms that go along with music, such as melody, rhythm, and tempo. Melody is defined as an arrangement of notes or chords in succession. Rhythm can be changed and altered to change songs, and rhythm can also be found in things such as walking and galloping. Tempo is the perceived speed of the presentation of sounds. Again, if a person has been involved in band or music in school, this comes easily. Music also varies across cultures: people in different cultures perceive music differently, and musical scales and intervals vary across cultures. Different cultures use different numbers of notes within an octave.
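The octave relationship described above is just doubling arithmetic; as a minimal sketch (the 261.6 Hz figure is from the chapter, Python used only for illustration):

```python
# Each octave up doubles a note's fundamental frequency; each octave
# down halves it (the 2:1 ratio described in the chapter).
middle_c = 261.6  # Hz, fundamental frequency of middle C

octave_above = middle_c * 2  # high C
octave_below = middle_c / 2  # low C

print(octave_above)  # 523.2 Hz
print(octave_below)  # 130.8 Hz
```

The same doubling rule explains why low C, middle C, and high C all share the same tone chroma while differing in tone height.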
One topic I found less interesting was speech production. Humans are capable of creating distinct speech sounds; in fact, 5,000 or more languages use over 850 different speech sounds. This is because of the flexibility of the vocal tract, which is the airway above the larynx used for the production of speech and includes the oral tract and nasal tract. One thing that I actually did find interesting in this topic is that we cannot swallow and breathe at the same time; therefore, we have a greater chance of choking.
This topic continued into speech production itself, which I also found kind of uninteresting, or at least not as interesting as music. Production of speech is made up of three components: respiration (lungs), phonation (vocal cords), and articulation (vocal tract). With respiration and phonation, the diaphragm pushes air out of our lungs, up through the trachea, and then up to the larynx. Then the air must pass through the two vocal folds (which are made of muscle tissue that can be adjusted to vary how the air goes through the opening). The adjustments of these muscles are types of phonation. Articulation, however, is described as the act or manner of producing a speech sound using the vocal tract. We can change the shape of the vocal tract by changing the jaw, lips, tongue, and other structures; this is what's referred to as articulation.
These relate to other chapters because with music it is all about how a person perceives this music. We have discussed before how there are many different perceptions with different people. I also think the topic of music will be the most helpful to us because music is a big part of life today. On the radio, tv, and computer there are always things about music and things like that. That's why I think that's the most interesting and helpful topic. I also think the topics about listening are also very helpful to us because obviously as college students, listening in class is a big part of learning.
Terms: pitch, octave, tone height, tone chroma, chords, melody, tempo, rhythm, vocal tract, respiration, phonation, articulation
The part of the chapter I found most interesting was the part on speech production. Speech comprises respiration by the lungs, phonation by the vocal cords, and articulation by the vocal tract; speaking requires all three components to work together effectively. The sound of speech comes from the diaphragm pushing air out of the lungs, through the trachea, and to the larynx, where it must pass through the vocal cords. The vocal tract is the oral and nasal tracts combined. By manipulating your jaw, tongue, lips, tongue tip, and soft palate you can change the shape of your vocal tract, which is called articulation. Vowel sounds are made with an open vocal tract, while consonants are made by obstructing the vocal tract.
The part I found least interesting was the part about speech perception. When we speak we can speak very fast, producing about 10-15 consonants and vowels a second. Coarticulation is overlap in space and time: the production of one speech sound overlaps with the production of neighboring speech sounds. I liked this part least because it was more difficult for me to understand than other parts of the chapter.
One of the things in the chapter that I think will be useful in understanding sensation and perception is the perception of music. Our brains are organized to correspond to frequency when processing sounds, and the psychological quality of perceived frequency is pitch. To understand pitch it is essential to understand the octave, the interval between two sound frequencies with a 2:1 ratio. Musical pitch is described as having two parts, known as tone height and tone chroma. Tone height is a sound quality corresponding to the level of pitch, while tone chroma is a sound quality shared by tones that have the same octave interval. There are also chords, which are three or more notes with different pitches played together. In music it is also essential to have a tempo: the perceived speed at which the sounds are presented.
The second subject that I thought would be useful in understanding sensation and perception is the part about learning to listen. I found it interesting that babies gain considerable experience with speech in utero. Another interesting finding is that newborns preferred the sound of their mothers reading lullabies that had been read to them in the third trimester. Studies suggest that infants learn language by interpreting speech sounds that occur together: infants pick up on sounds that occur together often versus sounds that aren't heard together as often, which makes a lot of sense. Infants are then able to differentiate between two separate words because the sounds of two separate words are not heard together as often.
Lastly, learning about speech in the brain can be very helpful when beginning to understand sensation and perception. Studies suggest that areas of the right and left temporal lobes are activated in response to speech but not as much in response to noise. An interesting finding the author brought up was that when lip-reading an individual mouthing the numbers 1-10, the brain was activated in the superior temporal lobe, similar to what is seen when individuals hear speech. These concepts relate to other chapters in that, as with seeing, experience is very important for hearing.
Terms: vocal tract, articulation, coarticulation, pitch, octave, tone height, tone chroma.
I found the section on Musical Notes interesting. In an earlier chapter we learned about sound frequency and that a person doesn't hear frequency, they hear pitch. When discussing pitch it is also important to understand the octave. In music class we learned about octaves being the same note only higher on the scale (middle C, high C). This is still true; however, the chapter explains it more in depth. A note that is double the frequency of another is one octave higher, and notes that are exactly an octave apart sound more similar than notes that are closer in frequency. Musical pitch is usually described as having 2 dimensions. One is tone height, which is "a sound quality corresponding to the level of pitch". The other is tone chroma, which is "a sound quality shared by tones that have the same octave intervals". Music is much more complex than single notes. More complex sounds called chords are created when you play 3 or more notes at the same time. There are 2 kinds of chords: consonant chords are combinations of notes whose ratios between note frequencies are simple, whereas dissonant intervals are "defined by less elegant ratios". Consonant chords are generally more pleasing to listen to. Once you have single notes and chords you can create a melody, which is a sequence of sounds that is perceived as a coherent structure. The important part of the melody is not the specific notes themselves but the pattern of increasing and decreasing pitches and the note durations. As long as these stay the same, the melody will be recognizable. Another thing that can change in a melody is the tempo, which is the speed at which it is played.
Nothing in the chapter was really uninteresting to me. I found the section on music very interesting and the section on speech informative. The section on speech was very useful information for me in understanding sensation and perception. Humans can produce over 850 different speech sounds thanks to the structure of the human vocal tract, which is the "airway above the larynx used for the production of speech". Speech production has 3 components. The first one is respiration: to start making a speech sound, the diaphragm must push air out of the lungs, through the trachea, and then up to the larynx. The larynx has 2 vocal folds, which are made of muscle tissue and can be adjusted to determine how freely air passes through them. These adjustments are described as types of phonation, which is the second component. Vocal folds differ in stiffness and mass depending on the person; people who have larger vocal folds have lower voices. People can also self-modify how their vocal folds vibrate, which is how people can change their voice. The third component is articulation. Articulation refers to the ability to change the shape of the vocal tract, which can be done by manipulating the jaw, lips, tongue body and tip, velum, and the rest of the vocal-tract structures. Each speech sound is made by manipulating this vocal tract: vowel sounds are made with a relatively open vocal tract with variations in where the tongue is placed, while consonants are made by obstructing the vocal tract, and each consonant can be classified according to 3 articulation dimensions: place of articulation, manner of articulation, and voicing. Humans are very skilled at creating these sounds and on average create 10-15 vowel and consonant sounds per second.
Hearing any sound activates the primary auditory cortex; to process complex sounds, additional areas of the cortex must be activated. These areas of the auditory cortex are known as the belt and parabelt regions. Both of these areas are activated when someone hears speech and music. When hearing complex sounds, activity is relatively balanced across the hemispheres. Areas of both the left and right superior temporal lobes activate more strongly in response to speech; again, this tends to be balanced across hemispheres. Eventually, processing of speech becomes more lateralized to one temporal lobe "because perceiving speech is part of understanding language". There is currently a lot of research studying how speech is processed. Current evidence shows that as sounds become more complex, they are processed in "more anterior and ventral regions of superior temporal cortex farther away from A1. When speech sounds become more clearly a part of language, they are processed farther forward in the left temporal lobe in areas that are more anterior and more ventral too."
terms:
frequency, pitch, octave, tone height, tone chroma, tones, chords, consonant chords, dissonant chords, melody, vocal tract, larynx, respiration, vocal folds, phonation, articulation, primary auditory cortex, belt, parabelt, superior temporal lobe, anterior & ventral regions
What I found most interesting in this chapter were the topics about music. I was in choir the majority of my life and found it interesting to have topics that I have known about for years broken down and explained scientifically. One of the main things about music is pitch. Along with pitch, octaves are also important. The nearer a sound is in frequency, the nearer it will be in pitch. Pitch is broken down into two dimensions, tone height and tone chroma. Tone height is a sound quality corresponding to the level of pitch. It is also monotonically related to frequency. Tone chroma is a sound quality shared by tones that have the same octave interval. Chords are a combination of three or more musical notes with different pitches played simultaneously. They can be broken down into consonant and dissonant chords. I also found it interesting that different cultures have different scales and different notes. I never thought about the fact that people don't have a universal scale.
The topic I found least interesting was classifying speech sounds. It just talked about where the tongue is placed and how our mouth is shaped, and I didn't find it as interesting as the rest of the chapter. Speech is made up of three main components: respiration, phonation, and articulation. I did find it interesting that over 850 speech sounds are used around the world. I felt like that is incredible, and you couldn't even begin to think of how many you use in one conversation. This relates to the vocal tract, which is the airway above the larynx used for the production of speech. The vocal tract includes the oral and nasal tracts.
I thought that the topic of music really helped me understand hearing in general and how pitch is such a big deal when it comes to sounds. Also, I thought that becoming a native speaker could be important. We learn from birth what sounds are important and how to use the sounds to form words. It would be hard to perceive sounds differently in another country and to learn their language when it is the opposite of what we have been taught most of our lives. I also thought that learning words would help, because we have to learn thousands of strings of meaningless letters and make them meaningful in our environment. This is how we can read paragraphs of scrambled words at a time and know exactly what they are saying. As long as the first and last letters are the same, we don't even need the letters in between to make sense. Our brains are amazing in that they can associate words and letters without the gray area at a fast rate.
pitch, octave, tone height, tone chroma, chord, consonant chord, dissonant chord, vocal tract
What I found most interesting was speech. Most people who listen to speech also produce speech. Talkers speak so that they can be understood, and the relationship between production and perception of speech is an especially intimate one. The vocal tract is the airway above the larynx used for the production of speech; it includes the oral tract and nasal tract. The production of speech has three basic components: respiration (lungs), phonation (vocal cords), and articulation (vocal tract). Speaking fluently requires an impressive degree of coordination among these components. To initiate a speech sound, the diaphragm pushes air out of the lungs, through the trachea, and up to the larynx. At the larynx, air passes through the two vocal folds, which are made up of muscle tissue that can be adjusted to vary how freely air passes through the opening between them. These adjustments are described as types of phonation. The rate at which vocal folds vibrate depends on their stiffness and mass. The first harmonic corresponds to the actual rate of physical vibration of the vocal folds, the fundamental frequency. Talkers can make interesting modifications in the way their vocal folds vibrate, creating breathy or creaky voices. When it comes to articulation, the area above the larynx (the oral tract and nasal tract combined) is referred to as the vocal tract. Humans have an unrivaled ability to change the shape of the vocal tract by manipulating the jaw, lips, tongue body, tongue tip, velum (soft palate), and other vocal-tract structures. These manipulations are referred to as articulation (the act or manner of producing a speech sound using the vocal tract). Peaks in the speech spectrum are referred to as formants, and formants are labeled by number, from lowest frequency to highest. These concentrations of energy occur at different frequencies depending on the length of the vocal tract. For shorter vocal tracts, formants are at higher frequencies than for longer vocal tracts.
Because absolute frequencies change depending on who's talking, listeners must use the relationships between formant peaks to perceive speech sounds. Moreover, classifying speech sounds is described in terms of articulation. Vowel sounds are all made with a relatively open vocal tract, and they vary mostly in how high or low and how far forward or back the tongue is placed in the oral tract, along with whether or not the lips are rounded. We produce consonants by obstructing the vocal tract in some way, and each consonant sound can be classified according to three articulatory dimensions: 1) place of articulation: airflow can be obstructed at the lips, at the alveolar ridge just behind the teeth, and so on; 2) manner of articulation: airflow can be totally obstructed, partially or slightly obstructed, first blocked and then allowed to sneak through, or blocked at first from going through the mouth but allowed to go through the nasal passage; 3) voicing: whether the vocal cords are vibrating or not. Speech production is very fast. We produce about ten to fifteen consonants and vowels per second, and if we are in a hurry, we can as much as double this rate. The overlap of articulation in space and time is called coarticulation. When it comes to categorical perception, incremental changes to simple acoustic stimuli such as pure tones lead to gradual changes in people's perception of these stimuli. For example, tones sound just a little higher in pitch with each small step in frequency. With speech, by contrast, listeners appeared incapable of hearing that anything was different when two sounds were labeled as the same consonant (i.e., bah, dah, gah). There are three qualities that define categorical perception. The first two are a sharp labeling (identification) function and discontinuous discrimination performance.
The third definitional quality of categorical perception follows from the first two: researchers can predict discrimination performance on the basis of labeling data. In short, listeners report hearing differences between sounds only when those differences would change the label of the sound, so the ability to discriminate sounds can be predicted by how listeners label the sounds. Moreover, the fact that there are no acoustic invariants for distinguishing speech sounds is really no different from many comparable situations in visual perception. We don't need individual acoustic invariants to distinguish speech sounds; we just need to be as good at pattern recognition for sounds as we are for visual images. And one of the things that the billions of neurons in the brain do best is integrating multiple sources of information to recognize patterns. Experience is incredibly important for visual perception, particularly the higher-level perception of objects and events in the world. Experience is every bit as important for auditory perception.
What I found least interesting was music. The psychological quality of perceived frequency is pitch. The sound of music extends across a frequency range from about 24 to 4200 Hz. In addition, a very important concept in understanding musical pitch is the octave, which is the interval between two sound frequencies having a ratio of 2:1. When one of two periodic sounds is double the frequency of the other, those two sounds are one octave apart. Because of these octave relations, musical pitch is typically described as having two dimensions. The first is tone height, which relates to frequency in a fairly straightforward way: tone height is a sound quality corresponding to the level of pitch, and it is monotonically related to frequency. The second dimension, related to the octave, is tone chroma, a sound quality shared by tones that have the same octave interval. Furthermore, music is further defined by richer complex sounds called chords, which are created when three or more notes are played simultaneously. The major distinction between chords is whether they are consonant or dissonant. Consonant chords are combinations of notes in which the ratios between the note frequencies are simple. Notes or chords can form a melody, a sequence of sounds perceived as a single coherent structure. A melody is defined by its contour (the pattern of rises and declines in pitch) rather than by an exact sequence of sound frequencies. If every note of a melody is shifted by one octave, the resulting melody is perceived as the same. In addition to varying in pitch, notes and chords vary in duration. The average duration of a set of notes in a melody defines the music's tempo, which is the perceived speed of the presentation of sounds. Any melody can be played at either a fast or a slow tempo, but the relative durations within a sequence of notes are a critical part of the melodies themselves.
If the notes of a given sequence are played with different durations, we will hear a completely different melody. Music also varies in rhythm, and when two different rhythms are overlapped, they can collide in interesting ways. Melody is essentially a psychological entity. There is nothing about the particular sequence of notes in "Twinkle, Twinkle, Little Star" that makes them a melody; rather, it's our experience with a particular sequence of notes, or with similar sequences, that helps us perceive coherence.
Topics from earlier chapters that relate include perception, the visual system, and how we hear things.
Key Terms: pitch, octave, tone height, tone chroma, chord, melody, tempo, vocal tract, articulation, formant, coarticulation
Oops...I read the wrong chapter for this week, so here is the blog for the correct chapter.
After reading chapter 11, I found the topic of music very interesting. I have always been fascinated with music. I love singing and playing musical instruments and therefore found this section of the chapter very interesting. While I was still in high school I considered going to school to become a music therapist; however, I realized this occupation is rare and hard to obtain, so I settled on something else. The text briefly discusses music therapy, stating that since music has powerful effects on mood and emotion, music therapy can be used by clinical psychologists and involves having clients sing, listen, play, and move to music in an effort to improve mental and physical health.
The main component of music is the notes, and each note represents a pitch. The text defines pitch as the psychological aspect of sound related mainly to the fundamental frequency. An important concept in understanding pitch is the octave. An octave is the interval between two sound frequencies having a ratio of 2:1. Musical pitch is typically described as having two dimensions. The first is tone height and the second is tone chroma. Tone height is a sound quality whereby a sound is heard to be of higher or lower pitch; tone height is monotonically related to frequency. Tone chroma, on the other hand, is a sound quality shared by tones that have the same octave interval. The book mentions that it is a good idea to visualize musical pitch as a helix. Frequency and tone height increase with increasing height on the helix, and the circular laps around the helix correspond to changes in tone chroma. At the same point along each lap around the helix, a sound will be on a vertical line, and all sounds along that line share the same tone chroma and are separated by octaves.
Other concepts that are important when studying music are chords, melody, and rhythm. Chords are what make music interesting, in my point of view. Chords are created when three or more notes are played simultaneously. The major distinction between chords is whether they are consonant or dissonant. Consonant chords are combinations of notes in which the ratios between the note frequencies are simple, and they are perceived by the listener as the most pleasing. Dissonant chords are defined by less elegant ratios and are not very pleasing sounding; they were once referred to as the devil in music. Melodies are what make a musical piece memorable. A melody is defined in the text as an arrangement of notes or chords in succession. Tempo, on the other hand, is the perceived speed of the presentation of sounds. Music can also vary in its rhythm.
I didn't really find anything uninteresting in this chapter. There were a few topics that weren't as interesting as others, but I enjoyed the majority of the chapter. Another section that I found interesting was the section that covered speech. According to the text, there are three main components to the production of speech: respiration (lungs), phonation (vocal cords), and articulation (vocal tract). The vocal tract is the airway above the larynx used for production of speech. It includes the oral tract and the nasal tract. The book does a great job of describing how a speech sound happens: "To initiate a speech sound, the diaphragm pushes air out of the lungs, through the trachea, and up to the larynx. At the larynx, air must pass through the two vocal folds, which are made up of muscle tissue that can be adjusted to vary how freely air passes through the opening between them. These adjustments are described as types of phonation." The text also mentions that the rate at which the vocal folds vibrate depends on their stiffness and mass. Articulation occurs in the area above the larynx, the vocal tract. We can change the shape of the vocal tract by manipulating the jaw, lips, tongue body, tongue tip, velum, and other vocal-tract structures.
One thing discussed in this chapter that relates to previous chapters is the importance of contrast when speaking, listening to someone speak, or listening to music. Contrast helps us distinguish words from one another and hear different sound patterns. This is similar to previous chapters because we use contrast when looking at things (i.e., figure-ground).
Terms: music therapy, pitch, octave, tone height, tone chroma, chords, melody, rhythm, consonant, dissonant, vocal tract, phonation, articulation, respiration
http://www.walesonline.co.uk/news/wales-news/2011/04/13/arts-therapist-at-royal-welsh-college-of-music-and-drama-devastated-the-lives-of-her-students-a-professional-hearing-is-told-91466-28510322/
This article has a very interesting view on music therapy. It is worth reading. I found it odd, and it made me think about how music could be negative.
In chapter 11 I found it interesting that they date the emergence of music to the Shang dynasty of China in 1600 BC. A lot of research in music perception (which I'm learning about in my soundscapes class) points to an almost inherent musical tendency in humans. Music excites a large portion of our brain, with several different functions involved, even parts as basic as the brain stem. This points to a very early emergence of music in humans; I would say it evolved much earlier than ancient China. Every culture we know of in the world, no matter how far removed from colonization and westernization, developed a form of music. We most likely just haven't found concrete evidence of something that doesn't leave an imprint.
Another interesting part of this chapter was the layout of the musical staff. All 7(ish) octaves of the piano are laid out with the audibility curve underneath. It shows that at extremely low and high frequencies the sound pressure must be much greater for us to perceive it. I also found it interesting that they transposed the ranges of different instruments onto the same figure.
The chapter then delves into the anatomical structures that we use to create sound and speech. It also covers the McGurk effect that we talked about in class. It explains that if a person sees a videotape of a person saying 'gah' but hears an audio track of a person saying 'bah', they will hear 'dah'. This makes me wonder how important vowels are to our perception of speech. German and other Eastern European languages are very consonant-heavy, with long stretches of letters between vowels, but American English is heavy with vowels. I wonder what the difference is between each culture's perception of, and emphasis on, vowels.
Of the various aspects of Sensation & Perception presented in this chapter, I found it most interesting that speech sounds are processed in both hemispheres of the brain, much like other complex sounds, until they become part of the linguistic message. Speech is then further processed in the anterior and ventral regions, mostly in the left superior temporal cortex. How the brain understands speech and language is one of the oldest and most exciting questions in psychology. Two of the most important early observations of cortical functions related to speech were made by the surgeon Pierre Paul Broca and the neurologist Carl Wernicke in the nineteenth century. Broca found that damage to an area of the left frontal lobe impaired the production of speech, and Wernicke reported that damage to the posterior superior left temporal lobe resulted in a complementary disorder. Patients with damage to "Wernicke's area" suffered from receptive aphasia: difficulty comprehending language, but with a retained ability to produce relatively fluent, though meaningless, speech. Wernicke's area is located just behind the auditory cortex, leading Wernicke to suggest that this area of the brain is responsible for decoding the meaning of perceived speech sounds. Broca's area, which is closer to the motor cortex, was presumed to be responsible for going the other direction: converting ideas into spoken words. Thus, an intact Broca's area along with a damaged Wernicke's area results in fluent production but attenuated perception, while an intact Wernicke's area with a damaged Broca's area leads to unimpaired perception but failed production. There is also a connection between Broca's and Wernicke's areas, called the arcuate fasciculus, and a severance of this connection leads to a third condition, conduction aphasia, marked by a retained ability to comprehend language and produce spontaneous speech, but an inability to repeat sentences that the patient has heard. This type of aphasia was actually predicted by Wernicke before the arcuate fasciculus was discovered.
It also is very difficult to untangle speech perception from perception of other types of complex sounds. For example, we might try to distinguish processing of speech by comparing it to nonspeech that has similar complexity without being speech. This turns out to be very hard to do. First, because languages use such a wide variety of sounds, it is nearly impossible to construct a complex sound that does not have acoustic features in common with any of the 850 different speech sounds used by languages. Second, as we learned earlier, listeners can understand speech that has been severely degraded by filtering or other methods, and many complex "nonspeech" sounds can be heard as speech if listeners try hard enough. All of this makes it hard to isolate what is unique to speech perception.
I found the little blurb about frequency spectrum graphs of static sounds to be the least interesting. This kind of graph or display is called a spectrogram. A spectrogram is a pattern of sound analysis that provides a three-dimensional display, plotting time on the horizontal axis, frequency on the vertical axis, and intensity on a color or gray scale.
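The three axes of a spectrogram (time, frequency, intensity) can be seen directly in code. This is my own sketch, not from the textbook, and it assumes NumPy and SciPy are available; the rising tone is a made-up stand-in for a speech sound.

```python
# Sketch of computing a spectrogram: time on one axis, frequency on
# the other, intensity as the value at each (frequency, time) cell.
import numpy as np
from scipy.signal import spectrogram

fs = 8000                      # sampling rate in Hz (assumed)
t = np.arange(0, 1.0, 1 / fs)  # one second of signal

# A tone sweeping from 300 Hz up to 1300 Hz, loosely like a formant
# transition in speech:
signal = np.sin(2 * np.pi * (300 * t + 500 * t ** 2))

freqs, times, intensity = spectrogram(signal, fs=fs)

# intensity[i, j] is the power at frequency freqs[i] and time times[j];
# plotting it with intensity mapped to gray levels gives the familiar
# spectrogram picture.
print(intensity.shape, freqs.max())
```

The display described in the chapter is just this `intensity` matrix rendered as an image, with darker (or hotter) cells marking where acoustic energy is concentrated.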
The three things I read about in chapter 11 that I think will be the most useful for me in understanding sensation and perception are pitch, speech production, and speech in the brain. Pitch is the psychological aspect of sound related mainly to the fundamental frequency. Fundamental is the key word that caught my eye: it is the basis of hearing and speaking, or what you start with. We soon learn to produce speech and other language, but it all starts in the brain.
Chapter 9 is about hearing, and without the ability to hear you would not be able to form speech. Without frequency, which is the number of times per second that a pattern of pressure change repeats, and amplitude, which is the magnitude of displacement of a sound pressure wave, we would not hear at all, which would leave us with no speech.
TERMS: anterior region, ventral region, superior temporal cortex, cortical functions, Pierre Paul Broca, Carl Wernicke, posterior superior left temporal lobe, receptive aphasia, complementary disorder, auditory cortex, attenuated perception, unimpaired perception, arcuate fasciculus, conduction aphasia, spectrogram, pitch, tone, speech
Chapter 11 talks about the relationship between music and speech perception. Sounds from musical instruments and human vocal tracts obey the same laws of physical acoustics as all other sounds; spoken words and musical notes are simply complex sounds. Music and speech are created with perceivers in mind. Both serve to communicate, and both can convey emotion and deeper meanings. The job of a song, for example, is to move the listener. To me this chapter was a very interesting one. It made me realize how important music is to culture and to personal cultural identity. Listening to music can affect people's moods and emotions, which is so true. Because of this, some psychologists use music therapy on people, which I found to be very interesting. I also thought the section in the book that talked about how, when people listen to music that they do not like, their serotonin rises and affects the way they feel was very interesting. Music and the way it is learned is very similar to speech and the way it is learned.
Next, I want to talk about speech. The book points out the fact that most people who listen to speech also produce speech. It is important to know about both speech production and speech perception. Humans are capable of making many different sounds, just as musicians can. Sounds are made in the vocal tract, which is the airway above the larynx used for the production of speech; it includes the oral tract and nasal tract. Next, the book talks about the production of speech, which has three basic components: respiration (lungs), phonation (vocal cords), and articulation (vocal tract). The rate at which vocal folds vibrate depends on their stiffness and mass. The book uses the analogy of guitar strings. When you tune a guitar string, the more tension (the tighter it is), the stiffer it will be. This increases the rate of vibration, which creates a sound with a higher pitch. The sound of a guitar string also depends on its thickness and mass. This ties back to speech in that children, with smaller vocal cords, have higher voices, just as thinner guitar strings make higher-pitched sounds. The reason men have deeper voices than women is that men's vocal cords are thicker than women's. Articulation is the act or manner of producing a speech sound using the vocal tract. The spectrum of sound coming from the vocal folds is a harmonic spectrum. I found the spectrogram to be very interesting. The spectrogram is a pattern of sound analysis that provides a three-dimensional display, plotting time on the horizontal axis, frequency on the vertical axis, and intensity on a color or gray scale.
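The guitar-string analogy can be made quantitative with the ideal-string formula f = (1/2L)·sqrt(T/mu), where L is length, T is tension, and mu is mass per unit length. This formula is standard string physics rather than something the textbook gives, and the numbers below are purely illustrative.

```python
# Hedged sketch of the guitar-string analogy for vocal-fold pitch.
# The length, tension, and mass values are made up for illustration.
from math import sqrt

def string_fundamental(length_m, tension_n, mass_per_m):
    """Fundamental frequency of an ideal string: (1/2L) * sqrt(T/mu)."""
    return (1 / (2 * length_m)) * sqrt(tension_n / mass_per_m)

base = string_fundamental(0.65, 60.0, 0.005)

# Tightening the string (more tension) raises the pitch:
assert string_fundamental(0.65, 120.0, 0.005) > base

# A thicker, more massive string lowers the pitch, analogous to why
# larger or thicker vocal folds produce lower voices:
assert string_fundamental(0.65, 60.0, 0.010) < base
```

Note the square root: quadrupling the tension only doubles the frequency, which is why both tension and mass matter so much to the final pitch.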
Next, the chapter talks about speech perception. When talking about speech perception you must talk about coarticulation. Coarticulation is the phenomenon in speech whereby attributes of successive speech units overlap in articulatory or acoustic patterns. Listeners discriminate speech sounds only as well as they can label them. One of the ways infants learn words is to use their experience with the co-occurrence of speech sounds.