Topical Blog Week 11 (Due Wednesday)


What we would like you to do is to find a topic from this week's chapter that you were interested in and search the internet for material on that topic. You might, for example, find people who are doing research on the topic, you might find web pages that discuss the topic, you might find a video clip that demonstrates something related to the topic, etc. What you find and use is pretty much up to you at this point. Please be sure to use at least 3 quality resources. If you use videos, please limit it to one video.

Once you have completed your search and explorations we would like you to:
1a) State what your topic is.
1b) Discuss how the topic relates to the chapter.
1c) Discuss why you are interested in it.

2) Next, we would like you to take the information you read or viewed related to your topic, integrate/synthesize it, and then write about the topic in a knowledgeable manner. By integrating/synthesizing we mean taking what you read/experienced from the internet search, organizing the information into the main themes, issues, info, examples, etc. about your topic, and then writing about the topic in your own words using the information you have about the topic.

3) List the terms you used from the text and from your reference websites.

4) At the end of your post, please include working URLs for the three websites. For each URL you have listed indicate why you chose the site and the extent to which it contributed to your post.


18 Comments

The topic I found really interesting was auditory localization cues, specifically interaural time difference (ITD) and interaural level difference (ILD). When I read the text I thought these topics were fascinating but they are so complex that reading alone was not enough to get a really good picture of what this all entailed. The sites I found were helpful with video and interactive images as well as some additional information.

Auditory localization cues are what aid both humans and animals in knowing which direction a sound comes from. Our brains use very subtle differences in timing and intensity that allow us to accurately determine the location of a sound.

As we learned in Chapters 9 & 10, sound is the product of vibrations traveling through a medium like air or water. The sound waves travel and bounce off the pinna and concha of the outer part of the ear and enter the ear canal. They then vibrate the eardrum, which causes the three bones in the middle ear that we talked about in class last week to vibrate, passing the vibration on to the cochlea, where the hair cells convert it into neural signals that travel through the cochlear nerve into the brain.
It takes well under a thousandth of a second for sound to travel from one ear to the other, so the ability to localize as well as we do is pretty remarkable.
Interaural time difference deals with the ability to locate a sound based on the difference in the sound's arrival time at the two ears. Location is described by the sound's angle in relation to the head, or azimuth: zero degrees is straight ahead, 90 degrees is directly to the right, and 180 degrees is directly behind. In the video I watched, the demonstrator circled a sound device around a "head" equipped with sensors to measure the sound. The printout showed how the two ears had opposite peaks and valleys based on the location of the sound device, and when the device passed by the front and back, those waves crossed.
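To get a feel for how small these timing cues are, here is a minimal Python sketch (my own illustration, not from the text or the video) that estimates ITD from azimuth using the classic spherical-head (Woodworth) approximation; the head radius and speed of sound are just typical assumed values.

```python
import numpy as np

HEAD_RADIUS_M = 0.0875    # assumed average adult head radius, meters
SPEED_OF_SOUND = 343.0    # assumed speed of sound in air, meters per second

def itd_seconds(azimuth_deg):
    """Approximate ITD for a source at the given azimuth (0 = straight ahead),
    using Woodworth's spherical-head formula: ITD ~ (r / c) * (theta + sin(theta))."""
    theta = np.radians(azimuth_deg)
    return (HEAD_RADIUS_M / SPEED_OF_SOUND) * (theta + np.sin(theta))

for az in (0, 30, 60, 90):
    print(f"azimuth {az:3d} deg -> ITD ~ {itd_seconds(az) * 1e6:.0f} microseconds")
```

Even for a sound directly to one side (90 degrees) the difference comes out to only about 650 microseconds, which is why the timing differences the brain works with are so remarkably small.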

Interaural level difference focuses on the intensity of the sound. The text tells us that ILD is strongest at 90 and -90 degrees, basically when the sound is directly to one side. The ear that hears the sound the loudest gives us an idea of where the sound originates. ILD is similar to ITD; the difference is that lower frequencies are not blocked by the head as much as higher frequencies are. This means that ILD works best for higher frequencies in determining the location of a sound. This is why it can be more difficult to determine which direction a car with loud bass is coming from, but much easier when hearing a screaming child.

The really interesting part of ILD and ITD I found was in a YouTube video. In the video, half a thousandth of a second of silence was inserted into one side of an audio track playing music. The result was that the sound seemed as though it were only in the left ear, even though after that very small delay the sound was again in stereo. The most interesting part was that even when the right side was played at a greater volume but had the 0.0005-second delay, the left earphone still seemed to be where the sound came from. I listened to the clips several times. At one point I removed the earphone from my left ear and put it back, but the difference was still there. That very small amount of time made a very noticeable difference.
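If you want to try the same demonstration yourself, here is a minimal Python sketch (my own, not taken from the video) that builds a stereo tone with roughly half a millisecond of extra silence at the start of the right channel; it assumes NumPy and SciPy are installed, and a plain 440 Hz tone stands in for the music track.

```python
import numpy as np
from scipy.io import wavfile  # used only to write a WAV file you can listen to

SAMPLE_RATE = 44100
DELAY_S = 0.0005      # half a thousandth of a second, as in the video
DURATION_S = 2.0

# A simple 440 Hz tone stands in for the music in the video.
t = np.arange(int(SAMPLE_RATE * DURATION_S)) / SAMPLE_RATE
tone = 0.3 * np.sin(2 * np.pi * 440.0 * t)

# Delay the right channel by padding it with ~0.5 ms of silence.
delay_samples = int(round(DELAY_S * SAMPLE_RATE))
left = tone
right = np.concatenate([np.zeros(delay_samples), tone])[:len(tone)]

stereo = np.stack([left, right], axis=1).astype(np.float32)
wavfile.write("itd_demo.wav", SAMPLE_RATE, stereo)
# Over headphones the tone should seem to come from the left side,
# even though both channels are equally loud.
```

Swapping which channel gets the delay should pull the sound to the other side, which is the same effect described above.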

Animals also use auditory localization cues, and many are more adept at it than humans. This skill has obvious evolutionary survival value: animals that lack it are more likely to fall victim to predators, and the same skill is used by animals hunting for prey. In watching our own pets it is obvious that this skill is very finely tuned. Our dogs will go directly to the window closest to where a person is making a noise. If you knock on our front door they will go to that door; they don't wander between the three doors guessing where the knocker is.
Although this seems so obvious, it is really fascinating how these cues work. The timing differences are minuscule, yet the effect is very pronounced. I tested this with my husband several times until he got irritated and asked me to stop; each time he knew exactly where the sound came from. I also had him listen to the clips, and he too took an earphone out because he said it almost seemed like the side with the delay had quit working. I really think that for this subject the various videos and interactive images made things a lot clearer than simply reading the text, which is itself another interesting aspect of perception: how different kinds of input are processed and how they aid one another in creating clarity.

Terms: auditory localization cues, ITD, ILD, vibrations, pinna, concha, eardrum, cochlea, perception, clarity


https://www.youtube.com/watch?v=CuYNFv2Oc08
This is a video that showed how timing affects sound and where a sound seems to be coming from.
http://www.cogsci.ucsd.edu/~sereno/201/readings/06.07-OwlLowFreq.pdf
This is an article about how the sound waves work in animals
http://www.urmc.rochester.edu/labs/davis-lab/projects/auditory_processing_of_sound_localization_cues
This site did not have a lot of information, but it had a great interactive visual aid.
https://www.youtube.com/watch?v=p9YaPXJeZ-8
This is a video about sound localization.

1a) State what your topic is.
My topic is Interaural Time Difference (ITD), which is the difference in the time at which a sound arrives at one ear versus the other ear.

1b) Discuss how the topic relates to the chapter.
My topic relates to this chapter in many ways. One way is that ITD is measured for sound sources at varying azimuths relative to the listener, from directly in front to off to one side or far away. In other words, when people talk to you from close by they don't have to yell or scream for you to hear them, but if they are far away or off to one side, it is harder to hear them and you will probably hear them better with the ear that is closer to their direction; this is where ITD comes in.

1c) Discuss why you are interested in it.
As I stated in a different blog, I am interested in this section because I want to understand it more. I want to know how some people with a healthy brain, good sound waves, and everything else can be deaf in one ear and hear fine in the other. I also want to know how much variation there is in the time it takes a sound to reach one ear versus the other, and what could possibly cause one ear to go deaf before the other when you use both ears equally. What other ways are there to estimate the time it takes a sound to reach each ear? And if the direction of the sound is centered but it still reaches one ear before the other, how can that be explained?

2) Findings: The findings on this topic aren't that helpful, but they aren't bad either. I think the book itself did well at explaining this in general and in a way that can be understood without confusion, and it provided many good examples, like azimuth, which basically tells us that the angle a sound comes from affects which ear it reaches first. In my findings there is a video explaining how a sound can go in one ear and right out the other; this gets more into being disinterested in the sound or the conversation you're currently engaged in. It also stated that we hear many things simply when we choose to be interested in those particular things: it's easier for our brain to respond to something we're interested in, which gives the brain a signal that helps our mind take the sounds in as quickly as it can. Another interesting thing I found out about this topic is that when it comes to one ear being deaf or having less hearing than the other, if you're right-handed, your right ear is likely to be the one that goes deaf first, and vice versa for left-handed people. This is said to happen due to vibrations we get from outside sources; those vibrations travel all the way to our auditory system, where they affect only that one part of our hearing. Examples include being on the phone for too long and holding your cellular device closely to your jaw, which carries the signal to your ear and on to your auditory system.
3) Terms: Interaural Time Difference (ITD), ILD, Azimuth, Auditory System, Hearing Aids, Visual Hearing (sign language), Eardrum, Sound Localization.

4) Websites:
This source does a great job of explaining what ITD is, and it has visuals in it.
https://www.youtube.com/watch?v=p9YaPXJeZ-8
This is also a sort of visual explanation, but it relies more on labeled diagrams than on video.
http://acousticslab.org/psychoacoustics/PMFiles/Module07a.htm
This gives a broad explanation and an understanding of what ITD is, with examples.
http://en.wikipedia.org/wiki/Interaural_time_difference

The topic I chose to do more research on was interaural level difference (ILD). This topic was talked about in Chapter 10 in relation to hearing in the environment. I'm interested in it because I never knew that the head partially blocks the sound pressure from reaching the opposite ear, so I wanted to expand my learning on this topic.


Interaural level difference is the difference in level intensity between a sound arriving at one ear versus the other. Sounds are more intense at the ear closer to the sound source because the head partially blocks the sound pressure wave from reaching the opposite ear. The properties of the ILD relevant for auditory localization are similar to those of the ITD. Sounds are more intense at the ear that is closer to the source, and less intense at the ear farther away from the source. The ILD is largest at 90 and -90 degrees, and it is nonexistent at 0 degrees (directly in front) and 180 degrees (directly behind).


Between these two extremes, the ILD generally correlates with the angle of the sound source, but because of the irregular shape of the head, the correlation is not quite as precise as it is with ITDs. Although the general relationship between ILD and sound source angle is almost identical to the relationship between ITD and angle, there is an important difference between the two cues: the head blocks high-frequency sounds much more effectively than it does low-frequency sounds. This is because the long wavelengths of low-frequency sounds "bend around" the head in much the same way that a large ocean wave crashes over a piling near the shore.
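The level difference itself is usually expressed in decibels. Here is a minimal Python sketch (my own illustration; the amplitudes are made up) of how an ILD could be computed from the signals at the two ears as the ratio of their RMS levels.

```python
import numpy as np

def ild_db(near_ear_signal, far_ear_signal):
    """Interaural level difference in dB: 20*log10 of the RMS ratio between the ears."""
    rms_near = np.sqrt(np.mean(np.square(near_ear_signal)))
    rms_far = np.sqrt(np.mean(np.square(far_ear_signal)))
    return 20.0 * np.log10(rms_near / rms_far)

# Toy example: the far ear gets the same 4 kHz tone at half the amplitude,
# roughly the kind of attenuation head shadow produces for high frequencies.
t = np.linspace(0, 0.1, 4410, endpoint=False)
near = 1.0 * np.sin(2 * np.pi * 4000 * t)
far = 0.5 * np.sin(2 * np.pi * 4000 * t)
print(f"ILD ~ {ild_db(near, far):.1f} dB")   # about 6 dB louder at the near ear
```

Halving the amplitude at the far ear gives roughly a 6 dB ILD; for a low-frequency sound that bends around the head, the ratio, and therefore the ILD, would be much closer to 0 dB.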


Interaural level differences provide salient cues for localizing high frequency sounds in space, and populations of neurons that are sensitive to ILDs are found at almost every synaptic level from brain stem to cortex. These cells get excited by stimulation of one ear and predominantly inhibited by stimulation of the other ear, such that the magnitude of their response is determined in large part by the intensities at both ears. In many cases ILD sensitivity is influenced by overall intensity, which challenges the idea of unambiguous ILD coding.


There is a theory called the duplex theory that helps explain what interaural level difference is. This theory is Rayleigh's explanation for the ability of humans to localize sounds by the time differences between the sounds reaching the two ears (ITDs) and by differences in the sound level entering the ears (ILDs). It states that ITDs are used to localize low-frequency sounds, while ILDs are used in the localization of high-frequency sound inputs. The frequency ranges for which the auditory system can use ITDs and ILDs significantly overlap, and most natural sounds have both high- and low-frequency components, so in most cases the auditory system has to combine information from both ITDs and ILDs to judge the location of a sound source. A consequence of this duplex system is that it is also possible to generate stimuli on headphones where ITDs pointing to the left are offset by ILDs pointing to the right, so the sound is perceived as coming from the midline. A limitation of the duplex theory is that it doesn't completely explain directional hearing: no explanation is given for the ability to distinguish between a sound source directly in front and one directly behind. The theory also only relates to localizing sounds in the horizontal plane around the head, and doesn't take into account the role of the pinna in localization.


Studies that have looked into hearing loss and interaural time differences found a trend toward poor localization and lateralization in people with unilateral or asymmetrical cochlear damage. This is because of the difference in performance between the two ears.


A study that I looked at examined whether LSO neurons can signal small changes in interaural level differences of pure tones based on discharge rate, consistent with psychophysical performance in the discrimination of ILDs. Neural thresholds for ILD discrimination were determined from the discharge rates and associated response variability of single units in response to 300 ms tones in the LSO of barbiturate-anesthetized cats, using detection theory. Compared with psychophysical data, the best-threshold ILDs of single LSO neurons were comparable with or better than behavior over the full range of frequencies. This means that the LSO does play a role in the extraction of ILD.

Terms: Interaural level difference, level intensity, sound, ear, intense, pressure, wave, auditory, localization, nonexistent, extremes, correlation, angle, high-frequency, low-frequency, long wavelengths, brain stem, detection, neurons, sensitive, psychophysical performance, excitatory, inhibitory inputs, left cochlea, right cochlea, contralateral ear, medial nucleus, optics, energy, motion parallax, Duplex Theory, stem, cortex.

URL: http://jn.physiology.org/content/92/1/289


URL: http://en.wikipedia.org/wiki/Interaural_time_difference


URL: http://www.jneurosci.org/content/28/19/4848.full


The reason I chose these websites is because they all went into great detail about interaural level difference. I was still a little confused on the full idea of this concept but I believe these websites helped explain a lot. I also found some good research that helped compare it to other ideas.

1a) This week for my topical blog I decided to look into disorders/disabilities that dealt with the location of sound. I decided to further look into Auditory Processing Disorder.
1b) This topic relates back to the book's discussion of complex sounds. The book breaks complex sounds down into harmonics of the fundamental frequency, which is the lowest frequency component of a complex periodic sound. It also looks at timbre, the psychological sensation by which a listener can judge that two sounds with the same loudness and pitch are dissimilar, and at attack (the onset of a sound) and decay (the offset of a sound). All of these are very important aspects of sound, but the book didn't really mention any disorders connected to these functions of our hearing.
1c) Disorders are something I often find myself looking further into, and since there are so many different disorders out there, I decided to search Google for disorders/disabilities that deal with the location of sound. I found this particularly interesting because I know a lot about hearing loss, but I had never really stopped to think about the fact that there are other disabilities that involve the ear but may not cause a person to lose their hearing completely. In fact, they may not cause any hearing loss at all; the person may simply receive sounds differently from what one might call a "normal" ear. I find this subject interesting because knowing the location of a sound and where it is coming from can be fairly important for hearing many of the safety signals in our daily life. I also think it is important for those of us in the psychology field to know more about how others can be affected by different disabilities, especially those of us who would like to go into clinical or mental health counseling, or even less well-known areas like deaf rehabilitation mental health counseling. If we are going to empathize with our patients, then I feel we should have knowledge about the many different disorders out there and how a person has to go about their life making adjustments to understand simple things, like a safety sound that the "normal" ear can hear.

2) Auditory processing disorder (APD), also known as central auditory processing disorder (CAPD), is an umbrella term for a variety of disorders that affect the way the brain processes auditory information. APD is the reduced or impaired ability to recognize or comprehend complex sounds. It is a complex problem that affects about 5% to 7% of school-aged children, and it is diagnosed twice as often in boys as in girls. APD can affect not only school-aged children but also adults, although the prevalence in adults is currently unknown. APD is not a problem with hearing per se; the problem lies in the hearing process. The American Academy of Audiology states that APD is diagnosed by difficulties in one or more auditory processes known to reflect the function of the central auditory nervous system. APD is often confused with other disorders that can affect a person's ability to attend, understand, and remember, so it is important to know that APD is an auditory deficit that is not the result of other, higher-order cognitive or language disorders. There are many different disorders that can affect a person's ability to understand auditory information, like ADHD. Because of these other disorders, it is not correct to apply the label APD to these individuals even if they have many of the same symptoms and behaviors as those who are diagnosed with APD.

Individuals with APD often have no evidence of neurological disease, and the diagnosis is made on the basis of performance on behavioral auditory tests. In APD there is a mismatch between peripheral hearing ability and the ability to interpret or decipher sounds. According to McFarland, APD should be defined as a modality-specific perceptual dysfunction that is not due to peripheral hearing loss. APD can only be diagnosed by an audiologist, even though a multidisciplinary team approach may be used to fully assess the cluster of problems the child or adult exhibits. Within this approach many different people help assess the situation: a teacher or an educational diagnostician may give input on the individual's academic difficulties, a psychologist can evaluate cognitive functioning in many different areas, and a speech-language pathologist may look further into the individual's written and oral language and speech capabilities. Even though all of these different people are consulted for information, they are NOT the ones in charge of making the diagnosis; it has to be an audiologist who diagnoses APD. The audiologist will administer a series of tests in a sound-treated room; these tests require listeners to attend to many different sounds and respond to them, whether by pushing a button or in another way. The audiologist might also administer tests that measure physiologic responses to sound. Audiologists often require these children to be at least seven or eight years old, because the variability in brain function is so marked in younger children that test interpretation may not be possible. Because each individual's auditory processing deficits are different, each individual may perform differently on the battery of tests used to diagnose APD.

There is currently no known definite cause of APD; research suggests it can be congenital (some are born with it) or acquired, and there is evidence suggesting links to recurrent middle ear infections, head injury, or trauma. As stated before, each individual who suffers from APD may have different symptoms compared to those diagnosed before or after them, but some of the main symptoms are: difficulty understanding speech in noisy environments, difficulty following multi-step directions, language or speech delays, needing repetition or clarification, difficulty with verbal math problems, and being easily distracted or unusually bothered by loud or sudden noises; these are just a few among many. With all of this being said, each individual will have to undergo different treatments depending on the person. Treatment of APD generally focuses on three primary areas: changing the learning or communication environment, recruiting higher-order skills, and remediation of the auditory deficit itself.

As many different points were covered above, here are some key ones: APD is an auditory disorder that is not the result of a higher-order, more global deficit such as autism; not all learning, language, and communication deficits are caused by APD; and treatment of APD is highly individualized, as there is no single treatment approach that is appropriate for every individual with APD.


http://en.wikipedia.org/wiki/Auditory_processing_disorder - This webpage broke down the information and gave me the real definitions and background of APD. It also looked into the different characteristics in further detail.

http://www.asha.org/public/hearing/Understanding-Auditory-Processing-Disorders-in-Children/ - This webpage helped me develop a better understanding of children who suffer from APD and how each of them can be affected. It also shows how we are able to help these children.
http://www.theapdfoundation.org/ - This site was the most reliable, as the information came directly from the APD Foundation. It gave me more direct information on the terminology and definitions as they relate to many different individuals. It also put into perspective the information that has come about and the different treatment options, even though the evidence shows that each treatment is individualized.

Terms: Auditory Processing Disorder, complex sounds, fundamental frequency, timbre, attack and decay, Central Auditory Processing Disorder, peripheral hearing ability, peripheral hearing loss, multidisciplinary team approach.

I chose to research the claim that ear piercings can cause hearing loss. This was briefly discussed in the section on the directional transfer function and the shape of the pinnae. According to the book, piercings can change the shape of the pinnae and potentially cause hearing loss. I chose this topic because I have 7 ear piercings: 4 on one ear and 3 on the other. I have experienced infections and swelling of my ears, and have 3 cartilage piercings and 4 lobe piercings. 3 of my piercings were done at a certified piercing shop while 4 were done at Claire's. Through my piercing journey I have slowly developed more knowledge about what things are appropriate to put in the ear, as well as a growing disdain for ear piercing guns, which I will discuss later.

While investigating this claim I found that it isn't usually the piercing itself that causes the hearing loss; rather, it is infection. Infections can occur with piercings where sanitary conditions were not present or are not maintained. Infections can also occur post-piercing if the piercing isn't cared for, or if further damage is done (tearing of the flesh around the piercing, blunt trauma, etc.). These infections are more likely to occur in piercings that are close to the ear canal, such as tragus or conch piercings. For those of you who aren't familiar with piercing lingo, these piercings lie on the inner ridges of the ear or the flat, bowl-like structure above the earlobe. These areas contain mostly cartilage, which becomes infected more quickly and is more prone to swelling. It's the swelling and build-up of fluid around the infection that has the potential to cause hearing loss. If a piercing is placed too close to the ear canal and isn't well taken care of, it can cause the flesh of the pinnae to swell and reduce the size of the ear canal, and sometimes change the shape of the pinnae altogether (such as the cauliflower ear seen in wrestlers). I have experienced swollen cartilage on my upper ear, but I don't have any of the more internal ear piercings, so I haven't experienced any permanent damage to my ear.

How does one avoid these infections and possible hearing loss? The use of a needle, usually composed of steel, is more beneficial than the use of a piercing gun. This is especially important for piercings that are not on the ear lobe (it is less damaging to use a piercing gun on the lobe because of its fleshy nature). It was explained to me by my piercer that using a piercing gun is like shoving a fist through your body: it damages a larger area around the piercing. A needle is sharp and slides through the skin, as opposed to punching through it. You should also make sure your piercer is sanitary and has certifications to do piercings. I remember doing piercings on my high school friends, and now, knowing that I could have damaged my friends' ears to the point of hearing loss, I wish I hadn't done so, even though they turned out okay. I think the significance of this topic is knowing that whatever you choose to do to your body has the potential to affect the way you are able to perceive things, so you should always be conscious about taking care of your body, especially when you are modifying it. Aftercare is really important too. A lot of inexperienced piercers will tell you to use antibiotic creams or to move the piercing, but this actually irritates the ears and can cause the swelling and infections that you see with some piercings. Again, my piercer compared it to a sword wound: moving the piercing and applying antibiotics would be similar to sticking your hand in the wound and then throwing some acid on it; it doesn't help. Piercings have been around for a long time, but cartilage piercings are becoming more popular than traditional lobe piercings, and as these fads develop we need to be sure we are educated before we go out to pierce our bodies.

Terms Used: pinnae, directional transfer function, ear, hearing loss, piercings

Sources

http://www.ears4u.net/blog/can-ear-piercings-affect-hearing
I chose to use this source because it was written by a health care professional and briefly discussed the history of piercings, as well as the potential damage caused by the new fad of cartilage piercings.
http://www.medic8.com/ear-disorders/hearing-loss/ear-piercing.html
This was a medical site, so it again offered a lot of good medical backing for what happens with ear piercings and how they may cause hearing loss.
http://kidshealth.org/teen/your_body/body_basics/ears.html#
I read this one because it was geared towards teens and I wanted to compare the information to the other medical articles I read. The information seems to be consistent, but medical professionals are constantly telling people to clean piercings with antibiotics, which is NOT helpful.
http://www.betterhealth.vic.gov.au/bhcv2/bhcarticles.nsf/pages/piercing
This article talked more about how to choose a piercer and what to look for; I didn't draw on it as much as the others.

1a. The topic from chapter ten that I decided to do further research on is auditory distance perception in human beings. Generally, this addresses the question of how we perceive the distance of sounds and how humans can determine how far away a sound is. Humans rely on relative intensity as a basis to judge how far away a sound is because we innately know that the further away a sound is, the less intense it is.
b. This topic relates to chapter ten because it demonstrates a connection between the auditory system and the environments that surround us. It assesses the ability for humans to determine how far away a sound is and allows them to respond to this sound appropriately in their current environment as well as to react in a way that increases survival chances. It provides more detail about the physiology of hearing and parts of the brain that are important for hearing.
c. I am interested in this topic because I think it is an important aspect of the auditory system and it is crucial to favorable interactions with the environment as well as aids in survival. I am interested in the evolutionary principles that could be explored as well as what the literature finds to be true for perceiving the distance of a sound.
2. An article by Calcagno et al. explored the relationship between auditory distance perception and visual information. Previous research found that the presence of visual information in a given environment affects auditory perception. Furthermore, it is interesting to explore whether the inverse would also be a significant finding. While some of the research proved to be inconclusive, the general findings suggest that adult humans have better auditory distance perception when they are also given visual cues to associate the sound with. The researchers also found that the participants were better able to recall the intensity of the noise if it was associated with visual information. This research provides interesting feedback about auditory distance perception because it suggests that humans use extraneous environmental cues to determine where a sound is taking place as well as how far away it is occurring. This also has evolutionary implications because it is important for humans to remember the locations of sounds if the sound was unpleasant or had the potential to decrease survival.
A second article by Kopco et al. sought to identify neural pathways associated with perceiving distance in the auditory cortex. Using functional magnetic resonance imaging, the researchers explored activation in the auditory cortex after exposure to sounds of varying intensities and after exposure to sounds portraying different distances. Activation in the posterior superior temporal gyrus, which holds auditory neurons, increased, suggesting that this specific area of the brain is sensitive to sound properties relevant to auditory distance perception. I enjoyed reading this research because it gives great information regarding the location of auditory depth perception in the brain. This information is important for determining the parts of the brain crucial to normal perception, and it also gives information about deficits that may occur if this area of the brain is damaged. I find it interesting to learn about parts of the brain and their functions in our daily lives.
Lastly, an article by Mershon et al. explored the idea that perception of auditory distance could be influenced by manipulating naturally occurring noise in the environment. Researchers consistently found that increasing background noise in the environment decreased auditory distance perception among humans. This research article is interesting because it suggests that our ability to perceive distance is not consistent across all scenarios. The addition of extra noise decreases the ability to accurately assess the distance of a sound, which could have evolutionary implications. If a human is unable to detect how far away a sound is, they could potentially misinterpret the proximity of a predator and decrease their chances of survival. This research provides solid evidence that distance perception is important for the survival of human beings and can be altered by adjusting background noise. Additionally, I believe this research could be interesting if it were compared to automobile accidents. If an individual is playing loud music (background noise) and operating a vehicle, they could be more likely to miss an indicator (honking) of issues ahead. This failure to perceive the distance of the sound (honking) could result in an accident.
I think it would be beneficial for further research to explore evolutionary implications as well as areas of the brain associated with perceiving auditory depth.

Terms: auditory distance perception, visual information, visual cues, intensity, neural pathways, auditory cortex, functional magnetic resonance imaging, posterior superior temporal gyrus, background noise

http://web.a.ebscohost.com.proxy.lib.uni.edu/ehost/detail/detail?vid=6&sid=28ad658c-3e43-43e0-a3ee-8f0b399c597f%40sessionmgr4001&hid=4212&bdata=JnNpdGU9ZWhvc3QtbGl2ZQ%3d%3d#db=psyh&AN=1990-06606-001 I chose to utilize this article because it gave great information about background noise and its effects on auditory distance perception in humans

http://web.a.ebscohost.com.proxy.lib.uni.edu/ehost/detail/detail?vid=3&sid=a6678313-9edc-44b6-8e14-611ac0cf831e%40sessionmgr4001&hid=4212&bdata=JnNpdGU9ZWhvc3QtbGl2ZQ%3d%3d#db=psyh&AN=2012-34108-005 I chose this article because it explored an interesting relationship between visual information and the ability to perceive auditory distance.

http://web.b.ebscohost.com.proxy.lib.uni.edu/ehost/detail/detail?vid=6&sid=b00ad5d8-0553-4fa0-ac5f-ccc6a160cb25%40sessionmgr110&hid=109&bdata=JnNpdGU9ZWhvc3QtbGl2ZQ%3d%3d#db=psyh&AN=2012-18012-002 I chose this article because it explored the area of the brain activated when a human is perceiving auditory distance and I love brains.

1a) My topic for this week is timbre.

1b) It relates to Chapter 10 because it is a concept that was brought up in several paragraphs, it relates to the sensation of hearing, and the chapter gives a brief overview of what it actually is.

1c) I am interested in this concept because I have discussed it in music classes before and would like to know more about how it relates to sensation and perception of the environment. I also would like to know if there are differences in hearing timbre between different groups of people, or if age affects it.

2) Timbre is the psychological sensation by which a listener can judge that two sounds with the same loudness and pitch are dissimilar. Timbre quality is conveyed through harmonics and other high frequencies. In other psychology-related fields it is sometimes referred to as tone color or tone quality, and it is used often in musical settings. Timbre is what tells us the difference between a person's voice singing and an instrument, even when they are at the same loudness and pitch. The sound of a musical instrument may be described with words such as bright, dark, warm, or harsh. There are also colors of noise, such as pink and white, which we discussed in the previous chapter of our book. According to the American Standards Association, timbre depends primarily upon the spectrum of the stimulus, but it also depends upon the waveform, the sound pressure, the frequency location of the spectrum, and the temporal characteristics of the stimulus.

Many commentators have attempted to decompose timbre into component attributes, and they have come up with five main attributes that help show what timbre is and how it is formed. The first is the range between the tonal and noiselike character of a sound; in music this is the range over which ordinary sound or voice turns into a musical note. The second is the spectral envelope. The spectral envelope is a curve in the frequency-amplitude plane, derived from a Fourier magnitude spectrum, and it describes one point in time. The third attribute is the time envelope, also known as ADSR: attack, decay, sustain, release. The attack is the portion of a sound in which amplitude increases, and decay is when it decreases. Sustain is the level during the main part of the sound's duration, and release is the time taken for the level to fall from the sustain level to zero. The fourth attribute is changes in both the spectral envelope and the fundamental frequency. The last is the prefix, also called the onset of a sound, which can be quite dissimilar to the ensuing sustained vibration.
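To make the ADSR idea concrete, here is a minimal Python sketch (my own illustration; the attack, decay, sustain, and release times and levels are just made-up values) that builds a simple linear ADSR amplitude envelope and applies it to a tone.

```python
import numpy as np

def adsr_envelope(attack, decay, sustain_level, sustain_time, release, sample_rate=44100):
    """Build a simple linear ADSR (attack-decay-sustain-release) amplitude envelope."""
    a = np.linspace(0.0, 1.0, int(attack * sample_rate), endpoint=False)           # attack: amplitude rises
    d = np.linspace(1.0, sustain_level, int(decay * sample_rate), endpoint=False)  # decay: falls to the sustain level
    s = np.full(int(sustain_time * sample_rate), sustain_level)                    # sustain: level is held
    r = np.linspace(sustain_level, 0.0, int(release * sample_rate))                # release: fades back to zero
    return np.concatenate([a, d, s, r])

# Apply the envelope to a 440 Hz tone; the parameter values are illustrative only.
sr = 44100
env = adsr_envelope(attack=0.02, decay=0.1, sustain_level=0.7,
                    sustain_time=0.5, release=0.3, sample_rate=sr)
t = np.arange(len(env)) / sr
note = env * np.sin(2 * np.pi * 440.0 * t)
```

A plucked guitar string would have a very short attack and a long decay, while a bowed violin note would have a slower attack and a long sustain, which is part of why the two sound so different even on the same pitch.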

Later research gives a table of subjective experiences and the related physical phenomena, based on the five attributes of timbre. Tonal character, usually pitched, corresponds to periodic sound. Noisy character, with or without some tonal character and including rustle noise, corresponds to noise, including random pulses characterized by the rustle time (the mean interval between pulses). Coloration corresponds to the spectral envelope. The beginning and ending of the timbre correspond to the physical rise and decay time. Coloration glide or formant glide corresponds to a change of the spectral envelope. Micro-intonation corresponds to a small change in frequency. Vibrato, which you hear a lot in the voices of opera singers, corresponds to frequency modulation, and tremolo to amplitude modulation. Attack corresponds to the prefix of the sound, and the final sound makes up its suffix.

As stated previously, harmonics are one way we measure the quality of timbre. The fullness of a sound or note a musical instrument produces is sometimes described as a sum of a number of distinct frequencies. The lowest frequency is called the fundamental frequency, and the pitch it produces is used to name the note, but the fundamental frequency is not always the dominant frequency. The dominant frequency is the frequency that is heard most strongly, and it is always a multiple of the fundamental frequency. When a tuning note is played by a concert band, many different frequencies are heard from the different kinds of instruments even though they are playing the same pitch. Each instrument in the orchestra or concert band produces a different combination of these frequencies, as well as harmonics and overtones. The sound waves of the different frequencies overlap and combine, and the balance of these amplitudes is a major factor in the characteristic sound of each instrument. Often, listeners can identify an instrument even at different pitches and loudness, in different environments, and with different people playing.
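Here is a minimal Python sketch (my own illustration; the harmonic weights are invented, not measurements of real instruments) of how two tones with the same pitch and roughly the same loudness can differ only in their harmonic recipe, and therefore in timbre.

```python
import numpy as np

def harmonic_tone(fundamental_hz, harmonic_amplitudes, duration=1.0, sample_rate=44100):
    """Sum sine waves at integer multiples of the fundamental, one weight per harmonic."""
    t = np.arange(int(duration * sample_rate)) / sample_rate
    tone = np.zeros_like(t)
    for n, amp in enumerate(harmonic_amplitudes, start=1):
        tone += amp * np.sin(2 * np.pi * n * fundamental_hz * t)
    return tone / np.max(np.abs(tone))   # normalize so the overall loudness is comparable

# Same fundamental (220 Hz) and comparable loudness, different harmonic balance:
# a "bright" recipe with strong upper harmonics versus a "mellow" one dominated
# by the fundamental. The weights are made up purely for illustration.
bright = harmonic_tone(220.0, [1.0, 0.8, 0.7, 0.6, 0.5, 0.4])
mellow = harmonic_tone(220.0, [1.0, 0.3, 0.1, 0.05])
```

Played back, both arrays would be heard as the same note, but the one with more energy in the upper harmonics would sound noticeably brighter, which is the kind of difference that lets a listener tell instruments apart.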

Terms: Timbre, frequency, harmonics, attack, decay, resonate, sustain, spectral envelope, fundamental frequency, tone, white noise, coloration, overtones.

file:///C:/Users/Aubri/Downloads/Tervaniemi_timbre1997.pdf This research article goes over different areas of timbre and auditory stimulation that categorizes these sounds.

http://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1002759 This article gives an in-depth explanation of the biological factors that influence timbre, as well as its attributes.

https://www.youtube.com/watch?v=144QVYv__S4 This gives a good visual explanation and presentation of facts about how timbre works.

Timbre is interesting, so that's my topic. It's kind of cool that our auditory system can discern two sounds that have the same loudness and pitch. Timbre, also known as tone quality, is the quality of a musical note, sound, or tone that distinguishes different types of sound production, such as voices and musical instruments. The physical characteristics of sound that determine the perception of timbre include the spectrum and the envelope. Timbre is what makes a particular musical sound different from another, even when they have the same pitch and loudness. For example, a guitar and a piano can play the same note at the same loudness; everything else is the same, but you perceive them differently and are able to differentiate between the two. Timbre is conveyed by harmonics and other high frequencies. It is mainly determined by the harmonic content of a sound and by dynamic characteristics such as vibrato and the attack-decay envelope. Some report that it takes about 60 ms to recognize the timbre of a tone, and that any tone shorter than about 4 ms is perceived as an atonal click.

The ordinary definition of vibrato is periodic change in the pitch of a tone, while tremolo refers to periodic changes in its amplitude or loudness. So vibrato could be called frequency modulation, and tremolo amplitude modulation, of the tone. In the voice, or in the sound of a musical instrument, both are usually present to some extent. Vibrato is considered a desirable characteristic of the human voice if it is not excessive; it can be used for expression, and it adds richness to the voice. If the harmonic content of a sustained sound from a voice or wind instrument is reproduced precisely but without the vibrato, the ear can readily detect the difference in timbre because of the absence of vibrato.
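Since vibrato and tremolo are just frequency modulation and amplitude modulation, they are easy to illustrate. Below is a minimal Python sketch (my own, with illustrative rates and depths) that builds a plain 440 Hz tone with a roughly 6 Hz vibrato and the same tone with a 6 Hz tremolo.

```python
import numpy as np

SR = 44100
t = np.arange(int(SR * 2.0)) / SR
carrier_hz = 440.0
rate_hz = 6.0            # how fast the pitch or loudness wobbles

# Vibrato: periodic change in pitch (frequency modulation), about +/- 5 Hz deep.
fm_depth_hz = 5.0
vibrato = np.sin(2 * np.pi * carrier_hz * t
                 + (fm_depth_hz / rate_hz) * np.sin(2 * np.pi * rate_hz * t))

# Tremolo: periodic change in loudness (amplitude modulation), about 30% deep.
am_depth = 0.3
tremolo = ((1.0 - am_depth * (0.5 + 0.5 * np.sin(2 * np.pi * rate_hz * t)))
           * np.sin(2 * np.pi * carrier_hz * t))
```

Listening to the two arrays side by side, the vibrato version wavers in pitch at a steady loudness, and the tremolo version holds its pitch while pulsing in loudness, which is exactly the distinction drawn above.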

Loudness and pitch are easy to describe because they correspond well to simple acoustic dimensions, which we learned are amplitude and frequency. However, the richness of complex sounds depends on more than simple sensations of loudness and pitch. A piano and a guitar might play the same note at exactly the same loudness, but a person would have no trouble discerning that two different instruments were being played. The perceptual quality that differs between these two musical instruments, as well as between vowel sounds like those in the words hot, heat, and hoot, is referred to as timbre. Differences in timbre between musical instruments or vowel sounds can be estimated fairly well by comparing the overall spectra of the two sounds. That means timbre must involve the relative energy of the spectral components, and the perception of timbre also depends on the context in which a sound is heard.

The way a complex sound begins, called the attack of the sound, and the way it ends, called the sound's decay, are other important qualities. Auditory systems are sensitive to attack and decay characteristics. Audible sounds have a natural attack and decay curve, called the envelope. During the attack, the volume of the sound increases, and during the decay, the volume decreases. When a sound is reversed, the attack becomes the decay and the decay becomes the attack.

Terms:
timbre, auditory system, loudness, pitch, sound quality, harmonics, frequencies, tone, harmonic content, vibrato, attack, decay, envelope, tremolo, frequency modulation, amplitude modulation, sustained sound, acoustic dimensions, amplitude, complex sound, fundamental frequency, spectra, spectral components

Sources:
http://en.wikipedia.org/wiki/Timbre
This link gives a good idea of what timbre is and provides a definition.
http://hyperphysics.phy-astr.gsu.edu/hbase/sound/timbre.html
This link and the one below helped me put the ideas together; I think it's because of the visuals, which really help.
http://www.mat.ucsb.edu/~b.sturm/MAT201A/presentations/Fri/OhnandPark.pdf
same as above

The topic I chose to do further research on was auditory distance perception. This addresses the question of how humans perceive the distance of sounds and how we can determine how far away a sound is. Humans already know that the farther away a sound is, the less intense we hear it, so basically we rely on relative intensity. This topic relates to Chapter 10 because it not only demonstrates a connection between the environment and our auditory system, it also goes into more detail about the physiology of hearing and the parts of the brain that are important for hearing. Auditory distance perception also involves the ability of humans to determine how far away a sound is, which allows us to respond appropriately in the environment and to increase our chances of survival.
I am interested in this topic because I have always been curious as to how this concept plays a role in our life. Our auditory system has such a big job without us really understanding the whole idea and this concept is an important aspect of the auditory system. I also think this is important in sensation and perception because it is crucial to favorable interactions with the environment as well as aids in survival. Evolutionary psychology has always sparked my interest and I believe this concept is related to this category.
I found many articles related to the concept of auditory distance perception, and I found them very interesting. The first article I want to talk about was by Kopco and colleagues, and it sought to identify neural pathways associated with perceiving distance in the auditory cortex. The researchers in this study explored activation in the auditory cortex after exposure to sounds of varying intensities and sounds from different distances. They did this using functional magnetic resonance imaging, which I find very interesting. Activation in the posterior superior temporal gyrus, which holds auditory neurons, increased, suggesting that this particular area of the brain is sensitive to sound properties relevant to auditory distance perception. I found this article interesting because it gave a lot of great information regarding the location of auditory depth perception in the brain. This information is important for determining the parts of the brain crucial to normal perception, and it also gives a large amount of information about deficits that could occur if this area of the brain were damaged. The brain is a very complex structure, so I enjoy reading articles and findings about the brain and the amount of work it does on a daily basis.
Another article that I came across and found really interesting was by Calcagno and colleagues, and it explored the relationship between auditory distance perception and visual information. One could guess that with the advantage of eyesight it is easier to spot where a sound is coming from, but research also found that the existence of visual information in a particular setting can have an effect on auditory perception. The general findings of this article suggest that adults have better auditory distance perception when they also have the benefit of visual cues to associate the sound with. Relatedly, researchers found that participants could more easily recall the intensity of a sound if it was also associated with visual information. This is interesting to me because it basically says that we as humans use extra environmental cues to determine the direction of a sound as well as how far away it is. Going back to my interest in evolutionary psychology, this has evolutionary implications because it is essential for humans to remember the locations of sounds that were displeasing or had a chance of diminishing survival.
Although I did not find a lot of basic information on this concept beyond the chapter and focused more on the articles that I found, I would still enjoy reading more about this in published books because I still have a few unanswered questions.
Terms: Auditory distance perception, sensation, perception, visual information, auditory system, visual system, functional magnetic resonance imaging.
http://web.a.ebscohost.com.proxy.lib.uni.edu/ehost/detail/detail?vid=3&sid=a6678313-9edc-44b6-8e14-611ac0cf831e%40sessionmgr4001&hid=4212&bdata=JnNpdGU9ZWhvc3QtbGl2ZQ%3d%3d#db=psyh&AN=2012-34108-005 I chose this article because it explored an interesting relationship between visual information and the ability to perceive auditory distance as well as answered some of the questions that I had upon first reading about this topic.
http://web.b.ebscohost.com.proxy.lib.uni.edu/ehost/detail/detail?vid=6&sid=b00ad5d8-0553-4fa0-ac5f-ccc6a160cb25%40sessionmgr110&hid=109&bdata=JnNpdGU9ZWhvc3QtbGl2ZQ%3d%3d#db=psyh&AN=2012-18012-002 I chose this article because I’ve always had an interest in brains and how it works and this article continued to build on the knowledge that I was gaining from reading the book and other articles.
http://web.a.ebscohost.com.proxy.lib.uni.edu/ehost/detail/detail?vid=6&sid=28ad658c-3e43-43e0-a3ee-8f0b399c597f%40sessionmgr4001&hid=4212&bdata=JnNpdGU9ZWhvc3QtbGl2ZQ%3d%3d#db=psyh&AN=1990-06606-001 I chose to use this article because it gave great information about the effects on auditory distance perception in humans and built off the ideas from the book that I found interesting.

1a) My topic is timbre.

1b) Timbre relates to the chapter because Chapter 10 discusses how we are able to pinpoint where sounds come from and how we tell them apart. Timbre is one means by which we can tell sounds apart from each other. Aside from that, it was spoken of briefly in the chapter.

1c) I am interested in timbre because the concept is really fascinating. It seems strange that we are able to tell sounds apart from each other despite the possibility that those sounds may have the same pitch and intensity. It goes to show how complicated sounds can be, being that there are so many methods that are needed to tell sounds apart.

2) In the wide world around us, sound is everywhere. Given the abundance of sounds, it is important to be able to tell sounds and their sources apart from each other. Our first methods of perceiving differences in sound include noting the pitch and loudness of a particular sound. If two sounds share these characteristics, timbre is the next characteristic one uses. Timbre is also referred to as sound quality or sound color. Timbre is a term used to describe any characteristic of a sound that is not pitch or loudness. For example, a sound from a cello may be "smooth," while the same note from a tuba could be considered "metallic." Another example may be that an old woman with many years of smoking in her history may sound raspy, while a young woman in good health may have a clearer voice.

Timbre can, for the most part, be broken up into three areas. The first of these areas is harmonic content. The basic building blocks of harmonic content are frequencies. Frequency is defined as the number of sound wave cycles in a single second. When a sound is made, vibrations are created, and these vibrations are responsible for frequency. Every different sound source has its own characteristic frequencies. When a sound source produces a sound, there are actually multiple frequencies present; these multiple frequencies join together to create a harmonic series, or overtones. Every producer of sound creates different frequencies, as well as different combinations of frequencies, and they in turn create their own unique overtones. These variations are part of what gives a sound its timbre.

The next area that makes up timbre is attack and decay. Attack is the initial onset of a sound, where it is most intense, and decay describes the tapering off of a sound. A noise with a sudden, strong attack followed by a fast decay will sound much different from a noise with a weak attack and a lengthy decay. When a guitar string is plucked, a relatively quick attack precedes a long and steady decay. Crashing cymbals together also causes an instant attack with a long decay, but the cymbals' attack is much faster, and this difference is enough to help tell the two instruments apart.

The third area that makes up timbre is vibrato and tremolo. Vibrato is a change in the pitch of a tone in periodic succession; tremolo is the same concept, only with loudness. Vibrato occurs in human voices as well as in some instruments. Vibrato and tremolo add a certain richness to musical sounds in particular, and these qualities act as distinguishing characteristics of some sounds, so they are yet another way to tell different sounds apart.

When all these areas of timbre are combined, telling similar sounds and their sources apart becomes relatively simple. For example, with the plucking of a guitar string and the crashing of cymbals, we first see that the attack and decay are slightly different. Another difference lies in the frequencies of their sound waves: the sound waves produced by cymbals are much higher than the waves created by the guitar. Finally, there is also a difference in vibrato; this characteristic is more subjective than the others, but my ears can detect more vibrato in a guitar sound than in the cymbals. Using these differences, it is safe to say that I am capable of using different aspects of timbre to tell a guitar and cymbals apart.

Terms: Timbre; pitch; loudness; sound; waves; sound color; sound quality; harmonic content; overtones; frequency; harmonic series; attack; decay; vibrato; tremolo; perceive; vibration

http://hyperphysics.phy-astr.gsu.edu/hbase/sound/timbre.html
I chose this source because it gave information on specific parts of what makes up timbre. This allowed me to understand some of the building blocks.

https://www.psychologytoday.com/blog/your-musical-self/201311/music-your-gps-voice-and-the-science-timbre
I chose this source because it served as a good baseline for what timbre is. Reading this let me understand the concept in an easy to read manner.

https://www.youtube.com/watch?v=144QVYv__S4
I really liked this source because it was a visual way to learn about timbre. Since I am a visual learner, this was helpful.

The topic I chose to do further research on was interaural level difference. This relates to the chapter, because the chapter talked about it in some depth. It also pertains to how humans localize sounds, which was one of the main themes for the chapter. This is interesting to me, because I am deaf in one ear so, while I cannot experience the full effect, I am able to turn my head to the direction that the sound is loudest to tell where it is coming from. This is one of the few ways I am able to localize sound, and even this usually does not work very well.

Interaural level difference is the difference in level intensity between a sound arriving at one ear versus the other. Sounds are more intense at the ear closer to the sound source because the head partially blocks the sound pressure wave from reaching the opposite ear. The properties of the ILD relevant for auditory localization are similar to those of the ITD. Sounds are more intense at the ear that is closer to the source, and less intense at the ear farther away from the source. The ILD is largest at 90 and -90 degrees, and it is nonexistent at 0 degrees (directly in front) and 180 degrees (directly behind). Between these two extremes, the ILD generally correlates with the angle of the sound source, but because of the irregular shape of the head, the correlation is not quite as precise at it is with ITDs. Although the general relationship between ILD and sound source angle is almost identical to the relationship between ITD and angle, there is an important difference between the two cues: the head blocks high-frequency much more effectively than it does low-frequency sounds. This is because long wavelengths of low-frequency sounds “bend around” the head in much the same way that a large ocean wave crashes over piling near the shore.
Interaural level differences provide salient cues for localizing high-frequency sounds in space, and populations of neurons that are sensitive to ILDs are found at almost every synaptic level from brain stem to cortex. These cells are excited by stimulation of one ear and predominantly inhibited by stimulation of the other ear, such that the magnitude of their response is determined in large part by the intensities at both ears. In many cases ILD sensitivity is influenced by overall intensity, which challenges the idea of unambiguous ILD coding.

There is a theory called the duplex theory that helps explain what interaural level difference is. This theory is Rayleigh's explanation for the ability of humans to localize sounds by the time differences between the sounds reaching the two ears (ITDs) and by differences in the sound level entering the ears (ILDs). It states that ITDs are used to localize low-frequency sounds, while ILDs are used in the localization of high-frequency sound inputs. The frequency ranges for which the auditory system can use ITDs and ILDs significantly overlap, and most natural sounds have both high- and low-frequency components, so in most cases the auditory system has to combine information from both ITDs and ILDs to judge the location of a sound source. A consequence of this duplex system is that it is also possible to generate stimuli on headphones where ITDs pointing to the left are offset by ILDs pointing to the right, so the sound is perceived as coming from the midline. A limitation of the duplex theory is that it doesn't completely explain directional hearing: no explanation is given for the ability to distinguish between a sound source directly in front and one directly behind. The theory also only relates to localizing sounds in the horizontal plane around the head, and doesn't take into account the role of the pinna in localization.

Studies that have looked into hearing loss and interaural time differences found a trend toward poor localization and lateralization in people with unilateral or asymmetrical cochlear damage. This is because of the difference in performance between the two ears. A study that I looked at examined whether LSO neurons can signal small changes in interaural level differences of pure tones based on discharge rate, consistent with psychophysical performance in the discrimination of ILDs. Neural thresholds for ILD discrimination were determined from the discharge rates and associated response variability of single units in response to 300 ms tones in the LSO of barbiturate-anesthetized cats, using detection theory. Compared with psychophysical data, the best-threshold ILDs of single LSO neurons were comparable with or better than behavior over the full range of frequencies. This means that the LSO does play a role in the extraction of ILD.

Terms: Interaural level difference, level intensity, sound, ear, intense, pressure, wave, auditory, localization, nonexistent, extremes, correlation, angle, high-frequency, low-frequency, long wavelengths, brain stem, detection, neurons, sensitive, psychophysical performance, excitatory, inhibitory inputs, left cochlea, right cochlea, contralateral ear, medial nucleus, optics, energy, motion parallax, Duplex Theory, stem, cortex.

http://jn.physiology.org/content/92/1/289 - I used this source, because it had a lot of specific and in-depth information. It also appeared to be a very credible source.

http://en.wikipedia.org/wiki/Interaural_time_difference - I used this source, because it provided a lot of background information in an easy to read format.

http://www.jneurosci.org/content/28/19/4848.full - I used this source, because it had a lot of specific and in-depth information and appeared to be a very credible source just like the first one.

(1a) My topic is ear abnormalities of the pinna and low-set ears.
(1b) This topic relates to the chapter because it discusses how people respond to different head cues and have differently shaped pinnae.
(1c) I am interested in it because I was born with an ear abnormality that makes my ear look like an elf ear. I tell people I was bitten by a dog, but the real story is that I was just positioned at an awkward angle in my mother's womb.

(2) A range of complications can result from abnormal development or deformities of the ear or pinna, from cosmetic issues to problems with hearing development. Some deformities are present at birth while others are acquired over time. Overall there is a wide variety of ear deformities: protruding ears stick out more than 2 cm from the side of the head; constricted ears are those where the helical rim is folded over, wrinkled, or tight; microtia is an underdeveloped ear; cryptotia is when the ear cartilage framework is buried beneath the skin on the side of the head; anotia is the total absence of the ear itself; and a Stahl's ear is a pointy, elf-like ear shape in which an extra cartilage fold is present. Ear tags are another abnormality; they have a cleft-like appearance and consist of skin and cartilage. Earlobe deformities also come in a variety of shapes, including earlobes with clefts, duplicate earlobes, and earlobes with skin tags.

Pinna abnormalities and low-set ears refer to abnormalities in the shape or position of the outer ear. The pinna forms while a baby is growing in the mother's womb, at the same time as other organs are developing, so certain abnormalities of the ear may signal the possibility of other related problems. The most common are cysts or skin tags. Variations in ear shape are not uncommon and have been related to hereditary influences. Low-set or unusually formed ears do not cause other conditions, but they can be a sign of syndromes such as Down syndrome, Turner syndrome, Beckwith-Wiedemann syndrome, Potter syndrome, Rubinstein-Taybi syndrome, and many more. Currently plastic surgery is the only way to treat these conditions; if the abnormality is severe enough, cosmetic work and surgery may have to be performed.

In general, hearing is quite a complicated acoustic phenomenon! It can't be explained very simply, as the ear is fantastically complicated and many different parts combine to create its functionality. Ears come in different shapes and sizes, and they don't have to be identical; the left can be different from the right, and nobody's ears are exactly the same. Reshaping the ear can generally only be done during childhood, when the cartilage is more malleable. Ears can actually be misshaped by the position you are in while in your mother's womb. Hands, feet, toes, and much more can also differ between the right and left sides of the body. This is normal, and even though ear abnormalities can seem embarrassing, some variation in ear shape is actually quite ordinary.

(3) Pinna

(4) http://www.chop.edu/conditions-diseases/ear-deformities#.VRzXLDt4qZs
This source exhibited information on the different types of ear deformities and abnormal development.

http://www.nytimes.com/health/guides/symptoms/pinna-abnormalities-and-low-set-ears/overview.html
This source provided me with information on the symptoms and causes of ear deformities and low onset ears.

http://community.babycenter.com/post/a21315665/two_different_sized_ears
This site provided me with information regarding abnormal ears, along with the advice given to posters and the explanations of ear abnormalities.

1. I picked the topic of why people who have lower voices are harder to hear. This relates to chapter 10 because it covered timbre, which is part of why lower voices are harder to hear. I am interested in this topic because I wanted to know if it is common to struggle with hearing lower voices, or if I should be worried by the fact that I can’t understand a word Tom Hardy says.
2. If I was going to understand why lower voices are harder to hear, I had to find out how the voice works in the first place. The voice box sits at the top of the trachea, which connects to the lungs. I thought it was interesting that the voice box kind of resembles a stage, with the vocal folds opening (when making noise) and closing (when silent) like old-fashioned curtains. These are the organs used when making sounds. There are three key steps to producing any kind of sound. First, the lungs have to create enough airflow, which puts air pressure on the vocal folds. Then this air pressure vibrates the vocal folds. The vibration of the vocal folds is combined with the work of the larynx (the technical term for the voice box) to control the pitch and tone of the sound being produced. The larynx needs help from articulators to fine-tune the sounds produced; the articulators are parts of the vocal tract: the tongue, palate, cheeks, and lips. When all of these parts work correctly, they produce every sound we make, and working together they control the tone of those sounds. When someone is not deliberately altering their voice with the articulators, the voice others hear is called the modal voice, also known as that person's default voice.
There is no special organ that people with lower voices have or don't have. In fact, I only found a few physiological differences that could impact the timbre of a voice: the size of the larynx itself, how the vocal folds move when making sound, and the shape of the individual body. Males tend to have larger larynxes than their female counterparts, which explains why males are more likely to have lower voices, as larger larynxes are thought to create a lower frequency when speaking. The second difference is how the vocal folds (at the glottis) move when making sound. The vocal folds of men tend to bulge outward, the opposite of the motion of women's vocal folds, which move in and out in a more linear fashion. This distinct difference in motion supports the general rule that men will probably have lower-frequency voices and women are more likely to have higher-frequency voices. There is also the physical shape of the body to consider when talking about voices. The size of a person's chest and neck, along with a longer vocal tract, affects how much vocal resonation a voice has; vocal resonation is how much a voice projects. Male voices tend to be lower and to resonate through a larger frame, which helps them project more.
This connects to the idea of vocal register, defined as the range of pitches (also known as voice types) a person can use. The highest range is called whistle; this tone tends to be warm and not usually shrill. The next lowest is falsetto, which also sounds warm. Below that is the modal register, which I talked about above; I thought it was interesting that it overlaps with falsetto by about one octave. The lowest register is called vocal fry (also known as glottalization), characterized by the voice becoming "creaky." It is worth noting that, while some women's modal voices genuinely fall within this range, most do not, despite a trend of women using vocal fry as their speaking voice. Low-frequency voices can be hard for others to hear correctly when high-frequency sound is interfering, or when the speaker is not speaking clearly, because consonants tend to be under-emphasized while vowels are emphasized more. And when half a sentence is lost in a loud crowd, it can be hard to hear a voice that would normally be easily understood (think Tom Hardy in Lawless as opposed to Benedict Cumberbatch as Sherlock).
But what's the big deal about low voices? I found an interesting term to describe a common attribute of men with lower voices: Barry White Syndrome. It's not a bad thing to have this "syndrome" as a man, because it means women are more likely to find you attractive. That's not hard to imagine, as many of Hollywood's leading men are remembered partly because of their low voices (though their traditional good looks helped): Cary Grant, Gregory Peck, the Hemsworth brothers, Jensen Ackles, the actors I've listed above, and I'm sure some women could make a case for Morgan Freeman to be on this list too. This shows up in a number of ways. In the Western world, men with deeper voices report more one-night stands and apparently have more children survive into adulthood. Evolutionary theory would explain this by saying that women find men with deeper-pitched voices more physically attractive because those men tend to be physically larger due to more testosterone, which would make them better protectors, and were once thought to have stronger sperm (though the sperm claim was disproven). A deep voice is a secondary sex trait that is often construed as a sign of maturity, which could signal that a man has his life together and is more likely to provide a stable environment for a child. I have my doubts about that last one still being applicable today, but I can see a cavewoman having some legitimate worries about it.
3. Terms: hear, timbre, voice box, trachea, lungs, vocal folds, sound, airflow, air pressure, vibration, larynx, pitch, tone, articulators, vocal tract, tongue, palate, cheek, lips, modal voice, default voice, physiological, lower frequency, glottis, linear fashion, higher frequency voices, vocal resonation, projection, vocal register, range, pitches, warm, falsetto, whistle, octave, vocal fry, glottalization, consonants, vowels, Barry White Syndrome, evolutionary theory, secondary sex trait, sperm
4. http://en.wikipedia.org/wiki/Human_voice I picked this site because it clearly explained how the voice is produced and altered. I used it for the steps of sound production and the bit about vocal resonation.
http://en.wikipedia.org/wiki/Vocal_register I picked this site because it clearly explained the differences in vocal registers.
http://www.ncbi.nlm.nih.gov/pubmed/2708686 I picked this article because it clearly explained the difference in how male and female voices sound and why they sound that way. I used it for the bit on the difference in how the vocal folds move.
http://en.wikipedia.org/wiki/Glottis I picked this article because it explained the difference in how the vocal folds move; I used it for that detail.
https://www.youtube.com/watch?v=w7BBNEwyOjw I picked this video because it explained a lot about vocal fry. I used it for the vocal fry info.
http://www.quora.com/Why-are-people-with-deep-albeit-loud-voices-hard-to-hear-when-there-is-background-noise-for-example-in-a-nightclub I picked this article because it talked about the reasons low frequency voices are hard to hear.
http://blogs.scientificamerican.com/anthropology-in-practice/2012/01/03/the-barry-white-syndome-why-are-deep-voices-attractive/. I used this article because it explained some possible reasons for the Barry White Syndrome. I used it for info on the Barry White syndrome.

My topic for chapter 10 is timbre. Timbre relates to the chapter because chapter 10 discusses how we are able to pinpoint where sounds come from and how we tell them apart, which is something most of us probably haven't put a lot of thought into. Timbre is one way we can tell sounds apart from each other. I was interested in doing further research on timbre because the concept is really cool and can easily be related to everyday experiences. It seems strange that we are able to tell sounds apart even when those sounds have the same pitch and intensity (which is the definition of timbre).

Timbre can be related to music, and this is where I found the most understandable definition of it. Sounds may be generally characterized by pitch, loudness, and quality. Sound "quality" or "timbre" describes those characteristics of sound which allow the ear to distinguish sounds that have the same pitch and loudness. Timbre is mainly determined by the harmonic content of a sound and its dynamic characteristics. When we listen to music, timbre lets us tell songs apart even if they have very similar pitch and loudness. The idea of timbre is interesting and caught my attention, but the next big question is how exactly it works: what allows our auditory system to notice and determine these differences? In my research I found that, for musical instruments (the most talked-about example), timbre arises because each note from an instrument is a complex wave containing more than one frequency. For instruments that produce notes with a clear and specific pitch, the frequencies involved are part of a harmonic series, but for other instruments the sound wave may have an even greater variety of frequencies.
We hear each mixture of frequencies not as separate sounds, but as the color of the sound (the blend that makes it distinctive). Small differences in the balance of the frequencies make it seem like a different sound, allowing us to tell pieces of music apart. On top of that, the human ear and brain are capable of noticing very small variations in timbre (our ears are amazing, and this is proof).
A listener can hear not only the difference between different musical instruments, but also the difference between two instruments of the same type.
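To make the idea of "same pitch and loudness, different timbre" concrete, here is a small sketch of my own (an illustration under assumed parameters, not something from my sources): two tones share a 220 Hz fundamental and a comparable level, but mix their harmonics differently.

```python
import numpy as np

SAMPLE_RATE = 44100
t = np.linspace(0, 1.0, SAMPLE_RATE, endpoint=False)  # one second of samples

def harmonic_tone(f0, harmonic_amplitudes):
    """Sum sine waves at integer multiples of f0 (a harmonic series)."""
    tone = sum(a * np.sin(2 * np.pi * f0 * (k + 1) * t)
               for k, a in enumerate(harmonic_amplitudes))
    return tone / np.max(np.abs(tone))  # normalize so the levels are comparable

mellow_tone = harmonic_tone(220, [1.0, 0.2, 0.05])            # energy mostly in the fundamental
brassy_tone = harmonic_tone(220, [1.0, 0.9, 0.7, 0.5, 0.3])   # strong upper harmonics

# Written to .wav files (for example with scipy.io.wavfile.write) and compared
# by ear, these play the same note at the same volume with a clearly different "color".
```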

The general sound that one would expect of a type of instrument is usually called its timbre; this is also sometimes called the color of the music. I really like this name for timbre because it is a descriptive word that really captures the definition.

This relates to what we are learning in sensation and perception because it is another way our auditory system can decipher, in depth, between different sounds, reminding us what we are capable of. If we couldn't tell the difference between multiple sounds, the sounds we hear wouldn't carry as much meaning for us.

Understanding the origin of the word timbre, where it comes from, and what its meaning fully stands for is interesting to me as well. Obviously the term is most often associated with music. The word comes from French and means "the tone quality or the unique characteristic of a tone"; another term for timbre is tone color. The timbre of a note produced on an instrument or sung by the human voice is determined in part by the size, design, and makeup of the instrument or vocal cords, by what is being vibrated, and by the way the sound is produced. This tells us that many different factors make up timbre and allow us to decipher differences in sound to determine different notes or tones. Another consideration that influences the timbre of an instrument is what it is made of: wood vibrating has a different sound than air vibrating within a brass chamber. Many different things can affect the timbre, and realizing that timbre, and our ability to notice these differences, is what makes music beautiful and unique to us is what makes us love the music we hear. I enjoy music, and this term makes me appreciate all of the processing that goes into the music we enjoy and how unique it truly is.


Sites I used:
http://schoolworkhelper.net/timbre-quality-how-to-describe-it/ I liked this site because it helped explain timbre to me, distinguishing it from similar concepts such as pitch and tone.

http://grammar.yourdictionary.com/style-and-usage/descriptive-rds-for-music.html I liked this site because it took the time to explain how timbre works and how our ears actually hear the different sounds, and it explained the "color" of what we are hearing.

http://www.musicappreciation.com/lecture4.htm This may have been my favorite site because it describes the origin of the word timbre, where its meaning began, and where the word came from.

Terms: Timbre, pitch, loudness, sound, waves, sound color, sound quality, harmonic content, color overtones, frequency, harmonic series, attack, decay, perceive, vibration, sensation, perception.


As we learned in chapter 10, much of our perception depends on our physical anatomy. The size and shape of the pinnae and the upper body give us cues to which direction sound is coming from. Obviously, everybody's pinnae and upper body are different, but we are given endless opportunities to adjust to them. We also know that sound perception is highly adaptive. The primary localization cues are time and intensity differences, which indicate which direction the sound is coming from. Although we rely on these cues to determine the source of a sound, when they change we can quickly adjust our perception to meet the new parameters.
We also learned a little about sound segregation in chapter 10. Auditory stream segregation refers to how we organize complex auditory input and identify the source of each sound. In the segregation process, we are able to identify a collection of frequencies as a harmonic sound and group those frequencies as coming from the same sound event. Most of the sounds we hear in real life, including the human voice, are harmonic sounds, meaning they are combinations of several pure sine waves whose frequencies are integer multiples of the fundamental frequency. This reading reminded me of middle school choir: each note of the score is part of the melody or the harmony (or both), and together the notes sung or played at once create harmonic sound. It turns out that a background in music may aid in sound segregation. Concurrent sound segregation is a specific type of auditory segregation that identifies the sources of sounds occurring at the same time, and both types seem to be negatively impacted by the aging process. However, musicians have a better ability to identify a mistuned harmonic as a separate auditory event, and this advantage remains throughout the lifespan. The details of the research study indicate the advantage is due to practice rather than genetic differences. Musical training is also shown to have a positive impact on other areas involving perception and recognition. Despite these results, it should be noted that musical training is not known to slow the aging process with regard to these effects; in other words, musicians may simply have an enhanced ability to complete a given task compared to non-musicians of the same age group. The interesting phenomenon at work here is that not all of these effects are related to music; they also extend to language and many other areas of cognition. Are there other types of training that might produce the same effect, or perhaps slow the effect of aging on these mental capacities?
It seems that one task where musicians have a particular advantage over non-musicians involves the phenomenon known as harmonic enhancement. Within a melody, participants recognize a mistuned note long after it is presented. When a notch is placed on the enhanced harmonic, participants recognize the mistuned note more easily than when the notch is placed elsewhere in the melody. This phenomenon, known as harmonic enhancement, is presumably due to the attention the notch calls to the enhanced harmonic, making it more easily recognized. More specifically, the notch marks the mistuned component as not being part of the same auditory event, which draws our attention to the enhanced harmonic. When the notch is not placed on the enhanced harmonic, participants have no advantage in recognizing the mistuned note. Because musicians have practice recognizing mistuned notes, they have a better chance of finding them than non-musicians, regardless of where the notch is placed.
Further research might explore the reasons for the lag in participants' recognition of the mistuned note. Current theories involve comparison with the primary harmonic, with the main effect driven by how far after the original note the mistuned one occurs within the melody.
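The mistuned-harmonic stimuli these studies describe can be sketched in a few lines. This is a hedged reconstruction of the general stimulus design; the fundamental frequency, number of harmonics, and amount of mistuning below are my own assumptions, not parameters from the papers.

```python
import numpy as np

SAMPLE_RATE = 44100
t = np.linspace(0, 0.5, int(SAMPLE_RATE * 0.5), endpoint=False)  # half a second

def complex_tone(f0=220.0, n_harmonics=10, mistuned_harmonic=None, mistune_pct=4.0):
    """Equal-amplitude harmonics of f0; optionally shift one off the harmonic series."""
    tone = np.zeros_like(t)
    for k in range(1, n_harmonics + 1):
        freq = k * f0
        if k == mistuned_harmonic:
            freq *= 1 + mistune_pct / 100.0  # push this component out of tune
        tone += np.sin(2 * np.pi * freq * t)
    return tone / n_harmonics

in_tune  = complex_tone()                      # heard as one fused sound
mistuned = complex_tone(mistuned_harmonic=3)   # the shifted harmonic tends to
                                               # "pop out" as a separate event
```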
Terms: pinnae, sound perception, harmonic sound, sine wave, fundamental frequency, auditory stream segregation, concurrent sound segregation, localization cues, time and intensity differences, notch


http://eds.b.ebscohost.com/eds/detail/detail?vid=1&sid=998445ee5-aa5b-4e00-94c8-c6dcc6aeaa7f%40sessionmgr114&hid=108&bdata=JnNpdGU9ZWRzLWxpdmU%3d#db=edselp&AN=S0888327013006365
http://download-v2.springer.com/static/pdf/994/art%253A10.3758%252Fs13414-014-0826-9.pdf?token2=exp=1427949717~acl=%2Fstatic%2Fpdf%2F994%2Fart%25253A10.3758%25252Fs13414-014-0826-9.pdf*~hmac=971207b7b25521fdf9e3cf75af77ff9fa7abb343154f406400349790d71f73ba
http://eds.b.ebscohost.com/eds/pdfviewer/pdfviewer?vid=6&sid=998445e5-aa5b-4e00-94c8-c6dcc6aeaa7f%40sessionmgr114&hid=108

For this week’s Topical Blog assignment, I chose to do more research on source segregation, or auditory scene analysis. Both terms have the same definition, just two different ways to look at it: processing an auditory scene consisting of multiple sound sources into separate sound images. An excellent example the book gave was to imagine you are at a party; you and your friend are conversing, but at the same time your auditory system is picking up conversations from others at the party, music playing in the background, people chewing, etc. For an auditory scene, the situation is greatly complicated by the fact that all the sound waves from all the sound sources in the environment are summed together into a single complex sound wave. Our cochlear hair cells have to differentiate between all the voices and commotion.

The above definition and example are often referred to as the “Cocktail Party Effect.” This “effect” is our ability to “select out” and attend to one conversation even when many are happening simultaneously. It was first documented by Colin Cherry in 1953. Colin Cherry was a British cognitive scientist whose main contributions were in focused auditory attention, specifically the cocktail party problem regarding the capacity to follow one conversation while many other conversations are going on in a noisy room. Cherry used shadowing tasks to study this problem, which involve playing two different auditory messages to a participant's left and right ears and instructing them to attend to only one. The participant must then shadow (repeat aloud) the attended message. Cherry found that subjects couldn’t identify a single phrase from the non-attended ear, couldn’t say for sure whether it was in English, didn’t notice a change to German, and didn’t notice speech being played backwards. He did, however, find that subjects noticed a switch from a male voice to a female voice and vice versa. Overall this study concluded that we can easily use spatial, timing, and spectral cues to separate sound streams, but we cannot attend to multiple streams at the same time.
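A dichotic presentation like the one Cherry used can be sketched as a two-channel audio file with a different signal in each ear. This is a rough reconstruction under my own assumptions (placeholder tones stand in for the two recorded speech messages, and the file name is made up), not Cherry's actual materials.

```python
import numpy as np
from scipy.io import wavfile

SAMPLE_RATE = 44100
t = np.linspace(0, 2.0, SAMPLE_RATE * 2, endpoint=False)  # two seconds

# Placeholder signals standing in for two different spoken messages.
attended_message   = 0.5 * np.sin(2 * np.pi * 300 * t)   # goes to the left ear
unattended_message = 0.5 * np.sin(2 * np.pi * 450 * t)   # goes to the right ear

# Column 0 plays in the left channel, column 1 in the right channel.
stereo = np.column_stack((attended_message, unattended_message)).astype(np.float32)
wavfile.write("dichotic_demo.wav", SAMPLE_RATE, stereo)

# Over headphones, a listener can shadow (repeat aloud) one channel while
# noticing almost nothing about the content of the other channel.
```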

A Japanese research team exposed a number of individuals to test sounds and background noise in one or both ears while monitoring their brain activity. The recordings indicated greater activity in the left half of the brain when discriminating sounds from noise; in other words, the cocktail party effect occurs in the left side of the brain. As of yet, the researchers are unable to determine why the ability of hearing-impaired people to discriminate sounds from noise is diminished, which weakens the cocktail party effect for them.

Revealing how our brains are wired to favor some auditory cues over others may even inspire new approaches toward automating and improving how voice-activated electronic devices filter sounds in order to properly detect verbal commands. How the brain can so effectively focus on a single voice is a problem of interest to companies that make consumer technologies, because of the tremendous future market for all kinds of electronic devices with voice-activated interfaces. While the voice recognition technologies behind interfaces such as Siri have come a long way in the last few years, they are nowhere near as sophisticated as the human speech system.
An average person can walk into a noisy room and have a private conversation with relative ease, as if all the other voices in the room were muted. Speech recognition is something that humans are remarkably good at, but it turns out that machine emulation of this human ability is extremely difficult.

References:
http://en.wikipedia.org/wiki/Colin_Cherry I chose this website because it gave me information on Colin Cherry, the man who first documented the “Cocktail Party Effect.”

http://www.hear-it.org/-The-cocktail-party-effect-How-the-brain-filters-noise I chose this website because of its information on research into the “Cocktail Party Effect.”

https://www.youtube.com/watch?v=kW86cDBZNLo I chose this video because it gave a descriptive overview of the structures of your brain used during attention.

TERMS: source segregation, auditory scene analysis, auditory scene, sound images, sound wave, cochlear hair cells, “Cocktail Party Effect”, Colin Cherry, spatial, timing, spectral cues, sound streams, auditory cues


Interaural Time Differences

This relates back to the chapter because it's about the time difference between the two ears and how we understand sound. Sound is a very complex and interesting thing, something we still don't understand 100%. This topic goes into detail about how sound is heard in our heads. It also covers how different hearing aids and implants are used to help us hear, and how the different sound waves arriving at our two ears are combined so that we can locate sounds accurately without them confusing us too much.


http://en.wikipedia.org/wiki/Interaural_time_difference
https://www.youtube.com/watch?v=CuYNFv2Oc08
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2644391/

Interaural time difference is the difference in arrival time of a sound between the two ears, and it helps us locate sounds. It is the path-length difference that aids in identifying the direction of a sound source. In one study, users of a cochlear implant in one ear and a hearing aid in the other could not lateralize sounds consistently. Sound waves reach the ear closer to the source first, and the human auditory system uses this time difference, along with the level difference, to locate the sound source. Normal-hearing subjects can use the time difference for low-frequency sounds, but this process breaks down at high frequencies.
Terms: interaural time difference, pathlength, cochlear implant, sound waves, low and high frequencies
