What I would like you to do is to find a topic from the chapter you read for Monday that you were interested in and search the internet for material on that topic. You might, for example, find people who are doing research on the topic, you might find web pages that discuss the topic, you might find youtube clips that demonstrate something related to the topic, etc. What you find and use is pretty much up to you at this point. But use at least 3 sources.
Once you have completed your search and explorations, I would like you to say what your topic is, how exactly it fits into the chapter, and why you are interested in it. Next, I would like you to take the information you found related to your topic, integrate/synthesize it, and then write about it. At the end, please include working URLs for the three websites.
Once you are done with your post, make a list of the terms and terminology you used in your post.
Let me know if you have any questions.
For my topical blog this week I decided to further explore timbre. Timbre seemed complex, and I still believe that it is, but I feel that I better understand it now due to my research. Timbre is the psychological phenomenon a person experiences when two sounds with the same pitch and loudness still sound different because they come from different singers or musical instruments. Basically, if Christina Aguilera and another singer sang the exact same note, the two versions would still sound different. In addition, if a flute produced a specific note and a harp created the same note, the two notes would still sound different, and our brain can easily detect this difference. This phenomenon is a term in our chapter for this week and is an important part of the chapter because music is one of the best-known human auditory experiences. Most individuals encounter music on a daily basis, whether for pleasure at home, in the car, or in the grocery store while shopping.
A single musical note is a highly complex structure made up of many vibrations. Timbre conveys the tone color and quality of a musical note. The fundamental frequency of a note is the slowest rate at which the sound vibrates, and it matters for our perceptual experience of sound because the fundamental is typically the loudest component. The other sounds that you hear, which help us distinguish one note from another, are harmonics, overtones, and inharmonics. All of these occur simultaneously when a note is produced. Some sources claim the ear can identify a tone from as few as three harmonics.
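To make that concrete, here is a small Python sketch I put together (my own illustration, not from any of the sources; the 440 Hz fundamental and the harmonic amplitudes are made-up example values) of how a complex note can be built by adding harmonics on top of a fundamental:

    import numpy as np

    sample_rate = 44100                       # samples per second
    t = np.arange(0, 1.0, 1 / sample_rate)    # one second of time points

    f0 = 440.0                          # fundamental frequency (example value)
    amplitudes = [1.0, 0.5, 0.3, 0.2]   # fundamental loudest, harmonics quieter

    # Sum the fundamental (n = 1) and its harmonics (n = 2, 3, 4), which all
    # sound at the same time when a note is produced.
    note = sum(a * np.sin(2 * np.pi * n * f0 * t)
               for n, a in enumerate(amplitudes, start=1))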
Music Interface Technologies (MIT), a high-end audio cable company, has made it its mission to preserve timbre, because bad equipment can easily degrade musical quality. It tries to do this by creating cables that allow for superb timbre and musical quality. Its most recent cables are the Oracle MA-X cables, which run about $4,000. Apparently high-quality timbre is an important component to have.
Terms: timbre, auditory experience, vibrations, tone color, fundamental frequency, harmonics, overtones, and inharmonics.
Sources: http://en.wikipedia.org/wiki/Timbre
http://cnx.org/content/m11059/latest/
http://www.youtube.com/watch?v=BLoM9bBr8lc
People (musicians) spend so much money to get their sound just right. Once they get good, they spend a bunch on this stuff. Very important.
While reading chapter 10, I found the concept of timbre to be especially interesting to learn about because I think it's amazing that our auditory system can judge that two sounds with the same loudness and pitch are dissimilar. Sound quality, or timbre, is conveyed by harmonics and other high frequencies. Timbre is a general term for the distinguishable characteristics of a tone. Timbre is mainly determined by the harmonic content of a sound and by dynamic characteristics such as vibrato and the attack-decay envelope. Some investigators report that it takes about 60 ms to recognize the timbre of a tone, and that any tone shorter than about 4 ms is perceived as an atonal click. The ordinary definition of vibrato is "periodic changes in the pitch of the tone," and the term tremolo is used to indicate periodic changes in the amplitude or loudness of the tone. So vibrato could be called FM (frequency modulation) and tremolo could be called AM (amplitude modulation) of the tone. Actually, in the voice or the sound of a musical instrument, both are usually present to some extent. Vibrato is considered to be a desirable characteristic of the human voice if it is not excessive; it can be used for expression, and it adds richness to the voice. Even if the harmonic content of a sustained sound from a voice or wind instrument is reproduced precisely, the ear can readily detect the difference in timbre if the vibrato is absent. Timbre is what makes a particular musical sound different from another, even when they have the same pitch and loudness. For instance, it is the difference between a guitar and a piano playing the same note at the same loudness.
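To see the FM/AM distinction concretely, here is a little Python sketch of my own (the 440 Hz tone and the modulation rates and depths are arbitrary example values, not measurements of real voices):

    import numpy as np

    sr = 44100
    t = np.arange(0, 2.0, 1 / sr)
    f0 = 440.0                          # base pitch (example value)

    # Vibrato: periodic change in the pitch (frequency modulation).
    vib_rate, vib_depth = 6.0, 5.0      # 6 Hz wobble, +/- 5 Hz swing
    phase = 2 * np.pi * f0 * t + (vib_depth / vib_rate) * np.sin(2 * np.pi * vib_rate * t)
    vibrato_tone = np.sin(phase)

    # Tremolo: periodic change in the loudness (amplitude modulation).
    trem_rate, trem_depth = 5.0, 0.3
    tremolo_tone = (1 + trem_depth * np.sin(2 * np.pi * trem_rate * t)) * np.sin(2 * np.pi * f0 * t)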
Loudness and pitch are easy to describe because they correspond well to simple acoustic dimensions, which we learned are amplitude and frequency. However, the richness of complex sounds depends on more than simple sensations of loudness and pitch. A trombone and a tenor saxophone might play the same note (same fundamental frequency) at exactly the same loudness (sound waves with identical intensities), but a person would have no trouble discerning that two different instruments were being played. The perceptual quality that differs between these two musical instruments, as well as between vowel sounds like those in the words hot, heat, and hoot, is referred to as timbre. Differences in timbre between musical instruments or vowel sounds can be appreciated by comparing the overall spectra of the two sounds. Thus, timbre must involve the relative energy of spectral components, and the perception of timbre depends on the context in which a sound is heard.
The way a complex sound begins, called the attack of the sound, and ends, called the sound's decay, is another important quality. Auditory systems are sensitive to attack and decay characteristics. Audible sounds have a natural attack and decay curve, called the envelope. During attack, the volume of the sound increases, and during decay, the volume decreases. When a sound is played in reverse, the attack becomes the decay and the decay becomes the attack.
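Here is a toy Python sketch of that envelope idea (the attack and decay lengths are arbitrary values I picked for illustration): a quick rise and a gradual fall applied to a plain tone, then reversed so the decay becomes the attack:

    import numpy as np

    sr = 44100
    tone = np.sin(2 * np.pi * 440 * np.arange(0, 0.5, 1 / sr))  # plain 440 Hz tone

    attack_len = int(0.01 * sr)   # 10 ms rise (example value)
    decay_len = int(0.30 * sr)    # 300 ms fall (example value)

    # Build the envelope: quick rise, hold, gradual fall.
    env = np.ones_like(tone)
    env[:attack_len] = np.linspace(0, 1, attack_len)
    env[-decay_len:] = np.linspace(1, 0, decay_len)

    shaped = tone * env
    reversed_sound = shaped[::-1]   # attack and decay swap places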
Terms: timbre, auditory system, loudness, pitch, sound quality, harmonics, frequencies, tone, harmonic content, vibrato, attack, decay, envelope, tremolo, frequency modulation, amplitude modulation, sustained sound, acoustic dimensions, amplitude, complex sound, fundamental frequency, spectra, spectral components
Sources:
http://en.wikipedia.org/wiki/Timbre
http://hyperphysics.phy-astr.gsu.edu/hbase/sound/timbre.html
http://www.mat.ucsb.edu/~b.sturm/MAT201A/presentations/Fri/OhnandPark.pdf
Images of attack and decay:
https://ccrma.stanford.edu/software/snd/snd/pix/hairy6.png
http://docstore.mik.ua/orelly/web2/audio/figs/aud.0208.gif
http://www.audiomulch.com/images/blog/southpole-expedition-part-3-pattern-sequenced-adsr-envelopes-adsr-timing.png
I think what is great about timbre is that we take it for granted, but if we didn't have it, or didn't have different instruments with different sound qualities to produce the music we like, the world would be a very dull place.
For my topical blog I am going to discuss more about the head-related transfer function, or HRTF. This is a function that describes how the pinna, ear canal, head, and torso change the intensity of sounds of different frequencies arriving at each ear from different locations in space (azimuth and elevation). As humans, we have only two ears, but we can locate sounds in three dimensions: in range (distance), in elevation (above or below), and in direction (in front or behind, and side to side). This is possible because the brain, the inner ear, and the external ears work together to make inferences about the location of sounds. Humans estimate the location of a sound by taking cues derived from one ear and by comparing cues received by both ears. Among these difference cues are differences in time of arrival as well as differences in intensity. The HRTF is the Fourier transform of the HRIR, the head-related impulse response. The HRTF is also sometimes known as the anatomical transfer function, or ATF. The HRTF can also be described as the modifications to a sound on its way from a direction in free air to the eardrum. These modifications include the effects of the shape of the listener's outer ear, the shape of the listener's head and body, the acoustical characteristics of the space in which the sound is played, etc. All of these characteristics influence how a listener determines where a sound is coming from.
The HRTF describes how a given sound-wave input is filtered by the diffraction and reflection properties of the head, pinna, and torso before the sound reaches the eardrum and inner ear. Linear systems theory defines the transfer function as the (complex-valued) ratio between the output signal spectrum and the input signal spectrum as a function of frequency. The HRTF is involved in resolving the cone of confusion, a set of points where ITD and ILD are identical for sound sources at many locations around the axis of the cone. When a sound reaches the ear, it can either go straight into the ear canal or be reflected off the pinna into the ear canal a fraction of a second later. The sound contains many frequencies, so many copies of the signal travel down the ear canal at slightly different times. These copies overlap each other, and as they do, certain frequencies are enhanced while others are cancelled out. Essentially, the brain looks for these characteristic changes across frequencies to help determine where sounds are coming from.
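To get a rough feel for how a delayed copy enhances some frequencies and cancels others, here is a toy Python sketch of my own (the 50-microsecond delay is just an illustrative number, not a measured pinna delay): a direct sound plus one delayed copy produces the classic comb-filter pattern of peaks and notches:

    import numpy as np

    delay_s = 0.00005                    # 50 microsecond example reflection delay
    freqs = np.array([1000.0, 4000.0, 8000.0, 10000.0])

    # Gain of direct sound plus one delayed copy at each frequency:
    # |1 + e^(-i * 2 * pi * f * delay)|. Near 2 = reinforced, near 0 = cancelled.
    gain = np.abs(1 + np.exp(-2j * np.pi * freqs * delay_s))
    for f, g in zip(freqs, gain):
        print(f"{f:7.0f} Hz -> gain {g:.2f}")   # 10 kHz lands in a notch here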
TERMS: cone of confusion, head, pinna, torso, eardrum, inner ear, HRTF, ITD, ILD, ear canal, azimuth, elevation, HRIR, ATF
http://en.wikipedia.org/wiki/Head-related_transfer_function
http://electronics.howstuffworks.com/virtual-surround-sound2.htm
http://505606.pbworks.com/f/HRTF.pdf
Ok. Sounds good.
This week I chose to look more into the topic of attack and decay of sound. Something as simple as the start and end of a sound turns out to be important not just for hearing a sound but for how we hear it. Sound itself is the vibration of air molecules, or variation in air pressure, that can be sensed by the ear; it's the pattern and rate of audible vibrations that give everything its unique sound. Loudness, the perceptual strength or weakness of sound waves resulting from pressure changes, accounts for part of that uniqueness, as do pitch, the psychoacoustic term for how low or high a sound is perceived by the human ear, and timbre, more or less the tone of the sound. The ones I'm most interested in, though, are attack and decay and how they make a sound unique.
Attack is the part of a sound during which amplitude increases, or the onset of the sound. There are actually two types of attack: slow and fast. The closer the attack is to instantaneous, the faster the attack. Examples of fast attacks are gunshots, fireworks, door slams, etc., the things that would make a normal person jump in surprise. Sounds with a slow attack take longer to build to their sustain level; examples would be the short warning growl from a dog before he barks, slowly tearing a sheet of paper, or the rumble of a thunderclap. Decay is the part of a sound during which amplitude decreases, or its offset. The time it takes for the sound vibrations to fade back to silence is called the decay time, and how gradually the sound fades is called the rate of decay. The rate of decay and how long the sound takes to return to silence can help a person identify whether they are indoors or outdoors: a large outdoor space can produce a long decay with distinct echoes, while a small room tends to blur the sound into reverberation with a much shorter decay.
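A tiny toy sketch of decay time in Python, under the common simplifying assumption that a sound dies away exponentially, so its level falls by a steady number of decibels per second (the rates below are made-up examples):

    def decay_time(rate_db_per_s, drop_db=60.0):
        """Seconds for an exponentially decaying sound to fall by drop_db."""
        return drop_db / rate_db_per_s

    print(decay_time(300.0))   # fast decay -> 0.2 s, like a small damped room
    print(decay_time(30.0))    # slow decay -> 2.0 s, like a large echoey hall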
All of these aspects of sound (pitch, timbre, loudness, attack, and decay) seem simple enough as we hear them. However, looking into it more, there are people who are able to manipulate our perception of hearing to catch our attention. Musicians and artists are gifted individuals who can manipulate sound into what we call music. All the while as I listen to my music, its creators were using all of those traits of sound to build music I like. I had never thought of it that way, but it's amazing what you learn about your perception when you take a class on it.
Terms: sound, attack, decay, pitch, loudness, timbre
http://www.musicarrangers.com/star-theory/t08.htm
http://docstore.mik.ua/orelly/web2/audio/ch02_01.htm
http://www.filmsound.org/articles/ninecomponents/9components.htm#attack
Amazing. I like your insight into the music that you like. You start wondering "why do I like this" after you learn about it. The qualities of certain tones, timbre, and keys hit some people the right way and make them especially interested in that type of music.
After reading chapter 10 I became interested in timbre. This concept was interesting to me because we can tell the difference between two similar sounds. Two or more sounds may have the same loudness and pitch, yet we can still tell them apart; this ability is primarily what we know as timbre.
Timbre allows our ears to determine the differences between two or more sounds that are similar. Although the sounds have the same pitch and loudness, and even the same fundamental frequency, they do not have the same harmonic content. An example of this would be a violin and a guitar playing the same note: each note has the same loudness and pitch, but we can still tell they are different sounds. One of the main reasons we can tell the difference between the similar sounds is that they have different harmonic content. The harmonic content consists of the overtones built on top of the fundamental frequency, and it is basically the relative intensity of those overtones; the inner ear is able to distinguish this difference.
Along with harmonic content being a part of timbre, we can also tell sounds apart because their attack and decay are different. As humans, our auditory system is very sensitive to attack and decay. The attack of a sound is how the sound starts, and the decay of the sound is how it ends. Depending on the sound, it can have a short or long attack and a short or long decay. Attack and decay really help us determine the different characteristics of a sound.
After researching timbre I found that it was explored and understood early on in China. Musicians there found that when playing an instrument you could play it in different ways and different sounds would come out. They would play the same note, but what made the sound different was how the note was produced, such as by plucking, hitting, or scraping the instrument.
My research also showed that timbre is primarily linked to music, but also to voices. It is mainly linked to music because many of the sounds produced can have the same loudness and pitch. We can also tell the difference between the same note played by different instruments very quickly; in my research I found it can be as fast as 40 ms. It amazes me that our auditory system can work so quickly.
Terms: timbre, pitch, loudness, auditory system, inner ear, fundamental frequency, harmonics, attack and decay
http://psychology.wikia.com/wiki/Timbre
http://en.wikipedia.org/wiki/Timbre
http://hyperphysics.phy-astr.gsu.edu/hbase/sound/timbre.html
It seems crazy that it all works so quickly. Sometimes it's hard to believe that is how fast it works. We don't really become aware of this during our auditory experiences. Once we learn about it, it still seems that it doesn't enter our awareness all that often.
After reading chapter 10, I found the topic of complex sounds to be really interesting. As we know, the simplest sound wave is a sine wave, created by a source vibrating in simple harmonic motion. This is simple enough, but almost nothing in our environment produces a pure sine wave; instead, natural sources produce complex sounds. Natural sounds vibrate at a combination of frequencies, producing more complex waveforms. These complex waveforms are composed of many sine waves of differing frequencies that interact and interfere with one another.
There are several ways to measure a sound wave. If we are measuring a wave's amplitude, we are measuring the size of the pressure differences, which we experience as loudness; this is usually measured in decibels (dB). Another way to measure sound waves is by wavelength. And finally, we can measure sound by its frequency (perceived as pitch), which is measured in cycles per second (Hz). The shorter the wavelength, the higher the frequency; the longer the wavelength, the lower the frequency.
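A quick Python sketch of those relationships, assuming sound travels at roughly 343 m/s in air (the example frequencies and pressure are values I chose for illustration):

    import math

    SPEED_OF_SOUND = 343.0    # m/s in air at about 20 C

    def wavelength(frequency_hz):
        """Longer wavelength <-> lower frequency, and vice versa."""
        return SPEED_OF_SOUND / frequency_hz

    def spl_db(pressure_pa, reference_pa=20e-6):
        """Sound pressure level in dB relative to the threshold of hearing."""
        return 20 * math.log10(pressure_pa / reference_pa)

    print(wavelength(100))      # ~3.43 m  (low frequency, long wave)
    print(wavelength(10000))    # ~0.034 m (high frequency, short wave)
    print(spl_db(0.02))         # ~60 dB, roughly conversation level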
Harmonics are another important aspect of a complex wave. In a sound with a definite pitch, each individual vibration (sine wave) is a harmonic. In a complex sound, the vibration with the highest amplitude and lowest frequency is known as the fundamental frequency, or 1st harmonic. Additional harmonics, each with lower amplitude and higher frequency, are named the 2nd, 3rd, etc. harmonics. Sounds with a pitch are distinguished by a repetitive waveform: no matter how complex the waveform is, it repeats itself.
The relationship among harmonics creates the character of a sound and is known as its timbre. Natural sounds produce large complex waveforms made up of hundreds or thousands of individual harmonics. We as listeners can judge that two sounds with the same loudness and pitch are dissimilar. The primary contributors to the quality of timbre are attack and decay. The attack is when a sound rises to peak amplitude, as illustrated by plucking a guitar. The decay is long and gradual by comparison, as the sound wave slowly goes from large to nothing. Vibrato is another contributor to the quality of timbre in a complex sound; it is used for expression and adds richness to the voice. Sounds that are considered noise generally have little to no harmonic content, while musical sounds have a highly refined harmonic structure that is more pleasing to a person's ear.
Pure tones are rare in nature. Most things vibrate in more than one mode simultaneously, and the first harmonic is the dominant tone.
The reason I find this so interesting is that these complex sounds are all around us. We are constantly being exposed to a multitude of sounds at one time, and it's crazy to think our brain can tell them apart. I think it's vital for the class to understand how these sounds work and what we are really hearing when an alarm goes off or someone plays a guitar.
Terms- harmonic, complex sound, timbre, vibrato, decay, attack, fundamental frequency, decibels, frequency, pitch, wavelength, sine waves, amplitude, pure tones.
http://hyperphysics.phy-astr.gsu.edu/hbase/sound/timbre.html
http://www.planetoftunes.com/sound/complex.html
http://home.cc.umanitoba.ca/~krussll/138/sec4/acoust1.htm
http://www.physics.isu.edu/~hackmart/soundwavesIIengphys.pdf
Good post. I think we share that amazement about just how much is going on around us in those auditory situations and how little we realize about the complexity of it all!
The topic that I found interesting and wanted to know more about was the psychological sensation known as timbre, also called tone color in the music world. Timbre, according to dictionary.com, is the characteristic quality of a sound, independent of pitch and loudness, from which its source or manner of production can be inferred.
This fits into the chapter because it has to do with our hearing perception, and it was briefly talked about in chapter 10. It also fits into the chapter because it is a hearing phenomenon that goes unnoticed at times but is a very important feature of our hearing.
I found this interesting because it is something that you notice daily, but it always makes me wonder how you can decipher different sounds that are so similar. I am a big fan of music, and music is where timbre is most recognized, due to the use of various instruments and notes, so I wanted to know how this all worked.
Timbre, according to Wiki, consists of five key elements: 1) the range between tonal and noiselike character; 2) the spectral envelope; 3) the time envelope in terms of rise, duration, and decay; 4) the changes both of spectral envelope and fundamental frequency; and 5) the prefix, an onset of a sound quite dissimilar to the ensuing lasting vibration.
Upon further research, there really aren't too many interesting things to say about timbre; it may have sounded a little more interesting than it really was. Timbre was a very dry and to-the-point topic, and maybe not the most interesting thing to read about.
Terms- Timbre, pitch, loudness, hearing perception, spectral envelope, and fundamental frequency.
http://dictionary.reference.com/browse/timbre
http://en.wikipedia.org/wiki/Timbre
http://hyperphysics.phy-astr.gsu.edu/hbase/sound/timbre.html
Ok, good stuff, glad you found the different qualities interesting.
For this week’s topical blog, I decided to do further research on the various aspects of auditory stream segregation. As you would be likely to expect, there is a great deal of scientific research indicating that musicians generally possess better auditory perceptual skills and working memory. This is due primarily to the fact that musicians spend hours upon hours training their brains to organize complex acoustic signals into separate auditory events, honing this cognitive process known as auditory stream segregation. This process is further enhanced for those of musical prowess because they spend so much time not just separating and isolating certain auditory streams, but also analyzing the relationships between these streams, or harmony. This ability has been nicknamed “the cocktail party effect” in that it is a sort of talent required to discern a specific conversation from among several different speakers and other background noise. All individuals with musical experience are said to display enhanced abilities in this regard, but it is even more important for musicians such as conductors and organists. This is because conductors often must focus on around three different types of instruments at once while segregating each of their independent melodic lines. Similarly, an organist develops the ability to stream up to five different melodic parts at once, since they frequently interweave melodies using all four of their limbs over three separate keyboards.
However, the benefits of musical training do not apply to the musical realm alone. It has been determined that musicians also develop richer mental representations of acoustic features important in both vocal communication and the neural encoding of speech. Those with melodic experience also display similar proficiency in the areas of verbal ability, verbal working memory, and verbal recall. One study I found attempted to test the relation between these abilities in a rather interesting way. Since musicians generally display better auditory perceptual skills and some increases in linguistic abilities as compared to non-musicians, it was hypothesized that musicians would therefore perform better on speech-in-noise (SIN) tests than their non-musical counterparts. As expected, musicians performed better than non-musicians on two different varieties of hearing-in-noise perceptual tests. This is attributed to their enhanced capacities for auditory stream segregation, but the precise cognitive and psychoacoustic factors that contribute to this ability are still largely unknown.
Another question I had regarding auditory stream segregation is how these sorts of processes are affected by visual stimuli. In class, we noticed that the rapid garbled visual cuts in the Obama/Eminem mashup video seemed to make it more difficult to focus on and interpret the lyrics than when we just listened to the isolated audio track by itself. With the short clips profiling the McGurk effect, there seemed to be a similar phenomenon: the misleading visual lip movements caused even the most adept listeners to misinterpret what was being said! Therefore, a question I had was whether or not visual feedback could be used to enhance aural perception of music as it does with the normal lip movements involved in speech production. The study I discovered took a similar approach to the previous speech-in-noise research project by testing musicians and non-musicians. These researchers found that not only do visual cues have an effect on auditory streaming, but they also appear to reduce the difficulty of separating a melody from background notes.
http://www.plosone.org/article/info:doi/10.1371/journal.pone.0011297
http://www.soc.northwestern.edu/brainvolts/documents/ParberyClark_2009.pdf
http://en.wikipedia.org/wiki/Auditory_scene_analysis
Terms: auditory stream segregation, working memory, cocktail party effect, speech-in-noise (SIN), McGurk effect
Good post. I like the link between the qualities of performance and WM and other cognitive abilities. It seems like they would have to have a decent memory before the skills become procedural and very automatic.
I chose the topic of timbre because it wasn't something I knew much about, and I just wanted to learn more. Everything else in the chapter made sense and didn't raise many questions. Timbre is when someone is able to tell two sounds apart from one another even though both sounds have the same pitch and are at the same volume. This fits into the chapter because even though our ear is picking up many different sounds, the brain is able to differentiate one sound from the next.
The first thing I learned was that timbre is also known in the music world as the 'color of music.' Since many different instruments can play the same note, it is the timbre that sets them apart when someone is listening to that note. The timbre or 'color' can be changed by changing the undertones of the sound being produced. These changes can result in sounds that point to a mood or feeling the way colors usually do. Honestly, I had thought that instruments playing different notes was the only way we could tell them apart; thinking back on it, the 'color' explanation makes a lot of sense. I don't listen to orchestral pieces often, but 'timbre' made more sense after doing a little of my own research.
http://cnx.org/content/m11059/latest/
http://www.learner.org/resources/series105.html?pop=yes&pid=1243
The video I watched on this site is listed under "7. Timbre: The Color of Music." One warning: this video is close to 30 minutes long! I also found through my research that this video will be played on Iowa Public Television on Monday, April 9th at 3 a.m.
http://carillontech.org/timbre.html
I added this one in because I liked the picture of what a note from an instrument looks like on a graph that is studying undertones.
Terms: timbre, pitch, volume, and undertones.
Ok, good deal.
My topic is the physiology of both the medial superior olives and the lateral superior olives. I'm interested in this subject because I love anatomy and I think this is the backbone of the portion of the auditory system responsible for calculating interaural time differences. It ties into the chapter's coverage of the physiology of our auditory system. To understand what the medial superior olives (MSOs) and lateral superior olives (LSOs) are, we must first understand what they do. The medial superior olive is a specialized nucleus that measures the difference in the time of arrival of sound between the ears. The lateral superior olive is involved in measuring the difference in sound intensity between the ears. What makes neurons in the LSOs extremely sensitive to differences across the two ears is the competition between excitatory inputs from one ear and inhibitory inputs from the other ear.
Two scientists, Yin and Chan, found that the firing rates of certain neurons in the medial superior olive of cats increase in response to very short time differences between the inputs from the two ears. Interaural level difference is the difference in intensity between the sound arriving at one ear and the sound arriving at the other. Sounds are more intense at the ear that is closer to the sound source because of our head shape: the head partially blocks the sound pressure wave from reaching the opposite ear.
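To put some numbers on the time differences the MSO works with, here is a small Python sketch of my own using Woodworth's classic approximation (the 9 cm head radius is an assumed typical value, not from my sources):

    import math

    HEAD_RADIUS = 0.09      # meters, an assumed typical value
    SPEED_OF_SOUND = 343.0  # m/s

    def itd_seconds(azimuth_deg):
        """Woodworth's approximation: ITD = (r / c) * (theta + sin(theta))."""
        theta = math.radians(azimuth_deg)
        return (HEAD_RADIUS / SPEED_OF_SOUND) * (theta + math.sin(theta))

    for az in (0, 30, 60, 90):
        print(f"{az:3d} deg -> {itd_seconds(az) * 1e6:5.0f} microseconds")
        # 0 deg (straight ahead) gives 0; 90 deg gives roughly 670 microseconds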
Terms: Medial superior olives, lateral superior olives, interaural time difference, neurons, nucleus, excitatory inputs, inhibitory inputs.
http://en.wikipedia.org/wiki/Superior_olivary_complex
http://en.wikipedia.org/wiki/Interaural_time_difference
http://jn.physiology.org/content/92/1/289
Right on.
There were two concepts from this chapter that were specifically interesting to me and that I wanted to learn more about. These two terms are both equally important in understanding how we hear within the environment. So many different sounds include harmonics; they are one of the most common kinds of sounds. The lowest frequency of a harmonic series is the fundamental frequency. The fundamental frequency is interesting and important to hearing because the auditory system is sensitive to the spacing between harmonics. Even when the fundamental frequency is removed, the pitch is still perceived, even though the harmonics actually present start with the 2nd, 3rd, 4th, and so on; this is because the waveform remains periodic at the fundamental frequency. While harmonics are found everywhere in the environment, musical instruments are one popular way that harmonics are expressed. In regard to musical instruments, there is some confusion about harmonics versus overtones: the two counts are offset by one, so the first overtone is the second harmonic, the second overtone is the third harmonic, and so on, as in the small sketch below.
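A tiny Python sketch of that offset-by-one counting (the 100 Hz fundamental is just an example value):

    f0 = 100.0  # example fundamental frequency in Hz

    for harmonic in range(1, 6):
        # The nth overtone is the (n+1)th harmonic.
        label = "fundamental" if harmonic == 1 else f"overtone {harmonic - 1}"
        print(f"harmonic {harmonic}: {harmonic * f0:6.1f} Hz ({label})")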
Timbre is another concept that is important for understanding sensation and perception through hearing. Thanks to timbre, we are able to distinguish between two different sounds even when their loudness and pitch are the same. Take two different musical instruments in an orchestra, for example: if they both play a note with the same fundamental frequency, loudness, and pitch, we are still able to identify that two different instruments are performing simultaneously. This is the phenomenon of timbre. Perceptions of timbre are influenced by factors such as the environment; different surfaces and surroundings reflect sound differently, and hard surfaces, for example, reinforce the higher-frequency components more strongly. Timbre and the harmonic content of a sound are greatly influential on one another. Two of the primary contributors to the timbre of complex sounds in music are attack and decay. Attack refers to the beginning of a sound, where amplitude increases; decay, on the other hand, refers to the end of a sound, where amplitude decreases. Attack is often a quick, rapid rise to the peak of amplitude, whereas decay is much more gradual and prolonged until the amplitude completely dies away. The ear is sensitive to both attack and decay.
I believe that both harmonics and timbre are essential concepts for expanding our understanding of how we use our auditory system to interpret different variations of sound in our environment. In order to give some auditory examples, I included two working YouTube URLs to better illustrate harmonics and timbre. The first video is a very interesting demonstration of harmonics in which salt forms various shapes and designs on a vibrating plate; it reflects concepts of the auditory system and relates to sensation and perception. The second video is based on the concepts of attack and decay: listening to the various attacks, and to how the prolonged sound lowers in amplitude to a complete stop, shows the concept of decay.
Terms: Fundamental frequency, harmonics, timbre, attack, decay.
http://www.youtube.com/watch?v=sOMiowrff0Y
http://en.wikipedia.org/wiki/Harmonic
http://hyperphysics.phy-astr.gsu.edu/hbase/sound/timbre.html
http://www.youtube.com/watch?v=ihsqM0MLRLg
Cool videos. They get your point across. Good choices.
The most interesting thing to me in the chapter was timbre. After reading the chapter, I still had questions and wanted to research it more. Timbre is what enables people to distinguish sounds even when they have the same pitch, length, and loudness. It can also be referred to as tone color or tone quality. It is caused by a sound being a complex wave with more than one frequency. Attack and decay also help listeners distinguish sounds, as does vibrato, which is also known as frequency modulation. Timbre allows listeners to hear the difference between drums and a guitar, but also the difference between two different guitars. It takes listeners about 60 ms to recognize the difference in timbre between two different sounds.
I think timbre is important to understanding sensation and perception because it is very important to hearing. It allows people to hear differences in sounds without differences in pitch or volume. Knowing how people are able to understand sounds enables listeners to better appreciate and comprehend them.
Terms: timbre, complex wave, attack, decay
http://en.wikipedia.org/wiki/Timbre
http://cnx.org/content/m11059/latest/
http://hyperphysics.phy-astr.gsu.edu/hbase/sound/timbre.html
http://www.timbreproductions.com/pages/home.html
Definitely getting to be the end of the semester, eh? I completely get it!!!
I chose to research more about timbre. I am really interested in this topic because I have been playing the cello for 12 years now (I currently play in the UNI orchestra). I really enjoy the sound of instruments, and getting the chance to really understand timbre is really cool to me!
Timbre is what makes one note, or sound, sound different from another. It is what makes it interesting to the ear to listen to a symphony orchestra, for example. There are so many different kinds of tone quality across the vast array of instruments, and the ear can distinguish between the sounds through timbre. It can do this because each note is made up of different frequency waves. The slowest of these, the fundamental, is usually the loudest. The other frequencies are known as harmonics, overtones, or inharmonics. It is said that the ear only needs three or more harmonic frequency waves to identify a tone, although that alone wouldn't produce quality timbre.
In a video I have posted (the second link) there is an orchestra playing a D minor scale medley. If you listen closely you should be able to pick out all the instruments (violin, viola, cello, and bass). This is a fun experiment to do with timbre. You can do this with any group of instruments, and I find it most interesting to do it with a symphony orchestra which incorporates wind, brass, and percussion instruments. Go to a concert at the Gallagher Bluedorn sometime and test it out for yourself!
You can also hear different timbre when comparing two of the same instrument. Ironically, the goal of most musicians when playing together is to make themselves sound like one instrument! Sometimes this is achieved, but because of differences in timbre it can be difficult to produce, as well as to hear.
Terms: frequency, harmonics, overtones, inharmonics, timbre.
http://www.youtube.com/watch?v=BLoM9bBr8lc
http://www.youtube.com/watch?v=0C8uTh4PQLo
http://www.youtube.com/watch?v=9iMggGDgvlA&feature=related
You probably have more insight into this stuff than the rest of us!!! Thanks for sharing your experiences. Do you notice when a concert hall/ music venue has crappy acoustics? Does it bother you/change your experience of the event, or do you get over it pretty quickly? I would imagine it would be tough not to notice once you know what it is supposed to sound like.
In this chapter, as evidenced by my earlier post, I was curious about the phenomenon of the missing fundamental. The sounds you perceive in everyday life are complex sounds, which means that they are actually a collection of multiple sounds. Not only are they a grouping of tones, they are a pattern of tones. This pattern of tones is called a harmonic series, and the lowest-frequency tone is called the fundamental frequency. The other tones in this series are multiples of the fundamental frequency. They may seem redundant, but they actually provide fullness to the sound. Your ear receives each tone individually, but your brain assembles them into an entire sound, and it relies on the harmonic pattern to perceive that sound.
This pattern is so essential that if we omit single tones from the harmonic series, you wouldn't even notice. To go even further, if we remove the fundamental frequency, the basis of the sound itself, you still perceive the note as if nothing were missing. This struck me as very strange: we can change the physical structure of a sound, even remove its physical core, and you perceive it as if nothing happened.
One common example of the missing fundamental occurs when you're talking on the telephone. Most telephones cannot emit sound lower than 300 Hz; however, the average fundamental frequency of a male voice is 150 Hz. So when you hear a male speaker on the phone, you aren't hearing his fundamental frequency (because the phone is incapable of producing it); you are hearing the remaining harmonics of that base tone. Audio manufacturers actually exploit the missing fundamental phenomenon to make systems seem to produce sounds lower than their speakers can physically emit.
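Here is a sketch of the telephone situation in Python (my own illustration; the number of harmonics is arbitrary): a 150 Hz "voice" built only from harmonics at 300 Hz and above, so no energy at the fundamental ever exists in the signal:

    import numpy as np

    sr = 44100
    t = np.arange(0, 1.0, 1 / sr)
    f0 = 150.0   # average male fundamental; the phone can't reproduce it

    # Keep only the harmonics at or above 300 Hz: 300, 450, 600, ... Hz.
    voice = sum(np.sin(2 * np.pi * n * f0 * t) for n in range(2, 11))

    # The waveform still repeats every 1/150 of a second, so most listeners
    # hear a 150 Hz pitch even though no 150 Hz component is present.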
Another interesting tidbit about the missing fundamental is that not everyone can perceive it. Some people hear the note the fundamental would supply if it weren't omitted, while others hear a higher note, based only on the harmonics physically present, because they don't perceive the phantom fundamental to be there. The reasons behind this are still unknown; maybe some genetic factor, or early exposure to missing fundamentals. But the fact remains: some people can hear the missing fundamental, and some people can't.
Terms: missing fundamental, harmonic, fundamental frequency
http://music.nebrwesleyan.edu/wtt/?page_id=1619
https://en.wikipedia.org/wiki/Missing_fundamental
http://homepage.ntu.edu.tw/~karchung/Phonetics%20II%20page%20thirteen.htm
Good example using the telephone situation. I didn't know that. Interesting.
Timbre is the concept that I chose to do further research on this week. Timbre could also be called tone quality or tone color in psychoacoustics. Timbre is responsible for our being able to tell musical sounds apart from one another. This is unique because it happens even when the sounds have the same pitch and loudness. For example, even if a guitar and a piano play the same note at the same level of loudness, they are distinguishable.
Since all sounds are made up of a number of different frequencies, even two of the same instrument do not create exactly the same sound. This happens because notes are complex tones. The sound that comes from an instrument consists of many different vibrations occurring at the same time. There is one vibration that is the slowest; it is called the fundamental frequency, and it is usually the loudest. The other frequencies are called harmonics, overtones, or inharmonics. It is said that the ear can identify any tone with three or more harmonics.
Harmonic content is of the utmost importance to the timbre of a sound. Attack and decay are especially important as well. Attack is the rapid rise of a sound, while decay is the opposite. If the attack of a certain instrument's sound is removed, it is much more difficult to identify the timbre. Along these lines, researchers believe that it takes about 60 ms to recognize the timbre of a tone.
Key terms: Psychoacoustics, Timbre, attack, decay, harmonics, fundamental frequency
http://www.youtube.com/watch?v=BLoM9bBr8lc
http://hyperphysics.phy-astr.gsu.edu/hbase/sound/timbre.html
http://en.wikipedia.org/wiki/Timbre
Right on.
I decided to further explore attack and decay. Attack is the rise of a sound to its peak; the decay is long and gradual by comparison. The ear is sensitive to these attack and decay rates and may be able to use them to identify the instrument producing the sound. Attack is how the note is hit, and rhythm is communicated through attack. Once hit, how long is the note allowed to sound? Decay is the portion of the note's full duration during which the sound dies away. Sound synthesis techniques often employ an envelope generator that controls a sound's parameters at any point in its duration. Most often this is an "ADSR" (Attack Decay Sustain Release) envelope, which may be applied to overall amplitude control, filter frequency, etc. The envelope may be a discrete circuit or module, or implemented in software. The contour of an ADSR envelope is specified using four parameters: attack time is the time taken for the initial run-up of level from nil to peak, beginning when the key is first pressed, and decay time is the time taken for the subsequent run-down from the attack level to the designated sustain level.
Sustain level is the level during the main sequence of the sound's duration, until the key is released. Release time is the time taken for the level to decay from the sustain level to zero after the key is released.
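A minimal Python sketch of such an envelope generator, with the four ADSR parameters as inputs (the timings and sustain level below are arbitrary example values, not from any particular synthesizer):

    import numpy as np

    def adsr(attack, decay, sustain_level, release, hold, sr=44100):
        """Piecewise-linear ADSR envelope; times in seconds, levels 0..1."""
        a = np.linspace(0, 1, int(attack * sr))              # nil -> peak
        d = np.linspace(1, sustain_level, int(decay * sr))   # peak -> sustain
        s = np.full(int(hold * sr), sustain_level)           # key held down
        r = np.linspace(sustain_level, 0, int(release * sr)) # release -> zero
        return np.concatenate([a, d, s, r])

    # Shape a plain 440 Hz tone with the envelope (overall amplitude control).
    env = adsr(attack=0.05, decay=0.1, sustain_level=0.7, release=0.3, hold=0.5)
    tone = np.sin(2 * np.pi * 440 * np.arange(len(env)) / 44100) * env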
Terms: attack and decay, ear, rhythm, synthesis, ADSR, amplitude control, filter frequency
Sources:
http://hyperphysics.phy-astr.gsu.edu/hbase/sound/timbre.html
http://www.musicarrangers.com/star-theory/t08.htm
http://en.wikipedia.org/wiki/Synthesizer
Crazy stuff. Interesting, indeed.
I chose to do my topical blog for chapter 10 on timbre. Timbre can be defined as the psychological sensation by which a listener can judge that two sounds with the same loudness and pitch are dissimilar. Timbre quality is conveyed by harmonics and other high frequencies. Words such as bright, dark, warm, and harsh are often used to describe the timbre of a sound. Timbre may also be referred to as tone quality or tone color.
The reason that different timbres occur is that each note played by each instrument is a complex wave with different frequencies. When the frequencies an instrument produces form a clear and specific pitch, they make up a harmonic series. The combination of how many frequencies you hear, their relationship to the fundamental pitch, and how loud they are compared to each other creates many different musical colors. A flute and a tuba, for example, have very different timbres or colors.
One analogy used to explain timbre is two different instruments playing the same note at the same pitch, loudness, and length: the two instruments have different timbres that allow one to distinguish them from one another. Timbre basically comprises the characteristics of a sound that make it distinguishable from other sounds that should be the same. So, for example, a friend of yours who doesn't have the prettiest voice can sing a note at the exact same pitch and loudness as a world-renowned singer, and you are still going to be able to tell the two apart, because they have different timbres. Just like you are able to tell the flute from the oboe even if they are playing the exact same note. The third link will bring you to a page where you can listen to many different instruments with very different timbres, to better understand the differences between instruments, sounds, and frequencies.
http://en.wikipedia.org/wiki/Timbre
http://cnx.org/content/m11059/latest/
http://www.youtube.com/playlist?list=PL267C9604691FD3D8
Pretty cool stuff. I like how we can distinguish between different instruments. Makes the variety of music much more interesting.
After reading the chapter on depth perception, I really wanted to focus on and gain more knowledge about the definitions of pictorial depth cues. In everyday life we encounter motion and visual responses to depth whether we are walking, running, driving, or moving in any way. When I researched this topic, I found many examples of how I can now critically view motion in my daily life. The first definition I found listed the following terms connected with pictorial cues: aerial perspective, interposition, linear perspective, relative height, relative size, shadowing, and texture.
These terms are part of the pictorial cues that help us distinguish shapes and objects in our environment, whether far away or close up. Aerial perspective refers to how clear an object or image is to our vision; for example, something far away looks hazier and blurrier than it does up close because of the atmosphere in between. Interposition simply means that one object partially covers another, so that the partially covered object is perceived as farther away.
Linear perspective is when parallel lines, such as the edges of a road or railroad tracks, appear to converge in the distance even though they never actually meet. Two other cues are relative size and relative height, which concern whether an image or object appears small or large, and whether things seem a taller or shorter distance away. Shadowing is a cue that helps an individual judge the angle and shape of objects and their depth. The texture cue matters because when an object is in the distance you cannot make out whether it is rough or smooth until it is close; texture appears finer and less detailed the farther away it is.
The depth perception video demonstrates examples in everyday life, such as driving and viewing cars and objects, that show binocular and monocular cues. 3D movies use retinal disparity by showing two slightly different images at once, and the glasses let each eye view its own image. The brain then does the rest of the work, using the convergence between the eyes to judge how far away something is, or changing the lens in the eye to judge depth.
These cues all work together as the eyes send signals to the brain's visual cortex, letting us interpret the objects and images in our pathway.
http://education-portal.com/academy/lesson/depth-perception.html
http://www.ablongman.com/html/psychplace_acts/depth/pictoria.html
http://psychology.about.com/od/sensationandperception/f/monocular-cues.htm
vocab- monocular and binocular cues, retina, aerial perspective, interposition, linear, shadowing, texture, relative height and size
This week I decided to do my topical blog on the cone of confusion, ITD and ILD! I really like that these are things I experience in my everyday life and I decided to look into each one individually for this blog.
Interaural time difference, or ITD, is one component of sound localization (I didn't know this was what it was called before). I really liked this lecture guide because it was super easy to follow, just like sitting in a class! Sometimes I don't follow the book as well, and this article was helpful. ITD is the difference in time between a sound arriving at one ear versus the other. I also learned through this article about azimuth (the article explained it much better than I could): the angle of the sound source relative to the direction your head is pointing. Zero degrees would mean that you are facing the sound and hearing it straight on. I thought this was cool, and it helped to explain how the direction of the sound is related to the time it takes to reach one ear versus the other.
http://psych.wfu.edu/psy329schirillo/lectures/Ch10Lecture.pdf
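As a toy illustration of ITD (my own sketch in Python, not from the lecture guide; the half-millisecond delay is an arbitrary example), here is how the time difference between the two ears' signals can be recovered by sliding one signal against the other and finding the best alignment:

    import numpy as np

    sr = 44100
    true_itd = 0.0005                       # half a millisecond (example value)
    shift = int(true_itd * sr)

    rng = np.random.default_rng(0)
    sound = rng.standard_normal(sr // 10)   # 0.1 s of noise as the "source"
    left = sound
    right = np.roll(sound, shift)           # same sound, delayed at the far ear

    # The lag that best aligns the two signals is the estimated ITD.
    lags = np.arange(-2 * shift, 2 * shift + 1)
    corr = [np.dot(left, np.roll(right, -lag)) for lag in lags]
    best_lag = lags[int(np.argmax(corr))]
    print(best_lag / sr)                    # ~0.0005 s, recovering the delay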
Next I decided to look at interaural level difference in sound localization. I liked this article because it focused solely on ILD instead of talking about sound localization as a whole; that gave me a great deal of information about this one topic instead of hitting me with information I wasn't looking for. ILD is the difference in level (or loudness) of a sound between the ears, depending on the angle and distance of the sound source. The article explained how ILD works with ITD to accomplish sound localization from the slight differences between your two ears. It's crazy to me that I knew all this stuff before but never knew it had a name.
http://psych.stanford.edu/~jlm/pdfs/HartmannConstan02LevelMeter.pdf
To my surprise, there really wasn't much information on the cone of confusion on the internet. I did find a good general definition through Wikipedia, however. I found it interesting that they don't define the cone of confusion simply as when something is right in front of you or right behind you; they describe it as the set of points where the sound reaching both ears is identical, so the ears can't tell those locations apart. This was cool to me because it means that people with hearing loss in one ear can have a balance of sound different from someone with exceptional hearing in both ears; not everyone has the same balance of sound in their "cones." The website also talked about how tilting your head can move sounds from cone to cone. I tried this while just sitting in my room, and it's crazy how a slight tilt of the head can make so much difference!
http://en.wikipedia.org/wiki/Sound_localization#The_cone_of_confusion