Wednesday, December 10, 2014

Last post!


Congratulations, everyone! You've finished what I hope was a rewarding and educational experience. Many of you have produced excellent academic artifacts appropriate for university-level work, and you should be proud of finishing such a colossal task.

Here are the last things you need to know.

1) Grades

All grades will be finished by December 17. We will not be giving any feedback unless there is a problem. If you have any questions, though, of course come and ask. If you're in Sam Teacher's class, your grades will be posted on your final draft post using this key:

1D = 1st Draft Grade (2 Maximum)
2D = 2nd Draft Grade (3 Maximum)
R = Research Grade (5 Maximum)
F = Final Grade (5 Maximum)
T = Total Grade (15 Maximum)

If you're in Agnes Teacher's class, you can find your grades the same way you found your second draft grades, here.


2) Final Grammarly Reports


If you're curious to see your final Grammarly reports, you can find them here:

Sam Teacher's Classes
Agnes Teacher's Classes

3) Slight Change to the Grading Rubric (A good change!)

We slightly modified the final grading rubric to make it a bit easier. Nobody is perfect, so we changed the "technically perfect" requirement for 5 and 4 points. They now look like this:

5 points

  • Technically almost perfect (previously "Technically perfect")
  • Follows the classical argument
  • Displays an interesting and unique perspective on a highly specialized topic
  • Thought provoking and captivating
  • Clearly the product of extensive drafting and research

4 points
  • Technically excellent (previously "Technically perfect")
  • Follows the classical argument
  • Displays a unique perspective on a highly specialized topic
  • Clearly the product of thorough drafting and research


4) Please take this exit survey. 


Your honest feedback will mean a lot to your teachers. It is anonymous, and there are two pages. Please do both pages (You have to click "Continue" at the bottom)! Thanks!


Sunday, November 30, 2014

Week 16 Final Draft Y'all!


1) Finish your final draft by 11:59 pm on December 7.
  1. Please note that the deadline has changed.
  2. Review your second draft feedback.
  3. Review the final draft rubric.
  4. Your final draft should look like this.
  5. If you submit late, you will receive a one-point penalty.
  6. If you submit after December 10, you will receive a 0.
Guys, you are almost done! I'm so impressed with your second drafts, and so proud of so many of you for showing commitment, creativity, and integrity in your writing process. I hope you've enjoyed the process and learned skills you can take with you into the future. As you polish your final draft, I want to point out some common problems to watch out for.

1) Style: Your essay should be written in formal style. That means:
  • Use the third person (not "I" or "you"). If you are telling a personal story, you may use "I".
  • Do not use questions unless it is NECESSARY (it is almost never necessary).
  • Use fonts and colors that are easy to read.
  • Spelling, punctuation and grammar MATTER. Spend enough time to fix as many mistakes as you can. 
2) Citations: You need a citation whenever:
  • You use someone else's words (quotation)
  • You put someone else's words into your own words (paraphrasing)
  • You use any information you acquired from another source (statistics, data)
Also remember:
  • Your reference list must only contain references to sources you cite in your text.
  • If you cite a source in your text, you must put it in your reference list.
  • If you use a link in your final draft, you must also cite it in APA style.
3) Peer feedback
  • Did you get peer feedback? If your peers did not give you thoughtful feedback, tell me and we will solve that problem.
4) Visual aids and interactivity
  • If you want, include relevant photos, graphics or links. Remember that you want your paper to appear professional, though. Check out 백해진's second draft for a good example of using graphics. Just remember, you need to explain their significance and cite them in the correct APA style.

Final Draft Example


Brains Beat Binary

Every day we marvel at the power of our tiny computing devices - a phone that knows whether or not it is in a purse, a watch that tracks your calendar, or shoes that help you exercise optimally by measuring your heart rate. However, we usually forget perhaps the most amazing computing device we all have - our brains. Perhaps computers can beat the brain in certain, limited computational tasks, but overall the brain outperforms every man-made object to date. Think about it - your brain keeps you balanced while you walk, helps you decide when and what to eat, regulates your emotions, and allows for all the art and culture ever produced. In fact, you couldn't even think about this without your brain, a task no computer could accomplish. Although computers are constantly evolving, they will never be as powerful as the human brain.

Since Samuel Butler first expressed his fears about the rapid development of machinery, humans have fixated on an imagined future in which we are enslaved, or worse, by our own creations. Considering that he wrote, "There is no security against the ultimate development of mechanical consciousness," in 1906, before computers existed, his words have proved surprisingly prescient (Butler, 1906). Although computers are indisputably growing in computational power at an incredible rate (Moore, 1998), there is no realistic reason to fear a computer that can work autonomously in any meaningful way, let alone outsmart a human. Consider Microsoft's Project Adam. The software can sort and organize millions of photos quickly and accurately by analyzing the images; it can distinguish between extremely similar-looking breeds of dogs in a photo, for example (Microsoft Research, 2014). Practically, this means you might one day be able to do an image search for a sweater you want - say, "a mauve cashmere sweater with 3/4-length sleeves" - and, without any cumbersome text-based tagging or sorting, the search engine will analyze every image on the internet and filter out all the cashmere sweaters that are not mauve or that have full-length sleeves. This is an impressive accomplishment, and it represents one of the most incredible achievements of practical computing today. However, even this breakthrough does nothing to narrow the gap between computer and human intelligence. The technology cannot operate independently of human involvement; it still responds entirely to human input and human-written instructions, with human programmers and human infrastructure feeding it electricity and data from the internet.

If we examine the thinkers who predict a world of computers thinking on a human level, we encounter a mostly deluded camp of sci-fi lovers who base their theories more on Star Wars-inspired fantasy than on facts. Even serious and respected thinkers like Ray Kurzweil, Google's "futurologist" (even the title invites mockery, doesn't it?), have questionable motives when they predict computers that think like people within 15 years. The existence of his job relies on the hope that one day computers can reach that level. Similarly, Kurzweil's reputation would suffer if the idea that computers will match our thinking power became commonplace; certainly The Guardian would be less interested in him (Cadwalladr, 2014).

Artificial intelligence, and the abiding fear of computer-powered dominion over humans, is commonplace and popular fodder for idle discussion. However, considered realistically, these fears are misguided, and the hope of a computer as smart as a human is absurd.

Perhaps one of the challenges of adequately discussing this topic is the difficulty of defining the human brain in a way that allows its power to be compared to a computer's. Let's first look at the human brain through a terrible lens, and one that sci-fi stories constantly attribute to computers: the power to destroy. Consider the uniquely human ability to wage war and fight at a scale no other species matches (dolphins, as predatory and scary as they may be (Goldstein, 2009), will never launch a mortar barrage against an enemy pod or engage in genocide). Will robots ever reach this uniquely human metric? Noel Sharkey, a computer science professor at the University of Sheffield, England, says no: "They are just computer systems... the only way they can become dangerous is when used in military applications" (Tan, n.d.). To Sharkey, robots and artificial intelligence have the greatest growth potential in toy markets, a telling indication of how little potential for nefariousness he sees in future computing technology. He goes on to point out that the largest developments in robotics come not from software, but from hardware. Robots that can walk or navigate difficult terrain seem to be the new trend for robots mimicking human behavior.

An article from Vox.com makes an interesting case for why computers will never be able to match human intelligence: "A computer program has never grown up in a human family, fallen in love, been cold, hungry or tired, and so forth. In short, they lack a huge amount of the context that allows human beings to relate naturally to one another" (Lee, 2014). Basically, the argument is that even if a computer could match our brain's computational power (a very far-off and unlikely possibility (Myers, 2010)), it would never be able to pass as a human because it lacks the experiences that really create our humanity. In other words, humans are so much more than our brain power - we are the products of our upbringings. Our tenacity, will, passions and dreams all come from the sum of our experiences, not from how fast we think. Because of that, computers will not be able to function at a human level of creativity or character, even if they can eventually perform more calculations per second than we can.

Actually, this supposition stems from a famous scenario from philosopher John Searle in the 1980s (The Chinese Room, 2004). He proposed that an Englishman with no knowledge of Chinese, if locked in a room with an instruction manual for reading and writing Chinese characters, could successfully interpret and respond to messages passed under the door to him from a native Chinese speaker on the outside of the room. Theoretically, given enough time, the Englishman could respond so accurately that the native Chinese speaker would be sure that she was in fact corresponding with another native Chinese speaker. Essentially, the Englishman would have passed himself off as a Chinese person with no contextual understanding of what it means to be Chinese. The extension of the argument into artificial intelligence is that even if we create a computer that can mimic and interact with humans so convincingly that we believe we are conversing with a real human, that machine will not be human because it lacks the contextual understandings of humanity.

Whether we define the brain by what it produces (in this paper I discussed the example of war, but many other examples would suffice - art or romance, for instance), by its raw computational power, or by the experiences that mold each molecularly similar brain into such a unique masterpiece, the conclusion remains the same: no computer, no matter how powerful or well conceived, can approach a human level of thought or existence.

It is not hard to find sources that will warn you of a coming robot apocalypse or a singularity that will render humans obsolete, either in entertainment - the Matrix or Terminator series - or in legitimate science - Ray Kurzweil and the whole school of futurologists. In part, I agree; computers and technology are capable of terrifying acts of destruction and cold inhumanity. What is important to remember, though, is that none of these acts are possible without human provocation, and the sometimes-scary lifelessness of computers is really only as scary as the lifelessness of a vacuum cleaner or a screwdriver. In short, they're tools: incredibly powerful, important and relied-upon tools, but still just tools. If we ever limit the expansion of technology, we will cost ourselves advances in medicine, food, water and air purification, clean energy, and crisis management. It is not an exaggeration to say that technological advances save lives when used responsibly. Instead of looking at technology suspiciously, we need to ask, "How can we use this technology? How can we develop it to better serve our needs?" Just as Prometheus surely scared his friends by wielding fire, we will no doubt earn criticism and condemnation for allowing and encouraging the pursuit of new technologies. But, like Prometheus, we will find it easy to ignore those criticisms with a full belly - or a robot hygienist meticulously disinfecting our whole house, as the case may be.

References

Butler, Samuel. (1906). Erewhon, or Over the Range. Retrieved from http://www.gutenberg.org/files/1906/1906-h/1906-h.htm
Cadwalladr, Carole. (2014, February 22). Are the Robots About to Rise? Google's New Director of Engineering Thinks So... The Guardian. Retrieved from http://www.theguardian.com/technology/2014/feb/22/robots-google-ray-kurzweil-terminator-singularity-artificial-intelligence
Goldstein, Miriam. (2009, May 13). The Dark Secrets That Dolphins Don't Want You to Know. Slate. Retrieved from http://www.slate.com/blogs/xx_factor/2009/05/13/dolphins_are_violent_predators_that_kill_their_own_babies.html
Lee, Timothy B. (2014, August 22). Will artificial intelligence destroy humanity? Here are 5 reasons not to worry. Vox. Retrieved from http://www.vox.com/2014/8/22/6043635/5-reasons-we-shouldnt-worry-about-super-intelligent-computers-taking
Microsoft Research. (2014, July 14). On Welsh Corgis, Computer Vision, and the Power of Deep Learning. Microsoft Research. Retrieved from http://research.microsoft.com/en-us/news/features/dnnvision-071414.aspx
Moore, Gordon E. (1998). Cramming more components onto integrated circuits. Proceedings of the IEEE, 86(1). Retrieved from http://www.cs.utexas.edu/~fussell/courses/cs352h/papers/moore.pdf
Myers, PZ. (2010, August 17). Ray Kurzweil Does Not Understand the Brain. Science Blogs. Retrieved from http://scienceblogs.com/pharyngula/2010/08/17/ray-kurzweil-does-not-understa/
Tan, Lay Leng. (n.d.). Inteligent-less. Innovation: The Singapore Magazine of Research, Technology and Education, 6(1). Retrieved from http://www.innovationmagazine.com/innovation/volumes/v6n1/feature3.shtml
The Chinese Room Argument. (2004, March 19). In Stanford Encyclopedia of Philosophy. Retrieved from http://plato.stanford.edu/entries/chinese-room/

Thursday, November 27, 2014

Schedule Change - Final Draft Due December 7



Hey guys,

I have some very important news - The deadline for the final draft has been moved up to December 7 at 11:59 pm. December 14 is too close to your final exams, so this way you can focus on preparing for those exams.

If you submitted your second draft on time, you will receive feedback by Sunday at 11:59 pm. Most of you have done a very good job with your second drafts and should be able to write an excellent final draft by the new deadline.

As always, please bring your questions to me, Agnes Teacher, or your Korean teachers.

Monday, November 24, 2014

APA Citations Example

Brains Beat Binary

Every day we marvel at the power of our tiny computing devices - a phone that knows whether or not it is in a purse, a watch that tracks your calendar, or shoes that help you exercise optimally by measuring your heart rate. However, we usually forget perhaps the most amazing computing device we all have - our brains. Perhaps computers can beat the brain in certain, limited computational tasks, but overall the brain outperforms every man-made object to date. Think about it - your brain keeps you balanced while you walk, helps you decide when and what to eat, regulates your emotions, and allows for all the art and culture ever produced. In fact, you couldn't even think about this without your brain, a task no computer could accomplish. Although computers are constantly evolving, they will never be as powerful as the human brain.


Since Samuel Butler first expressed his fears about the rapid development of machinery, humans have fixated on an imagined future in which we are enslaved, or worse, by our own creations. Considering that he wrote, "There is no security against the ultimate development of mechanical consciousness," in 1906, before computers existed, his words have proved surprisingly prescient (Butler, 1906). Although computers are indisputably growing in computational power at an incredible rate (Moore, 1998), there is no realistic reason to fear a computer that can work autonomously in any meaningful way, let alone outsmart a human. Consider Microsoft's Project Adam. The software can sort and organize millions of photos quickly and accurately by analyzing the images; it can distinguish between extremely similar-looking breeds of dogs in a photo, for example (Microsoft Research, 2014). Practically, this means you might one day be able to do an image search for a sweater you want - say, "a mauve cashmere sweater with 3/4-length sleeves" - and, without any cumbersome text-based tagging or sorting, the search engine will analyze every image on the internet and filter out all the cashmere sweaters that are not mauve or that have full-length sleeves. This is an impressive accomplishment, and it represents one of the most incredible achievements of practical computing today. However, even this breakthrough does nothing to narrow the gap between computer and human intelligence. The technology cannot operate independently of human involvement; it still responds entirely to human input and human-written instructions, with human programmers and human infrastructure feeding it electricity and data from the internet.

If we examine the thinkers who predict a world of computers thinking on a human level, we encounter a mostly deluded camp of sci-fi lovers who base their theories more on Star Wars-inspired fantasy than on facts. Even serious and respected thinkers like Ray Kurzweil, Google's "futurologist" (even the title invites mockery, doesn't it?), have questionable motives when they predict computers that think like people within 15 years. The existence of his job relies on the hope that one day computers can reach that level. Similarly, Kurzweil's reputation would suffer if the idea that computers will match our thinking power became commonplace; certainly The Guardian would be less interested in him (Cadwalladr, 2014).

Artificial intelligence, and the abiding fear of computer-powered dominion over humans, is commonplace and popular fodder for idle discussion. However, considered realistically, these fears are misguided, and the hope of a computer as smart as a human is absurd.

Perhaps one of the challenges of adequately discussing this topic is the difficulty of defining the human brain in a way that allows its power to be compared to a computer's. Let's first look at the human brain through a terrible lens, and one that sci-fi stories constantly attribute to computers: the power to destroy. Consider the uniquely human ability to wage war and fight at a scale no other species matches (dolphins, as predatory and scary as they may be (Goldstein, 2009), will never launch a mortar barrage against an enemy pod or engage in genocide). Will robots ever reach this uniquely human metric? Noel Sharkey, a computer science professor at the University of Sheffield, England, says no: "They are just computer systems... the only way they can become dangerous is when used in military applications" (Tan, n.d.). To Sharkey, robots and artificial intelligence have the greatest growth potential in toy markets, a telling indication of how little potential for nefariousness he sees in future computing technology. He goes on to point out that the largest developments in robotics come not from software, but from hardware. Robots that can walk or navigate difficult terrain seem to be the new trend for robots mimicking human behavior.

An article from Vox.com makes an interesting case for why computers will never be able to match human intelligence: "A computer program has never grown up in a human family, fallen in love, been cold, hungry or tired, and so forth. In short, they lack a huge amount of the context that allows human beings to relate naturally to one another" (Lee, 2014). Basically, the argument is that even if a computer could match our brain's computational power (a very far-off and unlikely possibility (Myers, 2010)), it would never be able to pass as a human because it lacks the experiences that really create our humanity. In other words, humans are so much more than our brain power - we are the products of our upbringings. Our tenacity, will, passions and dreams all come from the sum of our experiences, not from how fast we think. Because of that, computers will not be able to function at a human level of creativity or character, even if they can eventually perform more calculations per second than we can.


Actually, this supposition stems from a famous scenario from philosopher John Searle in the 1980s (The Chinese Room, 2004). He proposed that an Englishman with no knowledge of Chinese, if locked in a room with an instruction manual for reading and writing Chinese characters, could successfully interpret and respond to messages passed under the door to him from a native Chinese speaker on the outside of the room. Theoretically, given enough time, the Englishman could respond so accurately that the native Chinese speaker would be sure that she was in fact corresponding with another native Chinese speaker.  Essentially, the Englishman would have passed himself off as a Chinese person with no contextual understanding of what it means to be Chinese. The extension of the argument into artificial intelligence is that even if we create a computer that can mimic and interact with humans so convincingly that we believe we are conversing with a real human, that machine will not be human because it lacks the contextual understandings of humanity.

Whether we define the brain by what it produces (in this paper I discussed the example of war, but many other examples would suffice - art or romance, for instance), by its raw computational power, or by the experiences that mold each molecularly similar brain into such a unique masterpiece, the conclusion remains the same: no computer, no matter how powerful or well conceived, can approach a human level of thought or existence.

It is not hard to find sources that will warn you of a coming robot apocalypse or a singularity that will render humans obsolete, either in entertainment - the Matrix or Terminator series - or in legitimate science - Ray Kurzweil and the whole school of futurologists. In part, I agree; computers and technology are capable of terrifying acts of destruction and cold inhumanity. What is important to remember, though, is that none of these acts are possible without human provocation, and the sometimes-scary lifelessness of computers is really only as scary as the lifelessness of a vacuum cleaner or a screwdriver. In short, they're tools: incredibly powerful, important and relied-upon tools, but still just tools. If we ever limit the expansion of technology, we will cost ourselves advances in medicine, food, water and air purification, clean energy, and crisis management. It is not an exaggeration to say that technological advances save lives when used responsibly. Instead of looking at technology suspiciously, we need to ask, "How can we use this technology? How can we develop it to better serve our needs?" Just as Prometheus surely scared his friends by wielding fire, we will no doubt earn criticism and condemnation for allowing and encouraging the pursuit of new technologies. But, like Prometheus, we will find it easy to ignore those criticisms with a full belly - or a robot hygienist meticulously disinfecting our whole house, as the case may be.

References

Butler, Samuel. (1906). Erewhon, or Over the Range. Retrieved from http://www.gutenberg.org/files/1906/1906-h/1906-h.htm
Cadwalladr, Carole. (2014, February 22). Are the Robots About to Rise? Google's New Director of Engineering Thinks So... The Guardian. Retrieved from http://www.theguardian.com/technology/2014/feb/22/robots-google-ray-kurzweil-terminator-singularity-artificial-intelligence
Goldstein, Miriam. (2009, May 13). The Dark Secrets That Dolphins Don't Want You to Know. Slate. Retrieved from http://www.slate.com/blogs/xx_factor/2009/05/13/dolphins_are_violent_predators_that_kill_their_own_babies.html
Lee, Timothy B. (2014, August 22). Will artificial intelligence destroy humanity? Here are 5 reasons not to worry. Vox. Retrieved from http://www.vox.com/2014/8/22/6043635/5-reasons-we-shouldnt-worry-about-super-intelligent-computers-taking
Microsoft Research. (2014, July 14). On Welsh Corgis, Computer Vision, and the Power of Deep Learning. Microsoft Research. Retrieved from http://research.microsoft.com/en-us/news/features/dnnvision-071414.aspx
Moore, Gordon E. (1998). Cramming more components onto integrated circuits. Proceedings of the IEEE, 86(1). Retrieved from http://www.cs.utexas.edu/~fussell/courses/cs352h/papers/moore.pdf
Myers, PZ. (2010, August 17). Ray Kurzweil Does Not Understand the Brain. Science Blogs. Retrieved from http://scienceblogs.com/pharyngula/2010/08/17/ray-kurzweil-does-not-understa/
Tan, Lay Leng. (n.d.). Inteligent-less. Innovation: The Singapore Magazine of Research, Technology and Education, 6(1). Retrieved from http://www.innovationmagazine.com/innovation/volumes/v6n1/feature3.shtml
The Chinese Room Argument. (2004, March 19). In Stanford Encyclopedia of Philosophy. Retrieved from http://plato.stanford.edu/entries/chinese-room/

Week 15 APA Citations

There are three main styles for academic writing - MLA, APA, and Chicago. We use APA style because we are working in the social sciences (most of your essays are about social relationships or human behavior). Here is detailed information about the APA formatting style. NOTE: You don't need to follow all of these formatting rules, but you DO have to cite your sources according to APA citation rules.



*REMEMBER: 
If you plagiarize ANY part of your final draft, 
you will receive a 0/5 for the final draft.*

This week is all about avoiding plagiarism. You can avoid plagiarism by properly citing your sources. That means giving credit to the source of ideas and quotes.
  1. Read this blog post.
  2. Look at my example second draft with correct citations.
  3. Fix your own citations using the APA Style Guide from Purdue's The OWL.
A citation is an attribution, and an attribution is an acknowledgement of someone's work. For example, writing Sam Landfried said, "Don't plagiarize!" is citing Sam Landfried.



You need citations when you:
  1. use a quotation.
  2. paraphrase.
  3. use statistics or data.
  4. use any information from someone besides you.
When you use someone else's source, you must stay true to their intent. I mean, if I say, "Plagiarism is a terrible thing and I hope you never do it," do not edit that quote to
Sam Landfried says, "Plagiarism is a ... thing and I hope you ... do it." 
That would be very dishonest. For example, in my second draft, I wrote this paragraph:
Or, to look at an example of cutting edge technology trying more directly to mirror the power of the human brain, let's consider the Human Brain Project's effort to recreate the human brain's neural network by networking millions of computers. Their hope is that one day the network will be so sophisticated that it will have the same plasticity and power of a human brain. Even though there are real people with real plans to accomplish this, on their own website they acknowledge how unfeasible this project is in reality, and why even if it is created it will not really rival human brain power. First, the power consumption of their current model is more than an obstacle, it is a concrete barrier. The technology required would require hundreds of millions of times the power of the human brain. That means that to power one single hypothetical brain, it would require the entire power production of several small countries combined - for one "brain". 
I decided, though, that I was misrepresenting this source. I was dishonest about what the website said. So, I removed the paragraph and source from my final draft. Make sure your own essays reflect your sources accurately and honestly.


Let's look at some examples of acceptable citation. In each example the source is Sam Landfried:
  1. Quotation:
    Sam Landfried says, "Don't plagiarize!"
  2. Paraphrase:
    Sam Landfried says plagiarism is a serious academic crime. 
  3. Statistics:
    According to Sam Landfried, 0% of people who plagiarize their final draft will receive credit.
  4. Other times:
    Many people, including Sam Landfried, think plagiarism is easy to avoid, so those who plagiarize are just lazy.
There is an alternative form of citation called "parenthetical" citations. They look like this:
Plagiarism is dishonest, unethical and will earn you a 0 on your final draft (Landfried, 2014). 
Both forms of citation are acceptable. At the end of your essay, you need to include your list of references. Every source you cite in your text must appear in your reference list, and if a source is not cited in your text, do not put it in your reference list. Here are the detailed instructions for how to write your reference page (it will replace your bibliography). List your sources in alphabetical order. It looks like this:

References

Landfried, Samuel. (2014, November 25). Week 15 APA Citations [Blog post]. Retrieved from http://samteachersperformancetest.blogspot.com/2014/11/week-15-apa-citations.html





Thursday, November 20, 2014

Grammarly Reports for Second Draft

Kim Jinyoung Teacher has done it again! Here are your Grammarly reports for the second draft.

Also, please, in your final draft, give your essay a title.

Good work y'all!