What I read
In "Artificial intelligence 'did not miss a single urgent case'" (2018), Fergus Walsh writes that artificial intelligence (AI) shows great promise for diagnosing medical conditions, such as cancers, following the results of a test of Google's DeepMind system, which proved itself as accurate as the world's best medical experts in detecting eye problems. Because of the algorithms it uses, this development in AI also avoids the "black box" problem, in which it is unknown how a program reaches its results, a problem that lessens trust in those results.
___________________________________
My response
I knew that AI was evolving rapidly, but this report still surprised me. Computers, which I think only came to notice around the Second World War, less than 100 years ago, with the work of people like the mathematician Alan Turing, have evolved rapidly, and their rate of development continues to accelerate. When I studied computer science at university back in the late 1970s, the computer (there was only one) took up a full room, and we had to type our programs on cards to feed them into the machine, and then wait for the output the next day! That university computer was less powerful than the one I now carry in my pocket. And the first computer I ever bought was a high-end laptop in 1994, which had an 80 MB hard drive and a whopping 2 MB of RAM - nothing by today's standards!
It sounds like science fiction, and probably still is, but given the rapid rate of development, I don't think it can be long before our machines are as intelligent as we are, and the day after that, they will be far more intelligent. I'm not sure that I want to think where they might be a week later.
The reference to the "black box" problem also interested me because it seems to me we have exactly the same problem with our own minds: we often do not know how our minds work, which is why the research discoveries of economists, psychologists and others can be so surprising, and seemingly counter-intuitive. And of course, we are not very good at predicting our own actions in the future, or even understanding ourselves very well in the present. When asked, most people think that they are more intelligent than average, better leaders than most, better informed than most people, more generous and so on than reason or the facts support. And as computers continue to become more complex and evolve ever more independently of humans, it seems to me inevitable that we won't be able to understand how they work. But I'm not sure how big a problem this is.
In the meantime, they are ever more sophisticated tools for us to use in ever more areas of our life. I just hope that when they are vastly more intelligent than we are, they treat us more nicely than we humans treat the less intelligent species on Earth.
___________________________________
My question
Should we fear the increasing abilities of computers?
___________________________________
Reference
- Walsh, F. (2018, August 13). Artificial intelligence 'did not miss a single urgent case'. Retrieved from https://www.bbc.com/news/health-44924948
Only 86 words for this summary. This came after a major revision, which is normal when writing a summary, and of course I had to read the article several times before attempting my first draft.
It is not surprising that many people are concerned about this issue when they see breakthroughs involving AI on the news. I think one culprit could be the plots of many science-fiction films, which show how dangerous robots could be if they became humanity's enemy. However, as far as I know, all these so-called AIs are just very complex statistical models designed to automatically detect patterns in data. You can think of each one as a function for some specific task: it takes an input and produces an output, and that's it. They can't think for themselves or do any task other than the one they were created for. We call them AI because the tasks they can do are so complex that even human experts need many years of training to perform them.
Anyway, I wouldn't mind if someone invented a computer that could think, learn and speak like a human, because that would mean we had finally understood what consciousness really is. Right now, that is still an unanswered question, so it is very unlikely that any computer or robot will harm humans the way we see in science-fiction films.