Perhaps I'm behind the times, but the field of 'machine learning' is all the rage these days. I can only claim to know what it's about from simple definitions found on the internet.
What I do understand is that 'machine learning' is a sub-field of the broader world of what's termed artificial intelligence. Using tools to teach machines to automatically learn from collections of data, and to improve on what they have experienced, sounds exciting and promising.
But do we really know how humans reason? At best, what we have are models of how humans are supposed to think intelligently. And perhaps more correctly, research has models of how a subset of humans on this planet are supposed to think 'intelligently' and make decisions on a daily basis. In other words, everything we know about how humans think intelligently comes from a pool of subjects who volunteer to participate in research. Is my thinking far-fetched?
Now, do humans need formal rules to make inferences? If Carly knows that chicken pox is associated with dark spots on the skin, and that Jim has dark spots, she infers that Jim might have chicken pox. Did this conclusion require formal logic? No. It is entirely possible Carly used the content of the sentences themselves to make the deduction, to imagine possibilities.
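To make the contrast concrete, here is a minimal sketch, entirely my own illustration and not taken from any particular library, of how a machine would typically need Carly's inference spelled out as an explicit formal rule before it could reach the same conclusion. The names facts, rules, and infer are hypothetical placeholders for the purpose of the example.

```python
# A toy rule-based inference step. Carly needed no such explicit rule;
# the machine in this sketch cannot conclude anything without one.

# Known facts as (subject, predicate) pairs.
facts = {("Jim", "has_dark_spots")}

# Each rule pairs a condition check with the conclusion it licenses.
rules = [
    (lambda subject, known: (subject, "has_dark_spots") in known,
     "might_have_chicken_pox"),
]

def infer(subject, known, rule_list):
    """Apply every rule to the subject and collect the conclusions it permits."""
    conclusions = set()
    for condition, conclusion in rule_list:
        if condition(subject, known):
            conclusions.add((subject, conclusion))
    return conclusions

print(infer("Jim", facts, rules))
# {('Jim', 'might_have_chicken_pox')}
```

The point of the sketch is only that the machine's 'reasoning' here is entirely contained in rules someone wrote down in advance, which is exactly the question the post raises about whether that is how humans actually do it.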
The news media has lately been filled with humans trying to understand 'difficult, complex' topics, topics for which we have no precedent to learn from or to navigate toward a solution.
For instance, Brexiteers have little clue how to get Britain out of the European Union without incurring a series of dark uncertainties few really understand. Flight accident investigators scramble for answers as to how airplanes, electronic 'thinking' machines made by humans, nose-dived into the ground twice in less than six months, killing over 300 people in two separate instances. Separately, safety experts sing the praises of completely automating speed limits in cars by 2020. We want to wrest control away from the human being, because ... it must be exciting.
Others look for clues on the ground explaining the precise moments of the meteor impact that apparently led to the disappearance of the dinosaurs. This is another interesting piece of research, and I wonder whether any machines were truly involved in the study. Why would you need a machine to study this issue anyway?
These news stories show the complexities behind real learning and real decision making. Can machines really imagine possibilities using content and the 'meanings' behind it, possibilities that might lead to reasoning outside of logic? And do we know enough about how humans attach meaning to data, in cases needing no logic, before we take it as a given that machines can 'learn' the same things too, if only we force them to think in certain ways? Are explorations in these two fields, human learning and machine learning, going on in parallel and feeding into each other?
What do we not know about humans, and therefore never put into machines, that eventually leads us to build what are essentially incomplete models of humans?
We try to mimic decision making in 'artificial intelligence' based on the limited set of knowledge we have about humans. The biases in that knowledge form the underbelly of the 'machine intelligence' we will have in our transportation systems, our appliances, and perhaps even in the robot that helps deliver your baby tomorrow. Aldous Huxley's 'brave new world' is really an uncertain world.
* * *
1 comment:
Knowledge and wisdom can be achieved in ways we humans cannot imagine. Our approach in this field is mostly human-centric. Machines can arrive at a similar destination using different means. For example, their analytical skills can improve by monitoring gargantuan - unthinkable for humans - amounts of data, without checking the quality. Machines can transfer "experiences", or in some sense knowledge, instantly across their network, and equally for all members. These are attributes that humans have difficulty comprehending, since our limited existence and networking capabilities pose definite borders. The same applies to the rest of the animal kingdom. We tend to take the human way of thinking and of acquiring wisdom as the norm, while there are other ways as well.