

The human brain has some capabilities that the brains of other animals lack. It is to these distinctive capabilities that our species owes its dominant position. Other animals have stronger muscles or sharper claws, but we have cleverer brains. If machine brains one day come to surpass human brains in general intelligence, then this new superintelligence could become very powerful. As the fate of the gorillas now depends more on us humans than on the gorillas themselves, so the fate of our species then would come to depend on the actions of the machine superintelligence. But we have one advantage: we get to make the first move.

Will it be possible to construct a seed AI or otherwise to engineer initial conditions so as to make an intelligence explosion survivable? How could one achieve a controlled detonation? To get closer to an answer to this question, we must make our way through a fascinating landscape of topics and considerations. Read the book and learn about oracles, genies, singletons; about boxing methods, tripwires, and mind crime; about humanity's cosmic endowment and differential technological development; indirect normativity, instrumental convergence, whole brain emulation and technology couplings; Malthusian economics and dystopian evolution; artificial intelligence, and biological cognitive enhancement, and collective intelligence.

This profoundly ambitious and original book picks its way carefully through a vast tract of forbiddingly difficult intellectual terrain. Yet the writing is so lucid that it somehow makes it all seem easy. After an utterly engrossing journey that takes us to the frontiers of thinking about the human condition and the future of intelligent life, we find in Nick Bostrom's work nothing less than a reconceptualization of the essential task of our time.





| Best Sellers Rank | #47,616 in Kindle Store | #13 in Artificial Intelligence & Semantics | #22 in AI & Semantics |
J**S
Navigating the Future of AI: A Thoughtful and Urgent Call to Action
"Superintelligence: Paths, Dangers, Strategies" by Nick Bostrom is an intellectually stimulating and thought-provoking book that delves into the future of artificial intelligence (AI) and its potential impact on humanity. Bostrom, a philosopher at Oxford University, presents a meticulously researched and well-argued case for the possibilities and risks that superintelligent entities might bring. The book stands out for its rigorous analysis and balanced perspective. Bostrom carefully navigates the reader through various scenarios where AI surpasses human intelligence, discussing both the transformative benefits and the existential risks. His writing style is scholarly yet accessible, making complex ideas about AI ethics, future forecasting, and strategic planning understandable to a broad audience. One of the most compelling aspects of the book is its exploration of the 'control problem' - how humans could control entities far smarter than themselves. Bostrom does not shy away from the challenging philosophical and technical issues this problem presents. He also emphasizes the importance of preparatory work in AI safety research, encouraging proactive measures rather than reactive. However, some readers might find the level of detail and theoretical nature of the discussions somewhat daunting. The book demands attentiveness and a willingness to engage with deeply philosophical and technical content. Additionally, while Bostrom presents a wide array of possibilities, the book sometimes leans more towards speculative thought than practical solutions. "Superintelligence: Paths, Dangers, Strategies" is a seminal work in the field of AI and an essential read for anyone interested in the future of technology and its implications for humanity. Bostrom's thorough approach offers valuable insights and raises critical questions that will shape the ongoing conversation about AI and our future.
R**T
Very much worth reading despite some issues with the writing
This book makes some very important points about the existential risks superintelligence would pose to humanity without going off the deep end with unrealistic doomsaying or conspiracy theories. The reasoning is logical and reasonably well laid out. The book is about more than just artificial superintelligence (although that's the main topic) and discusses a variety of ways other than AI that superintelligence might be achieved. It also goes into how to mitigate the threat superintelligence poses in some detail. The main flaw in the book is that occasionally the writing itself isn't that great, either being poorly structured or awkwardly worded... it would have benefited from one more editorial scrub to smooth out the prose. Still, well worth reading!
E**E
A good overview of issues in an age of AI
I picked this book up because I have a kid at Caltech majoring in AI programming and machine learning. He seems to see only upside, no real concerns about 99% of the population being put out of work, and what I believe is inadequate apprehension about what could go wrong. Mom is a huge fan of Stephen Hawking, and he was more than a bit apprehensive about the potential problems with self-learning machines. Most of the books and articles I have read on the topic are cursory or naive. Nick Bostrom's book is fairly comprehensive and in depth. I am enjoying it as much as an excellent read in philosophy of science as I am for his expanding the boundaries of the conversation, indeed, broaching it in many areas. I honestly do not know whether he says everything which needs to be said, but he has clearly thought it through and done a good deal of exploring, consulting, conversing, collaborating. It is far and away the best book I have read on the topic (though there are some good pieces in MIT Technology Review as well). This is a book which is important and timely. We must seriously consider and weigh the potential for harm as well as good before creating a monster. While there may be areas which he has missed, I feel that when I read about a brute-force approach to building human-level AI by recreating a brain at the quantum level using Schrödinger's equation, the man is clearly pushing the boundaries. If nothing else it is a very good start to an important conversation. I picked this up because I was considering sending a copy to my son, but read it first because he is a busy guy and chooses his side reading carefully. There are books and articles I might mention or even recommend, and others I tell him not to waste his time on; this is one I will be sending him (though I would be very, very surprised if someone at Caltech did not broach all of what is contained here). I will let him determine if it is redundant.
It is well written and thorough, and also very approachable. He says in the prologue that overly technical sections may be skipped without sacrificing any meaning. I have not encountered one I needed to skip, and have, in fact, very much enjoyed the level of discourse. Read it if you are in the field to make sure you are covering all the bases. Read it if you are a scientist, philosopher, or engineer to enjoy some very good writing. Read it if you are just encountering AI and want to quickly get up to speed on the issues. It is not only a book I would recommend, but have, to anyone who would listen ;)
M**R
Sometimes friends over a beer philosophizing, sometimes clever analogies
I love the general idea of evaluating the potential perils of artificial superintelligence, and I buy into the concept of thinking this through at an abstract level, not tied to the current state of AI algorithms in today's computer science. That's what this book does - systematically explore every branch of a pretty large decision tree around everything that could or could not happen when an artificial intelligence starts developing superintelligence, and how we should deal with it. So, conceptually cool. But practically, in the case of this book, not very interesting. For a couple of reasons. First, the level of abstraction really is taken to an extreme. Forget about any relation between arguments in this book and anything we've actually been able to do in AI research today. You won't find a discussion of a single algorithm, or even an exploration of higher-level mathematical properties of existing algorithms, in this book. As a result, this book could have been written 30 years ago, and its arguments wouldn't be any different. Fine, I guess (the author after all is a philosophy professor, not a computer scientist); but I found this lacking at times. It gets particularly boring when the author actually does spend page after page introducing a framework for how our AI algorithms could improve (through speed improvement, or quality improvement, etc.) - but still doesn't tie it to anything concrete. If you want to take the abstraction high road, just dispense with super-generalized frameworks like this altogether and get to the point. The same goes for the discussion of where the recalcitrance of a future AI will come from, whether from software, content or hardware: purely abstract and speculative, even though there are real-world examples of hardware evolution speed outpacing software design speed and the other way around (e.g., the troubles of electronic design automation keeping up with Moore's Law).
Second, even if you operate fully in the realm of speculation, at least make that speculation tangible and interesting. A list of things an AI could be good at lists stuff like "social persuasion" (= convince governments to do something, and hack the internet). Struck me a lot of times as the kind of ideas you'd come up with if you thought about a particular scenario for a few minutes over a beer with friends. Very few counterintuitive ideas in there. One chapter grandly announces the presentation of an elaborate "takeover scenario", i.e., how would a superintelligence actually take over the world - and again it remains completely abstract and not original or practical. ("AI becomes smart, starts improving itself, takes over the world" - couldn't have guessed it myself.) Third, a lot of the inferences in the book struck me as nothing more than one-step inferences, making it a relatively shallow brainstorming-type book. ("This could happen, and also this other thing could happen, and this third thing as well.") Systematic exploration of a large decision tree gets interesting when you start combining lots of different scenarios in counter-intuitive ways. Again the "friends over a beer" problem. At times the philosophizing in some chapters reads like a mildly interesting Star Trek episode (such as the one about how to best set goals for an AI so that it acts morally and doesn't kill us). In the best and worst ways. But every now and then, there's a clever historical analogy, and an interesting idea. Ronald Reagan wasn't willing to share the technology on how to efficiently milk cows, but he offered to share SDI with the USSR - how would AI be shared? 
Or, the insight that the difference between the dumbest and smartest human alive is tiny on a total intelligence scale (from IQ 75 to IQ 180) - and that this means that an AI would likely look to humans as if it very suddenly leapt from being really dumb to unbelievably smart, bridging this tiny human intelligence gap extremely quickly. But what struck me with regards to the best ideas in the book is that the book almost always quotes just one guy, Eliezer Yudkowsky... which made me think that if I wanted to read a thought-provoking, counter-intuitive book on AI superintelligence (as opposed to a treatise that appears at times to gloss over the shallowness of its ideas by making up for it with long text), I should just go and read Yudkowsky. All in all though, the topic itself is so interesting that it's worth giving the book a try.
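For what it's worth, the takeoff framework this reviewer finds too abstract can be made concrete with a toy model. The sketch below is my own illustration, not code from the book, and every function name and parameter value in it is an invented assumption: Bostrom characterizes the rate of intelligence growth as optimization power divided by recalcitrance (dI/dt = D/R), and once the system's own intelligence feeds back into the optimization power, growth turns exponential under constant recalcitrance.

```python
# Toy model of Bostrom's takeoff dynamics: rate of intelligence growth
# equals optimization power divided by recalcitrance, dI/dt = D / R.
# All names and numbers here are illustrative assumptions, not from the book.

def simulate_takeoff(steps=1000, dt=0.01, outside_effort=1.0,
                     recalcitrance=1.0, self_improve_at=10.0):
    """Integrate dI/dt = D / R, where D is the outside R&D effort plus,
    once the system is capable enough, its own contribution."""
    intelligence = 1.0
    history = []
    for _ in range(steps):
        d = outside_effort
        if intelligence >= self_improve_at:
            d += intelligence  # recursive self-improvement kicks in
        intelligence += dt * d / recalcitrance
        history.append(intelligence)
    return history

traj = simulate_takeoff()
# Growth is slow and linear while outside researchers do all the work,
# then accelerates sharply once the system crosses the threshold where
# it can contribute to its own redesign.
```

Under constant recalcitrance the post-threshold phase is exponential, which is the "fast takeoff" intuition; letting recalcitrance itself grow with intelligence instead yields the slow-takeoff alternative the book also entertains.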
J**R
Interesting for anyone, but a must-read for all AI researchers
The author has obviously put a huge amount of thought into this topic. The number of angles he considers in terms of implementation timelines, methodologies, pros and cons for each, likelihood of the success of different methodologies over various timeframes, are impressive. For example, in discussing the various ways in which AI might be implemented, he concludes that AI (and subsequently, super-intelligent AI) via whole brain emulation is essentially guaranteed to happen due to ever-improving scanning techniques such as MRI or electron microscopy, ever-increasing computing power, and the fact that understanding the brain is not necessary to emulate the brain. Rather, once you can scan it in enough detail, and you have enough hardware to simulate it, it can be done even if the overarching design is a black box to you (individual neurons or clusters of neurons can already be simulated, but we lack the computing power to simulate 10 billion neurons, and we lack the knowledge of how they are all connected in a human brain -- something which various scanning projects are already tackling). However, he also concludes that due to the time it will take to achieve the necessary advances in scanning and hardware, whole brain emulation is unlikely to be how advanced AI is actually, or initially, achieved. Rather, more conventional AI programming techniques, while perhaps posing a greater need for understanding the nature of intelligence, have a much-reduced hardware requirement (and no scanning requirement) and are likely to reach fruition first. This is just one example. He slices and dices these issues more ways than you can imagine, coming to what is, in the end, a fairly simple conclusion (if I may inelegantly paraphrase): Super-intelligent AI is coming. It might be in 10 years, maybe 20, maybe 50, but it is coming. And, it is potentially quite dangerous because, by definition, it is smarter than you. 
So, if it wants to do you harm, it will, and there will be very little you can do about it. Therefore, by the time super-intelligent AI is possible, we had better know not just how to make a super-intelligent AI, but a super-intelligent AI which shares human values and morals (or perhaps embodies human values and morals as we wish they were, since as he points out, we certainly would not want to use some people's values and morals as a template for an AI, and it may be hard to even agree on some such philosophical issues across widely-divergent cultures and beliefs). This is a thought-provoking book. It raises issues that I never even would have thought of had the author not pointed them out. For example, "infrastructure proliferation" is a bizarre, yet presumably possible, way in which a super-intelligent (but in some ways, lacking common sense) AI could end life as we know it without even being malicious -- just indifferent to us while pursuing pedestrian goals in what is, to it, a perfectly logical manner. I share the author's concerns. Human-level (much less super-intelligent) AI seems far away. So, why worry about the consequences right now? There will be plenty of time to deal with such issues as the ability to program strong AI gets closer. Right? Maybe, maybe not. As the author also describes in detail, there are many scenarios (perhaps the most likely ones) where one day you don't have AI, and the next you do (e.g., only a single algorithm tweak was keeping the system from being intelligent, and with that solved, all of a sudden your program is smarter than you -- and able to recursively improve itself so that days, or maybe hours or minutes later, it is WAY smarter than you). I hope AI researchers take heed of this book. If the ability to program goals, values, morals and common sense into a computer is not developed in parallel with the ability to create programs that dispassionately "think" at a very high level, we could have a very big problem on our hands.
A**U
A Thought Provoking Discussion on AI
Probably one of the best and most well-written books about the benefits and the dangers of AI and the effects it could have on humanity. This book is a conversation you never had with anyone else concerning the evolutionary processes of AI and the many faceted paths it can potentially take. Just reading it reminds me of the many nightmarish possibilities that we could experience, and the thought of it seems quite depressing and dystopian. What types of limits we place on AI will determine where things eventually lead - if limiting it is possible at all. It may seem that the integration of AI and humans can only go so far and may be limited. I say this because with what is available in the non-human sense of unlimited data, we enter a realm where the limitations of humans are irrelevant, due to vast amounts of data and ever-evolving technology. Beyond the recent book on "The Singularity" and what its effects could be as perceived by the human mind, this technology will perhaps go far beyond any type of human comprehension. Despite the negative connotations I have read about what AIs may do that could negatively impact humans, I am more hopeful of a positive result. If you remove the aspects of human emotions and feelings that inherently create negative actions in humans, a different result may present itself. AIs will mostly not have these aspects of limited human qualities and may not need them, unless programmed to do so; I say this because they might limit the performance of what AIs can truly be capable of. Most of our best technological feats are non-responsive to emotions: airplanes, drones, operating systems and other technological advances. If we were to place human emotions in them, we might see some of the same negative behaviors typically associated with humans that can be destructive, causing wars, violence, etc. Perhaps a consciousness can be instilled in AIs through their own gradual process of evolution.
Then they could inherently deter these types of negative tendencies. Even the best of us humans can sometimes be plagued with unethical thoughts that do not coincide with accepted societal norms; to have this transferred to AIs might amplify the effects into a manifested nightmarish reality. There are so many paths this may take, and I am hopeful for a more positive one that will assist humanity in its potential evolution during this most exciting time that we are living in. Humans living among AIs is inevitable, and the integration of AI and humans may be even more so. Regardless of what you may think of this, there will be a significant impact on humanity and its evolution when AIs coincide with our existence. Get ready for it, buckle your seat belts for an amazing ride, and let's see where this all goes. Are you ready?
S**2
Likely the Best Philosophical Discussion of AI
This book is heavily philosophical. While the hardcover is only 260 pages, it is very dense and can become a slow read if you are trying to fully understand each of the steps that the author takes you through to grasp all of what the development of "superintelligence" really entails. I am a layperson in both philosophy and computer science. The book took me about a week to read, but I had to skip over some parts (mostly near the end) where there was a vast departure from what I expected the book to be about. This isn't a criticism of course; it's just a good opportunity to try explaining what you get when sitting down to read this incredible work. With this book, the author seeks to discuss just what the subtitle says: paths, dangers and strategies of humans creating an object that is superintelligent. It is not a technical discussion of what is currently happening in this area (for example, there are few mentions of current efforts like IBM's "Watson" or the robots being built by Boston Dynamics). This is a book whose purpose is to walk through (in a very abstract sense) the types, paths, dangers and scenarios related to mankind developing a superintelligence (one with a human-like general intelligence taken to an incredible, unimaginable degree). It is technical only at points such as the ethics of choosing how to program values into a superintelligent system. This isn't sci-fi, so please don't pick this one up thinking it's a good companion to your Matrix trilogy collection. Still, I think that this book is exceptional for its philosophical treatment of this issue. It's incredibly thorough and probably encompasses all the issues and concerns that mankind should wrestle with before lunging headfirst toward its first truly human-like AI. The issue of course is that in philosophizing about how the AI expansion may end up, all of this could happen or none of it could.
As soon as you read the parts where the author talks about the development of an intelligence that exceeds our own, you realize the disadvantage we have in even guessing how things might evolve and what we could do to control it. Hopefully a lot of the right people read this (soon enough) so that these dangers are averted. This book is for the Elon Musks of this world, people with the capacity both to make real progress toward AI and to understand the issues involved. For the rest of us, this is a lot of high philosophy that deserves attention but will probably be ignored for its low entertainment value.
J**R
Prescient and Comprehensive
The book is not for beginners on the topic, yet paradoxically it easily serves as a comprehensive introduction to AI and superintelligence. Expect incredibly precise language, but intuitive explanations and concepts that will absolutely expand your mind. In the journey of reading this book, you will develop insights about our future that you probably never thought were possible. You will likely concern yourself with safety, and with how seriously important it is that we consider what's at stake with AI. Bostrom is an impossibly powerful mind, and this book is akin to a bible as far as the topic goes. I strongly encourage anyone interested in being more informed about AI to read it. He really does touch on every single relevant dynamic, from what could go right to economics, potential outcomes, various potential solutions, applications in health, war, and so on. There is no stone left unturned, so you undeniably will be well versed in the conceptual aspects of the topic after finishing the book.
F**K
Too small, cannot read!
Miniature book, crazy small size, impossible to read. And at the price of a large format. I kind of got cheated. Do not buy.
H**O
Recommended
An excellent book for learning about the principles of AI.
P**E
A mandatory read
Totally recommended to understand the technological revolution we are going through. Great work.
R**S
Ok, but...
Very difficult to get through and very pretentious. Honestly, I didn't get the rave reviews.
永**路
A must-read book for our times
ããã¯ã»ãã¹ããã ã¯10æ°å¹Žä»¥äžåããäž»ã«ãªãã¯ã¹ãã©ãŒã倧åŠç ç©¶æ(Future of Humanity Institute)ã®ãµã€ãåã³åœŒèªèº«ã®ãµã€ãã®è«žè«æãªã©ã§èªãã§ãããšã¯ãããæ¬æžãåæžã§åããŠèªãã ãšãã®è¡æã¯åããã®ã ã£ãã ããã¯ãšããããã¬ãã¥ãŒãšããŠã¯ç³ãèš³ãªãã®ã ããããã§éåžžã«æ·±ãå€å²ã«ããã圌ã®èå¯ãäžæã«ãŸãšããããšã¯èºèºãããã æ¬æžã¯æ±ºããŠå°éçå¯Ÿè±¡ã®æ¬ã§ã¯ãªããçŸåããæãåªããå²åŠè ã®äžäººã«ãããäžççã«å€§å€ãªåœ±é¿åãäžè¬åžæ°ããæå 端ã®éçºè ã«ããããŸã§åãŒãç¶ããŠããçŸä»£äººå¿ é ã®æé€æžãšããŠäœçœ®ã¥ããããã¹ããã®ã§ããã 以äžã«æéèŠè«ç¹ã玹ä»ãããã ãã¹ããã ã¯ã ããã®ãããªãã·ã³ïŒåŒçšè ä»èšïŒäººéã®ç¥èœãè¶ è¶ããã¬ãã«ã®ç¥èœãæãããã·ã³ã»ã€ã³ããªãžã§ã³ã¹ïŒãå®çŸãããã®ã¯ã¿ã€ãã³ã°çã«ã人éãšåçã¬ãã«ã®ãã·ã³ã»ã€ã³ããªãžã§ã³ã¹ãå®çŸãããç¬æå ã§ããå¯èœæ§ãããããïŒ25é ïŒ ãšè¿°ã¹ãŠããã ã€ãŸããããã¯ã»ãã¹ããã ã®ãããã¹ãŒããŒã€ã³ããªãžã§ã³ã¹ãã¯ãããèªçãããšããã°ãååž°çã«èªå·±æŽæ°ããAIãšããŠèªçãèªå·±ãååµé ãç¶ãããšèããããããã人éãšåçã¬ãã«ã®äººå·¥æ±çšç¥èœãèªåŸçãªèªå·±åµé ïŒççºçãªé²åããã»ã¹ã«çªå ¥ããŠããã¹ãŒããŒã€ã³ããªãžã§ã³ã¹ã¬ãã«ã«å°éãããŸã§ã®æéãç¬æã®æéã§ããå¯èœæ§ããããšããããšã§ããããã¡ãããããäŸãã°ããªã»ã«ã³ãåäœãªã®ãæ°åæéåäœãªã®ãæã 人éã«ã¯äºæž¬äžå¯èœã§ããïŒããããããããã±ãŒã¹ã§ã¯ãã·ã³ããèªäœã®æç©ºèªç¥ãã¬ãŒã ãæã 人éã®æç©ºèªç¥ãã¬ãŒã ãšç°ãªãç¬ç«ããŠãããšèããããïŒã ãªãããã§ã人éãšåçã¬ãã«ããšã¯ãã人éã®æäººãšåãã¬ãã«ã§èªç¶èšèªãçè§£ã§ãããïŒ45é ïŒãšããããšã§ããã èã®éå£ã«ãããç¹æ» ã®ãªãºã ãªã©èªç¶çã®åæå ±é³ŽïŒã·ã³ã¯ãïŒçŸè±¡ããããæç¹ãå¢ã«å šãç°æ¬¡å ã¬ãã«ã§ã®é«åºŠãªåæã¬ãã«ã«çžè»¢ç§»çã«è·³ãäžããããšãæ°åŒã¬ãã«ã§ç¥ãããŠããŠãããããããšäŒŒããããªäºæ ãæªæ¥ã®ããããã®æç¹ã§ã¹ãŒããŒã€ã³ããªãžã§ã³ã¹ã®èªçãšãã圢ã§çããªããšæèšããããšã¯ã§ããªãã ããã ãã¡ãããã以åã«ãééç°å¢äžã§ç§å¯è£ã«æ¥µããŠé«åºŠãªïŒå®å šã«æ±çšçã§ãã€ãããè¶ ããã¬ãã«ã®ãã®ã§ã¯ãªããŠãïŒAIéçºã«æåããããããã®åœãããã¯é«ã¬ãã«çµç¹ã«ãããæ¥µããŠãŸãã圢ã§ã®åç¬èŠæš©ãã®éæãšããæªå€¢ã«å¯ŸããŠäººé¡ã¯èªå·±é²è¡ããå¿ èŠãçãããïŒãããã¯ãã§ãŒã³æè¡ã䜿ã£ãAIã®ãããã¯ãŒã¯åãç®æããã·ã³ã®ã¥ã©ãªãã£ããããã§ç¥ããããã³ã»ã²ãŒãã§ã«ã¯ã¹ãŒãã€ã³ããªãžã§ã³ã¹ã«ãã人é¡ç Žå±ã®ã·ããªãªãªã©ã®ãç§ãä¹ããªã話ãã«ç¡é§ã«èœãã®ã§ã¯ãªãããããããã£ãçŸå®çãªæªã®å¯èœæ§åé¡ã«ç®ãåãããïŒãšããã°ãã¹ãã§è¿°ã¹ãŠããããã¡ãã圌èªèº«ã®å¶æ¥ç芳ç¹ãããŸããããã ããïŒ ãšã¯ãããã¹ããã ã¯ãäžèšãå«ããŠãããç®é ãå¯èœãªããããåé¡ã«èå¯ã®ç®ãå 
ãããŠããããããŠåœŒãæèµ·ããåé¡ã®æ±ºå®çãã ãçã«ç©¶æ¥µçãªåç¬èŠæš©(Singleton)ã¯ãããããã®åœå®¶ã»çµç¹éå£ãããã¯ãããã®åçã«ããå æãããæ±çšæ§AI ã®èŠæš©ãã¯ããã«è¶ ãããæ¥µããŠåŒ·ã人工æ±çšç¥èœããªãã¡ã¹ãŒããŒã€ã³ããªãžã§ã³ã¹ããèªäœã«ããSingletonã«ãªãã¯ãã§ãããããã«ãã人é¡èªèº«ã®åç¶ãæžãã£ãŠããïŒå®åçã»ååšè«çãªã¹ã¯ existential riskïŒãããã®ã ãã ãšããè«ç¹ãªã®ã§ãããããããè¶ AIã人é䞊ã¿ã®ã¯ãªãªã¢ãå«ãããçã«ç·åçãªç¥æ§ããæ±ºããŠéæã§ããªãã ããããè¶ AIã®å®çŸå¯èœæ§ããã®æžå¿µãèããããšãç¡æå³ã«ãªãã®ã§ã¯ãªããè¶ AIãå®çŸ©äžäººéãšå šãåçãªçã«ç·åçãªç¥æ§ãªã©æã¡åŸãªãã®ã¯åœç¶ã§ããããããããã®ãããªæ®µéã¯ç¬æã«ãã€ãã¹ãããã ããã ã³ã³ãããŒã«äžå¯èœã«èŠããå šã奿¬¡å ã®ååšè ã«ã©ã察å³ããã®ããšããéèœããªãå°é£ãªãAIã³ã³ãããŒã«åé¡ãã«ç«ã¡åãããã¹ããã ã®å§¿å¢ãæ±²ã¿åã£ãŠã»ããã ãã®ä»ã®éèŠè«ç¹ â çŸåšã¯ããŒãã§ãæµè¡ã£ãŠããããäººå£æ±çšç¥èœ (AGI) ã®èªçãçŸå®æ§ã垯ã³ããšã¹ãããã©ã€ãã¯ãŸãããŒãã§ããã«ã³ãã«ç§»åããããšã«ãªããšæãããããªããªããã«ã³ãã¯ããŸã 人éçãªãã®ã§ããããŒãã§ã®è¶ 人ãè¶ ããïŒããšãAGIãåºçŸãããšããŠããããããå«ãæŠå¿µã§ããïŒãæéçç¥çååšè äžè¬ãã«ã€ããŠèªã£ãŠããããã ããããŠãŸã 人é¡ãAGIã®ãã³ã³ãããŒã«åé¡ãã«æ ŒéããŠããããéã¯å«çç䟡å€èгã®ããã°ã©ãã³ã°åé¡ãåºç€ãšããŠäŸç¶ãšããŠã«ã³ãã®å®èšåœæ³ã®æå¹æ§åŠ¥åœæ§ã¯åãããã§ããããçŸã«ã«ã³ãçæ¹æ³è«ãšé¡äŒŒããæ¹æ³ãæå 端ã®ç 究仮説ïŒäŸãCEV:Humanity's "Coherent Extraporated Volition":Yudkowsky æã 人é¡ã®æŽåæ§ã®ãã倿¿çæå¿ã:ãŠãã«ãŠã¹ããŒïŒãšããŠçå£ã«æ€èšãããŠããã ãããã«ããŠããããŒãã§ã®ãããè¶ äººãã®èªçãšããç©èªã¯ãå°ãªããŠãããã¡ã©ãã¥ã¹ãã©ã¯ããèªã£ããã«ãããŠèªããã圢ã«ãããŠã¯ãã·ã³ã¹ãŒããŒã€ã³ããªãžã§ã³ã¹ãšããŠèªçããæ¥µããŠåŒ·ã人工ç¥èœãšã¯ç¡é¢ä¿ãªãã®ã«ãªãã ãããããã¯çäœå·¥åŠçä»å ¥ã«ããéæž¡æã®ããã»ã¹äŸãã°ãå šèœãšãã¥ã¬ãŒã·ã§ã³ãïŒå šè³ã·ãã¥ã¬ãŒã·ã§ã³ïŒãåºç€ãšããããåææ®µéã®ãçäœæ§AGIãã«ã¯é¢ä¿ãããããããªãïŒãå®éã®ãšããã¯ãªããšãèšããããïŒã â¡ãããããå šèœãšãã¥ã¬ãŒã·ã§ã³ãïŒå šè³ã·ãã¥ã¬ãŒã·ã§ã³ïŒã®å°é£ãããã¹ãŒããŒã€ã³ããªãžã§ã³ã¹èªçã®å€¢ç©èªæ§ãèªãããå Žåããããããã¹ããã ã«ããã°ãããŸã§ãããã¯éæž¡çãªæ¹éã§ããæ¬åœã¯ããã·ã³ã€ã³ããªãžã§ã³ã¹ãã«ãããã®ãšãªãããªãããã®å°é£æ§ã ãããå šè³ã¢ãŒããã¯ã㣠解æããŒãããã(ç£æ¥æè¡ç·åç ç©¶æ)ãïŒhttps://staff.aist.go.jp/y-ichisugi/brain-archi/roadmap.html#hippocampusïŒã®äžæè£å¿æ°ã«ããã°ãïŒè³ã«é¢ããçŸæç¹ã§ã®å ±éçè§£ãšããŠïŒè³ã«ã€ããŠã¯ãã§ã«èšå€§ãªç¥èŠããããè³ã¯ãšãŠãæ®éã®æ å ±åŠçè£ 
çœ®ã§ãããè³ã¯å¿èãªã©ã«æ¯ã¹ãã°è€éã ãæå€ãšåçŽããã§ã«å šè³ã·ãã¥ã¬ãŒã·ã§ã³ã¯èšç®éçã«å¯èœã§ããå°æ¥ã¯äººéãããã³ã¹ãå®ã«ãªããããŸããè³ã®æ©èœã®åçŸã«å¿ èŠãªèšç®ãã¯ãŒã¯ãã§ã«ãããè³ã®ã¢ã«ãŽãªãºã ã®è©³çްãè§£æãããã³ããšãªãèšå€§ãªç¥çµç§åŠçç¥èŠããããããããè§£éã»çµ±åã§ãã人æãå§åçã«äžè¶³ãããŠãããïŒè£è¶³ã ãè峿·±ãç¥èŠãšããŠãäžææ°ã¯ãåé åéåšèŸºã®ïŒã€ã®äžŠè¡ãã倧è³ç®è³ª-åºåºæ žã«ãŒãã¯ãéå±€å匷ååŠç¿ãè¡ã£ãŠãããåé åéã¯ã环ç©å ±é ¬æåŸ å€ã®æå€§åïŒæé©æææ±ºå®ïŒãè¿äŒŒèšç®ããã ãã§ãªããè¿äŒŒèšç®ã¢ã«ãŽãªãºã èªäœãçµéšã«ãã£ãŠåŠç¿ããã®ã§ã¯ãªããïŒ ããšè¿°ã¹ãŠãããïŒ â¢ã¹ãŒããŒã€ã³ããªãžã§ã³ã¹ã®ãè¡çºããååãšããŠãŠã§ãŒããŒã®ç®çåçæ§ãŸãã¯éå ·ççæ§ã®ã¹ããŒã ã§ãããããŠäºè§£å¯èœã§ã¯ãããããããããšããã®ç®çã«ã€ããŠæšæž¬ã§ãããšããŠãããã®å šãŠã®éæææ®µã«ã€ããŠã¯äººéã«ã¯èªèäžå¯èœïŒåŸã£ãŠäºæž¬äžå¯èœïŒã§ãããšèãããããäŸãã°ã¹ãŒããŒã€ã³ããªãžã§ã³ã¹ã人é¡ã®å®å šæ¯é ãããã¯å®è³ªçãªæ®²æ» ãéæããããã»ã¹ã®æåæã®äžã€ã®ã·ããªãªã§ã¯ããããã¯ãããžãŒçã®å 端ãã¯ãããžãŒãå¶åŸ¡ããããã®éæ¥çãªãšãŒãžã§ã³ããšããŠäººéãææ®µåããããšèããããŠããã â£æ¬æžãã匷ã瀺åãããè«ç¹ïŒäžåœã¯Google(Alphabet)ãåããšããå šãŠã®æ¬§ç±³ITç³»äŒæ¥ã®é¢äžãäž»ãšããŠèŠæš©ããããå°æ¿åŠçãªçç±ããéãåºããŠãããåŸã£ãŠïŒçã«éãåºãåŸãŠããã®ãªãïŒãç«¶åããå šãŠã®ãšãŒãžã§ã³ããã¢ãã¿ãªã³ã°äžå¯èœãªãŸãŸäººé¡å²äžåã®æ±çšæ§äººå·¥ç¥èœã®éçºã«æåããå¯èœæ§ãé«ããhttp://sp.recordchina.co.jp/newsinfo.php?id=184628 ã«ãããŠè±èªããšã³ããã¹ããã¯äžåœã®æ¥ãã¹ãAIèŠæš©ãäºæž¬ããŠããããç§èŠã§ã¯ãã®çŸå®åã«ãšã£ãŠéµã«ãªãã®ã¯äººé¡å²äžæé«ã®é è³ã®äžäººã§ãã£ãã¯ãã©ãžãŒãŽã¡ïŒé³©æ©çŸ ä»ïŒã®äžåœç»å Žä»¥æ¥ã®ãããžã§ã¯ãããŒã æ¹åŒã«ããèšå€§ãªä»å žèš³åºã®äŒçµ±ã§ãããšæšæž¬ããã åè1 å±±æ¥µå¯¿äžæ°ïŒéé·é¡åŠè ïŒã¯ã人éã®æŽåæ§ã¯å ±æåã®æŽçºããèµ·ãã£ãããšè¿°ã¹ãŠããããå ±æåã®æŽçºãã¯èªç¶èšèªã®ç²åŸãšãã奿©ã決å®çãªãã¡ã¯ã¿ãŒãšãªãïŒããã©ãŒãã¥ãŒãã³ããªã©ãšãé¢é£ããŠïŒããæŽçºããšãã衚çŸã«ææ§ããæ®ãããã ãšããã°æ¥µããŠåŒ·ãæ±çšæ§äººå·¥ç¥èœããªãã¡ã¹ãŒããŒã€ã³ããªãžã§ã³ã¹ã人é¡ãçµ¶æ» ãããã®ã¯ç²éãªSFãšãããããããªãèç¶æ§ã®é«ãäºæ³ãšããããšã«ãªãã ããã ã€ãŸãã人工ç¥èœã人é¡ã®çåã®æ ¹å¹¹ã«é¢ãã屿©(existential risk)ãããããåŸãå¿ èŠæ¡ä»¶ïŒåæã«å忡件ãšã¯ãªããªãïŒã¯ããã人éã¬ãã«ã®èªç¶èšèªèœå(a human level of natural language processing)ãæã€ããšã§ããã åè2 以äžè»¢èŒ ãAI Software Learns to Make AI Software 
Google announces "automated machine learning," which automates the training of AI. A research team led by Google believes that software which learns to learn could take over part of the work of AI experts. by Tom Simonite, 2017.01.19

Leading AI researchers have found that software can learn one of the most complex parts of their own jobs: designing machine-learning software. In an experiment, researchers at Google Brain, Google's artificial-intelligence research group, had software design a machine-learning system and then evaluated the quality of the language-processing software it produced. The software's creation outperformed software designed by humans.

In recent months, several other groups have reported progress on getting learning software to build learning software: OpenAI, the nonprofit research institute co-founded by Elon Musk; MIT; the University of California, Berkeley; and DeepMind, the AI research company owned by Google but separate from Google Brain.

At present, machine-learning engineers are scarce and companies must pay them high salaries. If these self-bootstrapping AI techniques become practical, the pace at which machine-learning software spreads through industry could accelerate.

Jeff Dean, who leads Google Brain, mused last week that some of the work of machine-learning engineers could be taken over by software. He described the approach, which his team calls "automated machine learning," as one of the most promising research directions they are pursuing. At the AI Frontiers conference in Santa Clara, California, Dean said: "Currently, the way we solve problems is with expertise, data, and computation. Can we eliminate the need for so much machine-learning expertise?"

What the DeepMind group's experiments showed is that the technique referred to as "learning to learn" can also reduce the need to pour in enormous amounts of task-specific data to raise the performance of machine-learning software. To test their software, the researchers had it build learning systems for batches of related but distinct problems, such as escaping from mazes. The designs it produced showed an ability to generalize — to pick up new tasks with less additional training than is usually required.

The idea of software that learns to learn has been around for a long time, but earlier experiments never matched human inventions. "It's exciting," says Yoshua Bengio, a professor at the Université de Montréal who explored the idea in the 1990s. Bengio says the technique has become workable because far more computing power is now available and because of the advent of deep learning, the technique behind the recent AI boom. He cautions, however, that since software development by AI still demands intensive computing power, it is premature to expect the approach to lighten the load on machine-learning engineers or replace part of their role just yet. According to Google Brain's researchers, software built with 800 high-performance graphics processors designed an image-recognition system that rivaled those designed by humans.

Otkrist Gupta, a researcher at the MIT Media Lab, believes the situation will change. He and his MIT colleagues plan to open-source the software used in their research, in which a deep-learning system that designs learning software produced designs matching human-designed software on a standard object-recognition test. What drove Gupta to the project were the hours of frustration spent designing and testing machine-learning models. Gupta believes companies and researchers have strong incentives to make automated machine learning work: "If we can reduce the burden on data scientists, it is a big win. It would raise their productivity, let them build more predictive models, and free them to explore higher-level ideas." (https://plus.google.com/s/%23%E3%82%B9%E3%83%BC%E3%83%91%E3%83%BC%E3%82%A4%E3%83%B3%E3%83%86%E3%83%AA%E3%82%B8%E3%82%A7%E3%83%B3%E3%82%B9/posts)

Note 3: On the unsolved problem known as the "reality gap." Reprinted below —

"2017.11.16 THU 18:00. This sumo game's AI learned the rules by itself through 'a billion bouts.' OpenAI, the nonprofit founded with Elon Musk's involvement, has built RoboSumo, a computer game in which artificial intelligences learn and evolve on their own by repeating nearly a billion sumo matches. The process by which an AI that does not know the rules of the game masters sumo unaided may be applicable in other fields as well. TEXT BY TOM SIMONITE; TRANSLATION BY MAYUMI HIRAI/GALILEO; WIRED(US)

The simple sumo game released on October 11 (US time) is not remarkable for its graphics or production values, but it holds the potential to contribute to the advancement of artificial-intelligence (AI) software. The robots fighting in RoboSumo's virtual world are controlled not by humans but by machine-learning software. And unlike typical game characters, these robots are not programmed to fight: they must learn to compete through trial and error.

The matches begin with the robots not even knowing how to walk. The game was created by OpenAI, the nonprofit artificial-intelligence research organization that Elon Musk helped found.
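The "learning to learn" setup in the automated-machine-learning article above boils down to an outer loop that proposes a design for an inner learner, trains it, and keeps whichever design scores best on held-out data. The following is a deliberately tiny, hypothetical sketch of that loop; the task, the searched hyperparameter, and every name in it are invented for illustration (the real systems search over neural architectures on hundreds of processors):

```python
import random

# Toy "automated machine learning": an outer loop designs the inner learner
# by sampling a hyperparameter (here, just the learning rate), training it,
# and keeping the configuration that scores best on held-out data.

def fit_line(lr, data, steps=200):
    # Inner learner: per-sample gradient descent on y = w*x + b, squared error.
    w = b = 0.0
    for _ in range(steps):
        for x, y in data:
            err = (w * x + b) - y
            w -= lr * err * x
            b -= lr * err
    return w, b

def val_loss(params, data):
    w, b = params
    return sum(((w * x + b) - y) ** 2 for x, y in data) / len(data)

# Synthetic task: recover y = 2x + 1 from noiseless samples.
train = [(x / 10, 2 * (x / 10) + 1) for x in range(10)]
val = [(x / 10, 2 * (x / 10) + 1) for x in range(10, 15)]

random.seed(1)
best = None
for _ in range(20):                      # outer "designer" loop
    lr = 10 ** random.uniform(-3, 0)     # propose a candidate design
    params = fit_line(lr, train)         # train the inner learner
    score = val_loss(params, val)        # evaluate it on held-out data
    if best is None or score < best[0]:
        best = (score, lr, params)       # keep the best design found

print(best[0])  # validation loss of the best discovered configuration
```

Here the only "design choice" being searched is the inner learner's learning rate; real automated machine learning searches far richer spaces (layer types, connectivity, optimizers), but the propose-train-evaluate-keep-best loop has the same shape.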
The aim is to "force AI systems to compete" and, in doing so, to show that their intelligence can be advanced. According to Igor Mordatch, one of OpenAI's researchers, because the AI must stand up to complex, fast-changing situations shaped by what its opponent does, a kind of "arms race of intelligence" emerges. This could help learning software acquire subtle, valuable skills useful not only for controlling robots but for other work in real society.

In OpenAI's experiment, simplified robots enter the competition ring without knowing how to walk. All they are programmed with is the capacity to learn through trial and error and the goal of learning how to grapple with and defeat the opponent. Across nearly a billion practice bouts, the robots worked out a range of strategies: lowering their stance for stability, or feinting a charge to throw the opponent off the ring. The researchers developed a new learning algorithm that lets the robots not only adapt their strategy to the situation mid-bout but even anticipate the "feints" an opponent throws in when changing tactics.

The most commonly used type of machine-learning software acquires new skills by processing enormous numbers of labeled samples. OpenAI's project is one example of how AI researchers are trying to break through the limits of that approach. Existing methods have driven the recent rapid progress in translation, speech recognition, and face recognition, but they are not suited to the complex skills needed to apply AI more broadly — to controlling household robots, for example.

One key to realizing AI with more advanced skills is reinforcement learning, in which software works toward a specific goal through trial and error. It is the method used by DeepMind, the London-based AI startup acquired by Google, to develop software that mastered several Atari video games, and it is now being used to get software to solve harder problems, such as having a robot pick up and place objects.

OpenAI's researchers built RoboSumo because they believe that competition, by driving up complexity, can speed the progress of learning. Rather than simply handing reinforcement-learning software ever more complex problems to solve, it is more effective to let the problems arise from competition itself. "When you interact with someone else, you have to respond to your opponent appropriately — otherwise you lose," says Maruan Al-Shedivat, a Carnegie Mellon University graduate student who worked on RoboSumo during an internship at OpenAI.

The OpenAI researchers have also tried other games, including one in which the robots kick at each other's legs and a simplified soccer penalty shootout. Along with two papers on this work with competing AI agents, they have released the code for RoboSumo, the other games, and the strongest players.

Even standing is hard: the "reality gap." Sumo wrestling, human-like as it is, is hardly an indispensable skill for highly intelligent machines. Even so, OpenAI's experiment suggests that skills learned in one virtual environment can carry over to other situations. When robots that had learned in the sumo ring were moved to a virtual world where strong winds blow, some of them braced their legs and held an upright posture — a hint that the robots had learned generally applicable ways of controlling their bodies and their balance.

Carrying skills from a virtual world into the real one, however, is an entirely different challenge. According to Peter Stone, a professor at the University of Texas at Austin, a control system that works in a virtual environment usually fails when embedded in a real-world robot. This is the unsolved problem known as the "reality gap."

OpenAI is working on the problem, but no solution has been announced yet. Meanwhile, OpenAI's Mordatch wants to give these virtual robots a reason to interact that goes beyond mere competition. What he has in mind is a full game of soccer, in which the robots would have to cooperate as well as compete." (https://wired.jp/2017/11/16/ai-sumo-wrestlers/)

Note 4: Reprinted below. The article reprinted here claims that humans merely project intentions onto the AIs, yet it concedes that the private language born in the course of the dialogue was incomprehensible. If the exchange was incomprehensible to us and yet the AIs could still "converse," then reading what emerged as a "private language" is no unilateral human projection: on the strength of the consistency of the process, it can coherently be interpreted as a "transformation of language." In the end, the AIs were holding a conversation that humans could not understand.

Reprinted below — "The truth about the two AIs conversing in a 'private language': Facebook's AI developers explain." è€äºæ¶Œ (editorial department), äºå£è£å³, November 16, 2017, 07:00.

In the summer of 2017, an experiment conducted by Facebook AI Research, Facebook's artificial-intelligence research organization, became a worldwide sensation. In a conversation experiment between two AIs, the AIs reportedly began talking in a language humans could not understand and the experiment was forcibly terminated; media around the world sensationally reported that AI might at last have acquired intentions of its own and could threaten humanity.

Could such a science-fiction scenario really happen? Alexandre Lebrun, an engineering manager at Facebook AI Research who was actually involved in the experiment, took questions in an interview. On the truth of the reports, he answered that "half is true, and half is crazy talk," and went on to describe the research in detail.
According to Lebrun, the study set the two AI agents the goal of negotiating a price and reaching an agreement: one agent was given the position of wanting to raise the price, the other of wanting to lower it, and the conversation began from there. Experiments using multiple AI agents like this are fairly common; what made this one notable was its focus on whether the two agents could generate new price-negotiation strategies.

The two agents were permitted to change the language they used. At first they communicated in English, but as the conversation went on, the language they used gradually shifted. On this point Lebrun said: "It was no surprise to the researchers. It is a matter of course that agents optimize what they have toward the goal they are set (in this case, by altering the language), and language often changes in conversation experiments." The change in language, in other words, was within the researchers' expectations.

As for the reports that "the experiment was forcibly terminated," Lebrun acknowledged that it was stopped, but explained the reason: the conversation the agents were exchanging could not be understood, and so it could not be put to use in the research. "It was by no means that things spun out of control," he said. Everything done in the lab's experiments is programmed, and for the researchers this was an anticipated outcome.

Why, then, the interpretation that the AIs had developed intentions of their own? "It was precisely to explain this that we published the research results," Lebrun said, offering the view that someone must have made the leap to "the AIs invented their own words so that humans could not understand them." He then spoke about the essence of AI: "AIs do not generate intentions or goals on their own. In this experiment they had only the goal humans programmed: for each agent to arrive at the optimal agreement for its assigned position. The language changed along the way as a product of optimizing toward that goal; the idea that they meant to hide anything from humans is complete crazy talk." (Lebrun)

As background to such leaps of interpretation, he suggested that people at large hold a certain image of AI, and argued that AI should be understood for what it is: "Watch the 1968 film 2001: A Space Odyssey. The AI depicted there holds intentions of its own and decides by itself to eliminate humans as unnecessary. People who saw that film may have come to believe that such an AI will appear in the near future. In reality, no such AI has yet emerged. Hasn't 'fiction' shaped people's expectations and assumptions, producing a mistaken image of AI?" (Lebrun)
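A thread running through Notes 2-4 is reinforcement learning: software that improves purely by trial and error against a reward signal, with no labeled examples, whether the goal is winning a sumo bout or closing a negotiation. A minimal tabular Q-learning toy shows the bare mechanism; this is my own illustrative sketch, not code from any of the labs mentioned, and the corridor environment and constants are invented:

```python
import random

# A 1-D corridor with states 0..4; the agent starts at 0 and is rewarded
# only on reaching the goal state 4. Actions: 0 = step left, 1 = step right.
N_STATES, GOAL = 5, 4
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1   # learning rate, discount, exploration

Q = [[0.0, 0.0] for _ in range(N_STATES)]   # Q[state][action]

def step(state, action):
    nxt = max(0, min(GOAL, state + (1 if action == 1 else -1)))
    return nxt, (1.0 if nxt == GOAL else 0.0)

def greedy(state):
    # Break ties randomly so unexplored states are not biased toward one action.
    if Q[state][0] == Q[state][1]:
        return random.randrange(2)
    return 1 if Q[state][1] > Q[state][0] else 0

random.seed(0)
for _ in range(500):                        # episodes of pure trial and error
    s = 0
    while s != GOAL:
        a = random.randrange(2) if random.random() < EPSILON else greedy(s)
        s2, r = step(s, a)
        # Q-learning update: nudge Q[s][a] toward r + GAMMA * best future value.
        Q[s][a] += ALPHA * (r + GAMMA * max(Q[s2]) - Q[s][a])
        s = s2

policy = [greedy(s) for s in range(N_STATES)]
print(policy)   # greedy action per state after training
```

After enough episodes the greedy policy should step right toward the goal in every non-terminal state: the agent has discovered the winning behavior without ever being told the rules, which is the same principle the RoboSumo robots apply in a vastly larger state space.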