I guess I'll have you introduce yourself.

Yeah, I'm Bart Selman, currently at the Department of Computer Science at Cornell University.

Cool. What first got you into the field of AI?

I guess I studied physics. Actually, I did a master's in physics in Holland in the early '80s, and I took a course on artificial intelligence in the philosophy department. So that got me familiar with AI, and I was always interested in human thought processes and cognition. And so after I finished my master's in physics in Holland, I got a scholarship from the Canadian government, and I studied-- I went to Toronto and did my master's and PhD in artificial intelligence.

What kind of problems were you working on when you first entered the field?

So when I first entered the field, I did a master's on connectionist approaches to natural language understanding, so neural networks. And at the end of that, Hector Levesque came to Toronto, and he got me interested in reasoning and complexity issues, computational complexity issues, in artificial intelligence. And that's what I started working on then.

What have you worked on since?

So my work started-- in the mid '80s, there was a movement to study-- people started to realize that a lot of things we were trying to do in artificial intelligence were computationally intractable, NP-complete basically. And there was a whole effort in identifying what could be done in a tractable manner. So people defined restricted representation languages, restricted terminological languages, and restricted forms of default reasoning. So this was fairly formal work, where you would prove properties about languages and about inference and try to find tractable classes. And that was my PhD work. And then I finished in 1990, and I went to Bell Laboratories and actually started working with Henry Kautz.

Do you have any fun stories or anecdotes you liked from Toronto?
From Toronto? Fun stories. No, it was just a very exciting time. I think, with the Canadian government-- it was a private foundation, I think. They brought back a number of prominent researchers, kind of an answer to the brain drain going to the US. So they set up these chairs in artificial intelligence, and so they brought back Hector Levesque. Actually, Geoff Hinton came later on, and he attracted some big names to Toronto. So it was an exciting time, and they didn't have to teach much, so we could spend all our time on research. So these were the highlights, the high points of artificial intelligence. I remember, I think it was the '84 or '85 IJCAI when Hector won the Computers and Thought Award. I think he gave a talk for 5,000 people, and that was quite something in the hall. This was in the same place where they had the Olympics, I guess, the year before. The whole UCLA campus was full, and there was a lot of press and a lot of hype, basically, about AI. So it was the high point.

Cool.

So I've seen all the ups and downs, I guess, of the field. And it's interesting how the field now is in an upswing again.

So arriving in Toronto when there were all these other real artificial intelligence minds coming, where did you guys see artificial intelligence going in the next 20 years or so from then?

It was moving fast. There was a strong belief at that point in sort of logical and reasoning-based approaches, and cognitive reasoning, those kinds of areas. Geoff Hinton, of course, had the neural network perspective and the learning perspective. I think what people realized in the next decade, roughly from '85 to '95, is the limitations of some of these approaches, and that it wasn't as easy as we thought. And so people went back to the basics, again, trying to fix some of these problems. So it's all very hard to predict. I think one of the key things in artificial intelligence is that it's very hard to predict where the next breakthrough will be.
Sure.

And I'm sure we're actually going at it again at current times. But people always try. And I think what brought us all together is that we come to science with a desire to figure out thinking, how people think, and understanding mental images, how people manipulate mental images to do thinking. But at that point, it was a very optimistic period. And I think that went away for a while. And now, it's coming back.

I was going to ask if you've done any work in computational neuroscience.

Only for my master's. Not literally neuroscience, but we did build a parser that was actually using a Boltzmann machine to disambiguate. You give it a little input sentence, and then, if it had multiple parses, it would settle on one or two possible parsing states.

I see.

And on small sentences and basic examples, it worked. So that was sort of fun.

OK. That's cool. What are you working on these days?

So basically, when I went to Bell Labs, we were sort of at the front of it. That was actually the heyday for Bell Labs. Ron Brachman started the artificial intelligence center there, and many, many very good people went there, to Bell Labs, in a short period of time. Arno Penzias-- I remember we presented that work to the Nobel Prize winner in charge of Bell Labs, Arno Penzias. And he liked AI and wanted to see sort of the latest things. I remember showing him-- we had these stochastic search algorithms. Instead of sort of standard deterministic search for logical inference, we actually did stochastic local search methods for inference. And it could solve very large problems. And we would show it to him, and he would take a little video. We had a little graphical demonstration of that. And he would show it to general audiences to help them understand the science. And so that was an exciting time.
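[Editor's note: the stochastic local search for logical inference described above is the line of work that led to GSAT and WalkSAT. Below is a minimal Python sketch of the idea, assuming a WalkSAT-style noise parameter and a DIMACS-like clause encoding; it is an illustration of the technique, not the original Bell Labs implementation.]

```python
import random

def local_search_sat(clauses, n_vars, max_flips=100_000, noise=0.5, seed=0):
    """Stochastic local search for propositional satisfiability (WalkSAT-style sketch).

    clauses: list of clauses; each clause is a list of non-zero ints, where
             literal v means variable v is true and -v means it is false.
    Returns a satisfying assignment as {var: bool}, or None if none was found.
    """
    rng = random.Random(seed)
    assign = {v: rng.choice([True, False]) for v in range(1, n_vars + 1)}  # random start

    def satisfied(clause):
        return any((lit > 0) == assign[abs(lit)] for lit in clause)

    for _ in range(max_flips):
        unsat = [c for c in clauses if not satisfied(c)]
        if not unsat:
            return assign                     # all clauses satisfied
        clause = rng.choice(unsat)            # focus on one violated clause

        if rng.random() < noise:
            var = abs(rng.choice(clause))     # random-walk move
        else:
            # Greedy move: flip the variable in this clause that leaves
            # the fewest clauses unsatisfied afterwards.
            def unsat_after_flip(v):
                assign[v] = not assign[v]
                count = sum(not satisfied(c) for c in clauses)
                assign[v] = not assign[v]
                return count
            var = min((abs(lit) for lit in clause), key=unsat_after_flip)

        assign[var] = not assign[var]         # flip it and continue
    return None                               # gave up; formula may still be satisfiable

# Tiny usage example: (x1 or x2) and (not x1 or x3) and (not x2 or not x3)
if __name__ == "__main__":
    print(local_search_sat([[1, 2], [-1, 3], [-2, -3]], n_vars=3))
```

[Unlike systematic, deterministic search, nothing here backtracks: the procedure keeps flipping variables in violated clauses, and the noise parameter helps it escape local minima.]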
And I shifted sort of my emphasis from looking at restricted formalisms and [INAUDIBLE] and started working on [INAUDIBLE] satisfiability testing. So it was propositional reasoning. And that's what I have focused most on since then. And there, we have made tremendous progress: from a few-hundred-variable problems in the early '90s to now million-variable problems, five-million-constraint problems. And the area has nicely branched out. So now, my interest goes beyond that. I work with people in hardware verification and software verification, because there these tools are really used to support hardware and software design. So I sort of see the reasoning now as augmenting the human reasoning, the human designers and engineers. And I think that's where I see one part of the future of AI going to be: augmenting human intelligence and actually strengthening it in many ways.

Where do you see your work specifically going in the next progression?

So that's sort of where I've always been interested, for example, in mathematical discovery and mathematical theorem proving. And so I work with some people at Cornell who have actually studied very sophisticated theorem-proving systems. The search space is too large for actually doing it automatically. So basically, they reformulate proofs by mathematicians and then verify them automatically. But what we now want to do is we actually want to have these proofs automatically discovered. So the one thing we're seeing is that I think computers will do a slightly different kind of math than humans do. But that can be nicely complementary. So you can conjecture some things about mathematical objects, and a computer can be used, and hopefully will be used, to verify these properties in a reasonable amount of time. And that's sort of an interactive setup that could actually help the human scientist.

Can you go into a little more detail about that?
So basically--

The differences.

Oh, the differences in what they can do. Well, I think it's clearest in something like chip design or software design. And in chip design it's already used. So the chip designer, as a human, has fairly good creative insight into what a reasonably good design is. However, when you have millions of gates, millions of components, there are subtle interactions you may miss. So you may want to know, can this register ever overflow? And so what the chip designers do is they actually send that query. It's actually going to be translated by [INAUDIBLE] techniques into a satisfiability problem, and they ask the computer, can you check whether this really will work if I put it together? And what I would like to see is an application like this, and that's what we're starting to work on in, for example, mathematics. So we can say, I think that this object might have certain properties. Can you check whether this property really holds or not?

Yes.

Wow.

And I think this is becoming possible. So that's in the next 5 to 20 years.

What do you think about the new sort of emphasis on tying artificial intelligence into sort of biology, nanotechnology, stuff in that direction?

Yeah. So those, I think, are interesting directions. I mean, my own work, for example, has connections to statistical physics. So I think a big change in the last 10, 15 years is the sophistication of techniques. We're looking at other fields, like statistics, probability, biology, where we reach out to other sciences. And I think that can be very fruitful. So I'm mostly really working with physicists, statistical physicists, who actually have come up with new kinds of [INAUDIBLE] techniques. So I think that outreach to the community is going to be very important, and I think that will accelerate.

So I guess we have time for one final question?
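[Editor's note: a minimal sketch of the kind of query described above, where a hardware question such as 'can this register ever overflow?' is turned into a satisfiability check over Boolean variables. The 2-bit counter, the unrolling depth K, and the brute-force enumeration standing in for a SAT solver are all illustrative assumptions; a real verification flow would encode the same constraints in CNF and hand them to an industrial SAT solver.]

```python
from itertools import product

# Question: can a 2-bit counter that starts at 0 and increments each step
# ever overflow (wrap past 3) within K steps?  Encoded as: is there an
# assignment to the Boolean state variables that satisfies the transition
# constraints AND makes the overflow flag true at some step?
K = 4  # unrolling depth (illustrative)

# One Boolean variable per state bit per time step: b0[t], b1[t], ovf[t].
VARS = [(name, t) for t in range(K + 1) for name in ("b0", "b1", "ovf")]

def constraints_hold(a):
    """Initial-state and transition constraints of the counter."""
    if a[("b0", 0)] or a[("b1", 0)] or a[("ovf", 0)]:
        return False                                   # start at 0, no overflow yet
    for t in range(K):
        b0, b1 = a[("b0", t)], a[("b1", t)]
        if a[("b0", t + 1)] != (not b0):               # low bit toggles every step
            return False
        if a[("b1", t + 1)] != (b1 != b0):             # high bit flips on carry
            return False
        if a[("ovf", t + 1)] != (a[("ovf", t)] or (b0 and b1)):
            return False                               # overflow latches after value 3
    return True

def property_violated(a):
    return any(a[("ovf", t)] for t in range(K + 1))    # overflow happens at some step

# Satisfiability check by exhaustive enumeration (toy-sized; a real flow
# hands the same formula to a SAT solver instead).
witness = None
for bits in product([False, True], repeat=len(VARS)):
    a = dict(zip(VARS, bits))
    if constraints_hold(a) and property_violated(a):
        witness = a
        break

print("overflow reachable within", K, "steps:", witness is not None)
```

[If a satisfying assignment exists, it doubles as a concrete trace showing how the overflow happens; if none exists, the property cannot be violated within K steps, which is the guarantee the designer is after.]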
Besides your own work, which, because you're working on it, is obviously the coolest thing in the field, what do you think the next coolest thing is that's being worked on right now?

I think the whole integration with sensors and data. So the fact that we have so much more data available makes these statistical methods and the statistical approaches more promising. Similarly, we can mine data on the web. I think that might actually help at this moment: sort of common-sense reasoning, from many years ago, may become more feasible by actually going back to the web [INAUDIBLE].

Cool.

[INAUDIBLE] on the web.

Makes sense. Unfortunately, we have to wrap things up. But thanks a lot for talking to me.

Thanks. I really enjoyed it. It's good to meet you.

OK.