All right, let's come to order, and I'd like to introduce this panel. My name is George; I used to work here. I don't anymore; sometimes I wish I did. Ed gave me this job to introduce our last panel of the afternoon, before the group photo, and I can assure you it's going to be a really remarkable one. As Ed mentioned at the break, it's on brain-computer interfaces, and I'm desperately in need of one. I'm hoping this is going to get me up a little bit and give me a leg up on the waiting list.

The two people who are talking about this, I've heard them both talk before, and they are fantastic. Jim Giordano came to the Naval War College when I was filling in for my friend and colleague Martin Cook as the Stockdale Professor, and he gave a lecture in our emerging ethics, emerging technology program that was probably the single best lecture I've ever heard in terms of being exciting, knowledgeable, articulate, and frankly terrifying. He really scared my socks off, and I think he will scare yours off as well if he goes in that particular direction, the way these technologies can often take us. He's a professor, as many of you know, at Georgetown in the Departments of Neurology and Biochemistry, and he's running the Pellegrino program in medical ethics in the Center for Bioethics. He's written over 300 papers, seven books, and 20 government white papers. That last is the hardest of all: if you've ever composed a white paper, it's a terribly difficult, long, and tedious task, but very important, and he's done 20 of them. He's also editor-in-chief of the international journal known as Philosophy, Ethics, and Humanities in Medicine.

Our second speaker is Dr. Nick Evans. I heard Nick speak and got to meet him when he was at CAPPE, as we called it, the Centre for Applied Philosophy and Public Ethics at the Australian National University in Canberra, Australia, at a conference he helped to organize, and I've followed his career since as he moved to the States, married, and became an associate professor, with tenure, at the University of Massachusetts Lowell. He does work in ethics, emerging technologies, and national security issues. He has written a book that you really should get, and contemplate very carefully, on the ethics of neuroscience and national security, which is really the theme of our panel. At this point, it's my pleasure to turn it over to you, gentlemen. Jim first. Thank you very much.

[Applause]

The slide controller is here. There we go. Okay, so I'll be talking to you about brain-machine interfacing. This is the last panel of the day, and what they've done is taken the two most interesting accents you could possibly hear on a single podium and put them together. Don't confuse us with a speech impediment; this is a New York City accent. I've been out of the city since '86; this is the best I can give you. So if anyone comes from the city, you can work as a translator. And Nick could talk about how to tie your shoes and it sounds eloquent. So the combination is going to be dynamic, I'm sure.

So earlier you heard, well, this is going to be a sort of "and now for something completely different." No, stop, break, break: it's not at all, because you saw neurons illustrated in what's called a neural net system. The point is, those aren't real neurons; those are computational units of hardware.
The fact that you're able to perceive that is because you've got real neurons. The question is how do we intersect the real neurons with the neurons you saw earlier, and what is the reciprocal nature of those interfaces, as that technology increases and becomes more ubiquitous, and therefore prompts us not only to consider using those technologies, but generates concerns over the ways we can, the ways we do, and the ways our adversaries are already doing so.

So, in terms of disclaimers and so forth, if I can get this to work, which of course I cannot. There we go. The usual disclaimer: like Dr. Evans, my work has been funded over the past several years by a number of governmental entities and institutions. Nothing that I'm saying here necessarily reflects their position, although much of the work was done under their auspices.

The construct here is very, very simple, ladies and gentlemen. It's simply this: we can zap the brain and modify the mind. Let me tell you how that goes. We simply don't know how mind occurs in brain. I've been a brain scientist for 42 years this year. I know that's a long time; I look in the mirror and I go, how did that happen? But a lot of the stuff I learned when I was in school as a neuroscientist and as a neuropathologist does not obtain any longer. There's an interesting dance that happens between the tools we have at our disposal and the theories we create, and then pressing the limits and the envelopes of those theories through ever newer tools that we then create, both heuristically and practically. And the issue there is: are we ready for AI? Is human intelligence ready for AI? Are humans intelligent enough to engage with AI, and how will we design those systems to interface with humans?

So essentially what we see is the brain-computer, the brain-machine interface writ large. And if we look at the discrete domains where these BMIs or BCIs create new scientific and technological intersections, they can be parsed into three categorical domains. Number one, those that are used to assess the structure and function of the brain. And understand that the underlying paradigm is that it's not just a brain that's a bunch of squirty, squiggly cells. This is a brain that is embodied in an individual who is embedded in an ecology, and that ecology has a context, and the context is the military operator, the intelligence operator, either the trigger puller or the individual who is in some way contributing to the force readiness and preparedness of that military. But that's a human organism, and the question is how we can keep that human organism functional in that particular contextual job. One way is to understand how their brain works, and if we can understand how the brain works on a variety of levels, we may be able to sidestep the proverbial hard question of the neurosciences: how does brain give rise to mind? I mean, realistically, if we have a large enough pattern repository of data, and we can use those data, we can make maps. We don't necessarily have to know how it happens; we just have to know where, because those maps are targetable. Which then brings us to the next phase of brain-machine interfacing, which is interventional brain-machine interfacing. And these run the gamut: some things we can do peripherally, that are donnable and doffable, that can hook up to our physiology somatically and therefore affect our nervous system bottom-up.
There are things that are donnable and doffable that we can put into headsets, helmets, wearables, semi-wearables, and, ever more, those things that are indwelling. And that need not necessarily be a surgical implantation, because the state of the art now is next-generation non-surgical neuromodulation: inhalables, ingestibles, deliverables via very, very small-scale units at the nanoscale that can then be segued to where they have to go in the brain and form literally vast arrays of sensors and transmitters, able, in real time and remotely, to read from the living brain and write into the living brain. That sounds like sci-fi, doesn't it? It's not. This is a DARPA project I have the honor and privilege of working on, the N3 program, with my colleague Al Emondi. This is a Heilmeier catechism for a 60-calendar-month evolution. So in 60 calendar months we will be there. Why is that important? Because our trans-Pacific near-peer competitor will be there in 42 months, and has already made inroads, for a number of reasons, not least of which is that there are different cultural systems there, different philosophies there, and different ethics there, with very different permissivities and constraints. But more and more we're intersecting and interacting with our trans-Pacific and trans-Atlantic peer competitors cum adversaries, and the question is going to be: in what ways? Cooperative ways, competitive ways, combative ways? And what does that then portend for the new battlescape of the 21st century, which is here?

All those things we've seen this morning about what AI and machine learning do: it's to augment the human system in some way, with information, with awareness, with sensibilities, sensitivities, with morality, from the machine, from us, by having a human in the loop or on the loop. And that is not a universal consideration. But more than that, what AI allows is the utility of the brain sciences to take massive amounts of discrete data and integrate those in ways that are operative for making decisions about assessing what the brain, and the individual whose brain it is, is doing: what their intentions are, what their directions are, and what their behaviors will be.

The paradigm we use is one called neuro-HOPE, and it's the exact same paradigm that fits AI to the human operator across a variety of levels: keep the human operator healthy, make them operationally protected, and enable them to do certain, mission-specific things. The better they can do those things, the more they're going to be operationally protected; the more operationally protected they are, the more survivable they are; the more survivable they are, the better enablement pushes the window open ever further on human optimization, by the ingrained interaction of human and machine systems. This recognizes that the brain is not a computer per se; it is a computational set of units that can be augmented by reciprocal, ongoing interaction with machine learning systems and AI, in a systematic approach that our group referred to in 2011 as NEURINT, neurological intelligence. For those of you who are interested, I provide a reference at the end. It augments SIGINT, COMINT, and HUMINT in ways that provide reciprocal learning between the machine systems and the human systems.
But more and more, what we're finding is that we're using these brain technologies not right of bang, to fix it when it goes boom, but left of bang, to make it better, so that we can make the human operator more capable, more cognitively resistant, more resilient, and more recuperative.

But we're not alone in this enterprise. I've had the pleasure of working with our European colleagues on the European Union's Human Brain Project, which has created particular guidelines on the ways that the brain sciences, together with artificial intelligence and machine learning, can, should, or should not be used in national security, intelligence, and defense contexts, or, in the European parlance, WINS: warfare, intelligence, and national security. We'd like to think that our international economic and military allies are singing off a common rule book, if you will, but that's not universal. Our trans-Pacific competitors advocate the fact that theirs is a long-standing culture that has different needs, different values, different philosophies, ergo a different ethos, and views human-machine interactions and human rights quite differently. Who are we to tell them how to treat their culture? How dare you come into our living room and tell us how to decorate? And the problem really is that they can do things that we can't. But more than that, those capabilities are inviting and actively soliciting research tourism and medical tourism. So the reality of putting devices into and onto the living brain, at least medically, if not for lifestyle purposes (ergo Neuralink, which is an entrepreneurial venture, not one necessarily bounded by biomedical ethics), becomes something to consider globally in terms of how it affects global biodefense and biosecurity.

So what are the ethical issues we deal with? Well, we've heard the ethical issues that go along with AI, and many of these are similar. However, we can parse them once again into two categorical distinctions. First, those that are focal to the technologies themselves. These are new technologies; we haven't been dropping these things into brains for 30-plus years, so we simply don't know what the technological interface will be over time. That's point number one. Point number two, my lead statement: we have no idea how the brain works. The only way we'll know is if we go and do more of these particular types of technologies. But what you're really doing is shining the light directly in front of you before you step, and in some cases you're shining it behind you. You're using the lamppost the way a drunk does: first to navigate to the post, and then to lean against it. The question is what happens at the intersection of unknowns, and what does that mean? Because there will be unknowns, some of them, in the parlance you may governmentally be familiar with, unknown unknowns. And are we ready for those? Runaway anticipated effects, and not just runaway unanticipated effects from the technology: social runaway effects, cultural runaway effects, collective and community runaway effects. I want this stuff. Why? Because it makes me healthier, more operationally protected, more enabled. But for how long? And what do we do with those whom we've enabled, and what do we do with those who are not enabled? And this brings us into those social domains where ethics interfaces with those things that are socially and culturally relevant, and we see these here before you.
Is there some inviolability of the mental space, what the philosopher and cognitive scientist Nita Farahany refers to as cognitive liberty? I mean, we're really going into the substrate of the thing that makes us us, the essence of self, if you will. I don't care how you disguise it: what we are doing is scanning the brain and, from that, literally interpretively reading its functions, which are mind, and we are then utilizing those capabilities to intersect back with that brain and therefore control brain structure and function, ergo mind. I don't care what kind of game you play; it's mind reading and mind control, blatantly so. The question is: should we? And if the answer is no, what will we do about our competitors globally who already have?

Which then brings us to these other issues: not just the autonomy issues that go along with this, but certainly issues of, perhaps, justice. Who gets the goodies? Not only a question of them, whoever they may be, but on our side: how do we decide who becomes healthier, operationally protected, enabled, enhanced? To what extent do we take that optimization? Where are the limits, where are the boundaries? And ultimately, what does this mean for informed consent, when we can only provide information up to what we know thus far? Are there contingencies that go along with this, and what does that then mean for things like coercion, implicit or explicit, inclusive of social coercion on a grand and global scale, based upon the fact that our international peer competitors already have a foot in this pond?

There are additional issues as well. These we call neuroethical, legal, and social issues, and I have them for you here. I usually thumb my nose at anyone who reads a slide; you can read this pretty well for yourself. But one of the major issues is: if we decide to go there, and we're already, if you will, in the foyer, the anteroom, of these capabilities of linking humans to machines, in real neural engineering, cyborgization, how do we prevent against obsolescence? Who gets version one, who gets version two, who gets version three? Can you sort of trade up? I don't know how many of you have tried to trade up your cell phone, but that ain't easy. What are you going to do with the thing that lives in your head? And does that also segue or silo individuals into categoricals, not only within the military but beyond? And if you turn the device off, do you then get something that our group identified 11 years ago, called PEDS, post-enhancement distress syndrome? What do you do with that? Is that an illness? How do you treat that illness? Do you give them back the thing that was the enhancement? Now the enhancement becomes a treatment. And is there going to be that level of continuity whereby the research and care are going to be available for these individuals?

What we've advocated over the past 12 years is something called the operational neurotechnology risk assessment and mitigation paradigm, which grew out of our initial work in the Heilmeier catechism with DARPA, from those projects spawned by the BRAIN Initiative. And here, a deep nod of homage to my colleague, former Lieutenant Colonel William Casebeer, U.S. Air Force, who stood up an ELSI panel, basically a moral and legal advisory panel, that was very proactive. The way we looked at this was as a super speedway. Think of all the analogies of a Formula One race.
They all obtain: lots of technology, and everybody tinkering with technology to be the fastest on the track; multiple entries per group; a very, very fast pace; not without danger, not only on the track but to the spectators. And of course the technology is therefore translatable to stakeholders and shareholders off the track: everybody wants their brand-new Corvette or their new truck to have the engineering and design that was going around Le Mans. No different here; just take a look at Elon Musk's Neuralink.

So what do we do with this? If you're going to get on that track, you have to be able to engage risk assessment while you're on the on-ramp. See how that works? The operational neurotechnology risk assessment and mitigation paradigm begins with six R's: responsibility for realistic assessment of the neurotech, meaning what can it really do? We don't need pie-in-the-sky, Chicken Little, cry-wolf stuff. Science fiction is great, I love it, but not here; let's not spend our money on that. Research evaluating real uses in practice, both preemptively and enduringly. Responsivity to burdens, risks, and harms, recognizing that the burdens are always going to be greater than the benefits, because the benefits are the low-hanging fruit and most proximate; the more something is out there, the more burdensome it tends to become, because of the greater diversity and fractal diffusion in its viable use or misuse. And revisions in the tech and in the way we talk about it.

And then, obviously, this has to be based upon key questions. What are those key questions? The six W's, as you see here. What neuroscience and technology are we considering? Why is it being considered for use? Who will receive, or not receive, the tech? When will it be considered, in terms of an algorithm: early on, or later on? Must there be failures first, and these then be compensatory for those failures? Are those failures operational failures, military failures, et cetera? Where will it be done: field hospitals, clinics in the field? And then, ultimately, which mechanisms are going to be in place for the continuity of research and/or the provision of care, which we feel is so important?

That leads to the next set of framing contingencies, the six C's. What are the actual capacities and capabilities of this thing as it exists, not in RDT&E, but at its tech readiness level for deployment? What is the ecological validity of this thing in the field, not only idiosyncratically but systemically, in terms of risk? What are the consequences, both of using it and of not using it: consequences of omission and commission? What is the character of the research, and what is the character of the application, and what does that then do to the character of the user, shareholder, or stakeholder? Beyond that, what are the contexts of value? We're talking about military ethics here, and this is about the military, but in an open liberal democracy like our own there's a relative transparency that must be upheld, and, characteristically, what tends to happen is that our military personnel transfer back into the civilian world. Are we ready, in terms of our civic institutions, to take up those individuals and care for them? Which then leads to the contingencies for consent. At the very, very least, we must tell them: yes, there's going to be ongoing research, or no, there's not. You want to be a neuronaut and bravely go, neurologically, where no one has gone before? God bless you, Captain Kirk, but there's not going to be any net under the wire you walk; you're on your own.
And I've got to tell you, I worked at SOCOM for a long time, and you and I both know there are SOCOM men and women who would lick the sweat off a pig if it's going to make them a lot more operationally viable, and for many of them it's: okay, give it to me. But then what do we do with the VA? What do we do with families? What do we do with the social responsibility to the polis, in an open society where the military in fact serves the polis? And what does that mean? Do I have answers? No. I mean, I have a paradigm we've used over the past decade to set up the risk assessment orientation, which must be fitted to some ethos. But here too: which ethos? As my colleague Dr. Evans will tell you, this is a major issue. The civilian ethos? The military ethos? The S&T ethos? Our ethos, or the competitive ethos of the global market?

But what's absolutely necessary, ladies and gentlemen, is to cash the reality check. No bull: what can the neuroscience and tech do, in combination and in yoking with our machine learning and AI systems? And what does that mean for the 21st-century warfighter, intel operator, and beyond? What does it mean for individuals elsewhere in the world who may be so enabled, and who now may be part of the participatory populations with which our military populations and collectives are engaged? What does it mean for neuroethics? We've talked about the need for certain new ethical principles, non-obsolescence, self-creativity, citizenship, and how these may comport, for example, with an understanding of cosmopolitan ethics that in some cases deconstrains the use of these technologies in other cultures that we will likely face on a competitive, if not combative, battlescape. And then, ultimately, going forward, what does that mean for those civic institutions in which we then engage our military? Because the last thing in the world we want to do is use this type of science and technology to untangle the Gordian knot of the brain-mind, only to open a can of worms in its reality and actual use in practice.

And so I leave you with this quick story; then I'll hand it over to my esteemed colleague. My dad was a nautical engineer; he worked at Electric Boat. He was a plankholder on the USS Nautilus; for those of you at the Naval Academy, you know that is a very esteemed vessel in naval history. As an engineer, he liked to tinker with stuff, fix stuff, and that was our father-and-son interaction when I was a kid. He used to help me build stuff, and I used to work at his workbench. He was a good teacher, my old man. What he would do is, the first Tuesday of every month, he'd bring home a new tool, and we'd spend that month using the tool and learning how to use it. We started pulling this gig when I was about six years old; I was the exact same height then as I am now, by the way. And I remember one day he came home, I was about ten years old, and I was downstairs working at the workbench on some project we were fiddling around with. It was new-tool Tuesday. My dad comes home: here's a new tool. And in my youthful impulsivity, thinking that I knew what I was doing, I grabbed the tool, I was like, thanks, Dad, and was going to go barreling back downstairs. My dad put his hand on my shoulder and said, whoa, Jim, stop, slow down with this one. Measure twice, cut once, because sometimes you're not going to be able to take it back. Thanks, Dad.
We need to measure twice before we make the cut, and that measurement is not only on the scientific and technological side. The lens that lets us peer into the brain, and into those machines that we can use to peer into the brain, must be turned back around, to the mirror of what we are as organizations, as cooperatives, and also of what may be our responsibilities and challenges in competition. If you're interested in some of the work we did over the past decade and a half, I provide these references for you here; I'll be happy to provide them if you want the handout, or the white papers, as we mentioned. And if you're interested in getting in touch with me, that's where I live; feel free to get in touch: james.georgetown.edu. Order right now and get a free brain implant. Thank you very much, ladies and gentlemen; it's a pleasure.

[Applause]

So when you spoke about trans-Pacific competitors, I know who you were talking about, but there was a moment where I was like, is he talking about me? Nah, it's New Zealand. You can't trust New Zealand. That's what we like to tell them, because they're better than us. So I'm going to wait for my slides to show up and just give my real disclaimer, not my funding disclaimer, which is that I obviously have an accent, and if I'm speaking a little too fast for you, please shoot your hand up. I do have a 45-minute lecture on the differences in Australian accents, which I'm happy to give at the reception. I don't want to spoil it for anyone, but Steve Irwin's accent is not what you think.

Can I get this up? There we are. All right, very simple slides. So, to get started. Where am I going? Am I getting anywhere? There we are.

I'm very fortunate to be funded by the US Air Force Office of Scientific Research, through the Minerva Research Initiative. I'm also funded as a Greenwall Foundation Faculty Scholar; that's an independent bioethics research funder in New York City. Obviously my views do not reflect those of my funders, and they're very glad for it. And I'm very happy to say that this is a collaborative project between myself and some very, very excellent scholars, including Dr. Neil Shortland, who's my co-PI and a forensic psychologist at the University of Massachusetts Lowell; Dr. Blake Hereth, who is currently my postdoc at UMass Lowell; Dr. Jonathan Moreno, who's a professor at the University of Pennsylvania; and Dr. Michael Gross, who I think may have single-handedly invented military medical ethics one day in 2004 at the University of Haifa.

So I am going to talk about science fiction, and that's because 21 years ago I bought my copy of Halo, which for some of you will mean that you're going to be disgusted at how young I am, and for some of you, you'll be disgusted at how old I am. I want to come to this primarily because, as Jim pointed out, and I'll get to the technical term for this, there's a lot of hype around what brain-computer interfaces can and can't do. There was an article in Nature this morning which estimates that there are approximately only 35 people with an invasive, full brain-computer interface in their head right now. That's not a lot. For those of you who have done any biomedical research, you can think of the power calculations involved in getting any kind of clinical data out of 35 people. It's not great.
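[Editor's note: to make that concrete, here is a minimal sketch of the kind of power calculation being alluded to. It assumes a hypothetical two-arm study that splits the roughly 35 implantees into groups of 18 and 17; the statsmodels library is real, but the study design and numbers are purely illustrative, not from the talk.]

    # Illustrative only: what effect size would a two-arm study need to
    # reach 80% power if the entire worldwide population of invasive-BCI
    # recipients (~35 people) were enrolled?
    from statsmodels.stats.power import TTestIndPower

    analysis = TTestIndPower()

    # Solve for the minimum detectable effect size (Cohen's d) with
    # 18 vs. 17 participants, alpha = 0.05, power = 0.80.
    d = analysis.solve_power(nobs1=18, ratio=17 / 18, alpha=0.05, power=0.80)
    print(f"Minimum detectable effect size: d = {d:.2f}")  # roughly d ~ 1.0, a very large effect

    # For comparison: the power such a study has to detect a "medium"
    # effect (d = 0.5) falls well under the conventional 80% target.
    p = analysis.power(effect_size=0.5, nobs1=18, ratio=17 / 18, alpha=0.05)
    print(f"Power to detect d = 0.5: {p:.2f}")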
Nonetheless, we do have some really interesting things that we have learned, and all of these are really within civilian contexts, and all within the context of therapeutic uses of BCIs. So right now, what we're doing is getting people mobile in wheelchairs; we're getting people the ability to produce text on computers without needing to type, because they can't move their arms or they can't easily reach a keyboard. This is what we do with BCIs right now: therapeutic use. But what I'm going to talk about is enhancement. One reason is that it's something the US military is very interested in regarding BCIs, and has been for many, many years. The other reason, and this is a personal anecdote: the funding I currently receive from the US Air Force was originally an application to the National Institutes of Health. I feel free to say this now that I have tenure and no one can get rid of me. I applied for an R01; I got a really good score; we got to what's called "just in time," which is when we were doing our IRB work, and I got a call from a program officer. For those of you who work in the federal government, you'll know that any time someone from the federal government decides to give you a phone call and not send you an email, it's because they don't want to be fired. And he got on the phone and said, look, here's the problem: we can't have the director of the NIH explaining to Congress why the NIH is pursuing enhancement research. And my response was, what are you talking about? I'm a philosopher; I don't do enhancement. I don't enhance anyone; I can barely enhance my students. And he said, yeah, but we can't be seen to be doing enhancement research. So there is a clinical research ethics problem, which I will describe to you in just a minute, that the people who basically invented bioethics will not touch, for a bunch of interesting reasons. And so not only is this a military problem, but the military may be the only group currently interested in this bioethics problem in substantive ways. And that's a real problem for the scholarship of bioethics. Some private foundations are also, thankfully, interested in this, but it is something that needs to be solved, and currently the people arrayed here are in a better place to do it than the people who do most of the clinical research, and fund most of the clinical research, that goes on in the United States.

So: enhancing warfighters is not easy, and one of the reasons it's not easy is that we don't actually know what works. It's very hard sometimes to get that clinical information. And so what we have ahead of us, when we think about something as complex and as wide-reaching as a brain-computer interface, is not just an ethical problem but a scientific problem. How do we figure out what works, to a degree where we would be willing to put it in someone's head and then send them out into a kinetic operation, and hope that this thing, for example, doesn't chew their brain up the first time a rush of adrenaline hits their system because they're being shot at? How do we test that? And how do we manage the risks and benefits of that in ways that don't sacrifice people needlessly, but also generate the right kinds of scientific information to get us where we need to go for these kinds of problems?
So I want to make a real quick distinction, because I wouldn't be a philosopher without making a distinction at some point in my talk. I want to distinguish BCIs, brain-computer interfaces, as use cases. These are the operational ends of a BCI. At the moment these are therapeutic use cases: advanced prostheses, the ability to communicate. But increasingly they will be enhancement end-use cases. So in 2015, a woman in Pittsburgh who has a brain-computer interface was hooked up to a flight simulator, and she flew, well, she didn't fly an F-35, she flew an F-35 in simulation. And then she flew three, all at once, because it turns out that the parallel processing of the brain gives you the ability to do a lot of really interesting things, at least in simulation; they weren't going to give her real planes. To do things we've never been able to do before.

And we can distinguish between this and BCI as a platform: BCI as a thing we put in your head where we don't necessarily know what we're going to do with it yet. And we may not know, because we have a whole bunch of options, and it may depend on where you're deployed, and how you're deployed, and what you're deployed for. But we might also, increasingly, just put it in your head because we know it's going to be needed at some point. When you train, you train for a whole bunch of possible operations; we don't just train you for one, we train you to be able to do a whole bunch of stuff, and we're not going to tell you exactly what we're going to do with you over your career, because we don't know yet. So we might just give you a chip. Or, as Jim pointed out, we might give you a pill you take that laces you with conductive materials, which enable us to then develop a brain-computer interface inside you without the surgical option. And of course that's preferable for most people, because having your skull cut open is a pretty traumatic process, also, I've been led to believe.

The use cases here, just so we know: obviously, drone combat. I know I've had one conversation with a person here where, you know, we don't like to say the word "drones" sometimes. I do want to point out, though, that sometimes our etymology really matters, and I will come back to this. For those of you who don't know, the etymology of "robot" is from the Czech term for forced labor; sometimes we just gloss that as the Czech term for slave. And that'll become important in the context of BCIs. But people have been talking about drone combat using brain-computer interfaces since I presented my first paper on this in 2007, and that's actually where we met George: at a conference run by Paolo Tripodi and Jessica Wolfendale. You presented on the Human Terrain System and I presented on BCIs and drone combat, and that was my first paper published as a grad student, in 2007, and here I am 15 years later giving you a very similar talk.

I was at a Far Ridgeline exercise at USSOCOM, down in Tampa, and one of the participants mentioned that it would be really interesting if we could pipe optical data, or sense data, from an unmanned aerial vehicle, not a drone, into an operator through their optic nerve, without having to give them a camera screen or augmented reality.
They could see whatever the camera they are currently fed into can see, and actually represent this as the mental state of sense data. Now, we don't know how to do that yet, but in the last two years we have been able to create force feedback and sensory input for prostheses using brain-computer interfaces. So it's not just that you can manipulate objects with your prosthesis using a BCI; when you touch something, you experience the sensation of touching it through the metal arm. So we're closer than you think. And then in 2018, a study with rats showed that you could use a BCI in a rat to add infrared vision, as they called it. They didn't actually pipe it directly into the optic nerve, because they couldn't figure out how you add an extra layer of wavelengths into the mental states of the rats, because they don't know what rats think. But what they did do was plug it into the rats' brains, and they were able to show that, in complete darkness, the rats could use thermal cues, via their attached infrared devices, to navigate a maze. So we're already at that edge where we might be able to add extra sensory capacities to warfighters through a brain-computer interface.

And then Al Emondi and Bill Casebeer, both DARPA program officers, and Bill's been doing this for, gosh, a really long time, have been revolutionizing this entire program, and one of their holy grails is the ability to use a brain-computer interface to attach a person to an artificial intelligence. To take that thing we've been hearing about all day, that AI is designed to do the things that humans do but can do them better, and leave the bits that humans do really well to the humans, and just add them together: human-machine hybridization. And this is a real potential goal of BCIs.

Now I want to say something that I think often strikes people as controversial, which is that one of the problems with end uses is that many end uses aren't, by themselves, particularly interesting when it comes to the ethics of novel technologies. My mentor Larry May, who's at WashU, the first time I gave my talk on BCIs, put his hand up and said, look, I'm sorry, this is a little mean, but why is this important? Which, for a second-year grad student, is just crushing to hear. But he was right, in the sense that if you're piloting a UAV, and that UAV has weapons capability, and you perform an act that is a violation of IHL, does it matter whether you did it with your brain or whether you used a joystick, which, spoiler, is in some causal sense attached to your brain? It's not clear that BCIs produce novel kinds of ethical issues in this way. So the upshot is: if the US government is doing something that it shouldn't be doing, and it's doing it through BCIs, then it doesn't matter whether or not the BCI is involved; it's still a putatively unlawful, unjust, unethical action.

I'm very fond of this; please buy my book. There are a couple of chapters devoted to what Jonathan Marks and Cordelia Fine have referred to as "neurohype." And this is not a reflection on neuroethics per se; during the genomic era and the Human Genome Project we had exactly the same problems.
We find a bunch of end-use cases because scientists are really interested in them, and we go, oh, what are the ethical issues of these? It turns out that they're exactly the ethical issues we expected all along, because the enemy is still us, and we're still worried about our own behavior. But I think there are important new developments we can think about, and those are BCIs as platforms. Platforms are use-independent in some ways. Think about the internet: we can use the internet for everything from family photos to warfare and, famously in the musical Avenue Q, pornography. The big question is: what are the ethical issues around BCIs as platforms? Jim's already hinted at some of this, but I think this is where some of the really interesting work can be done, and where some of the risk assessment has to be very careful, in part because we're not just looking at the 10-year milestones of what this research can possibly do. Because we're trying to engage in a program of institutional change, we have to think about the 20 and the 50 years, just because institutions take a lot of work to change, and if you don't change them in the right way the first time, you may not get a second chance.

So there are ethical issues around using BCIs as platforms, that is, as the tool that could be used for a range of other end-use cases, which is how we develop them: from a concept of final use; their properties as military technologies, and their users; proliferation and restricted uses. These are the kinds of issues that will arise from BCIs as platforms rather than merely as end-use cases, because for a lot of the end-use cases, as I've just argued, we already kind of know how to deal with them.

So I'm going to adopt what a philosopher would call a casuistical method, in the sense that I'm not going to give you an analytic framework for how we should think about these. I'm going to give you three, I think, really important or key cases for us to think about as we move forward into discussion. The first, and this is my baby here, is the ethics of BCI research for military applications, and "research" does a lot of work here. How do we get from our n of 35, the total number of BCIs in people's brains today, to a clinically validated BCI where the FDA, or whichever regulatory apparatus we need, or think is justified, says: you can go for this? And that may include cost-effectiveness measures; it may include long-term follow-up. How does the BCI wear? I teach a whole course on the ethics of medical device design, and let me tell you about the number of experiments where people have had an artificial hip put in, and everything looks good, and they go away, and then at the five-year follow-up everyone's grown a biofilm, and they're all now on long-term antibiotics, or having to have their hips replaced again, because there are weird bacteria growing in their system. We're going to put something in you, potentially even just through a pill, that may grow things that really have no business being in your brain. So we have really strong manufacturing concerns to think about, and this provokes some research ethics issues.
Before I get off the necessity topic, I just want to point out that modern American bioethics treats research as justified typically when there is a therapeutic use. If you go to an IRB and they ask, what's your sample population for putting this BCI in people, and you say, healthy volunteers with nothing wrong with them, but we'd really like to find out what happens, the IRB, and I speak as the chair of an IRB, is going to go: oh, no. And maybe not even for ethical reasons. I'm a huge proponent of enhancement research; that's my gig right now. But for the purposes of abiding by the Common Rule, which guides federal research, at least federally funded research (Elon Musk can do whatever he likes, because it's his money), it's very hard in this country to get a justification without a therapeutic rationale, and that's a huge issue that actually requires serious regulatory overhaul. The DoD may be able to do it through a DoDI, but ideally we want this to be agency-wide, and that would require an ANPRM. And for those of you who don't know the history of updating the Common Rule, the last ANPRM was an absolute disaster that consumed half of bioethics for almost a decade, as people argued about exactly which bits were and were not going to be updated in the Common Rule.

But even if we did that, say, okay, you can do enhancement research when it's necessary to do enhancement research, the next question comes in: what constitutes necessity for enhancement research? Now, there are strong and there are weak accounts of necessity, and anyone who has taught the ethics of armed conflict will note that what it means for an act to be necessary in military ethics, for example, really depends on who you're reading on the day. There are weak accounts of necessity, where it simply has to be something that aids in a particular armed conflict operation, or allows a force to perform its operations better. But there are also strong accounts of necessity, advocated by people like Jens Ohlin and Larry May, where there has to be a really direct causal mechanism that you can show leads to better outcomes in conflict. Now, fortunately, we're not bounded by this in medical research, and it would be overly stringent. But I want to point out that this doesn't allow for simple harm reduction approaches to enhancement research. Because one way you could do enhancement research, in a way that I and my postdoc Blake have advocated recently in a paper, is simply a harm reduction approach. And here's the idea. I used to teach at the University of Pennsylvania, and I noticed very quickly that a lot of my kids were a little jittery, and I asked some questions, and it turned out that these future aides for senators and legal clerks for Supreme Court justices were not only taking Adderall when they absolutely did not have ADHD; they were taking Adderall and modafinil together in order to cram for finals. And I went over to the medical school and said, hey, what are the interaction effects between those two things? And we did a little bit of research. It turns out no one knows. Total black hole. Why? Because we haven't done good research on Adderall or modafinil as performance enhancers; we just don't have the clinical data on it.
And so I've recently advocated for what's effectively a harm reduction approach to enhancement. I've had an untold number of cups of coffee today; I am a performance-enhanced individual, as we all are, I suspect. But if I'm going to do this anyway, the harm reduction approach isn't "you can only do this in situation X." It's: if you're going to do this, be safe, do it with friends, and if things get weird, tell someone and go get help. Which is something we're more used to with permissive parents, let's say, around their teenagers, than with fully functioning adults. But again, a member of this Far Ridgeline exercise at USSOCOM once said to me: I'm not in the NCAA. If I need it, and it helps me complete my mission, I will do it, I will take it, I will have it put in me. So I think that's a really good reason to do performance enhancement research. But if we think that necessity matters, then we need to be really careful about which necessity this is, because we might not be permitted to simply test whatever we like on service members for the purpose of performance enhancement. We actually might have to show a pathway to a suite of performance enhancements, for example for a specific service, or across the joint services, that shows some important operational benefit we're trying to achieve. Rather than, and this has happened, absolutely, in the United States military, usually prior to World War II but also after it, where we go: we're going to give you a bunch of things and we're going to see what happens. Which is not an ideal way to approach ethical research.

All right, finishing up. I want to talk about the returning soldier real quick, and Jim's already foreshadowed this. I think that BCIs going back out into the community are going to present certain security risks. There's an information security problem, because, at its heart, what a BCI is, is an object that connects the information inside your brain to the information on a set of computers. And what it does is present a possible vulnerability: to the computers it's connected to; to the brain, or brains, because brain-to-brain communication is on DARPA's wish list for BCIs; but also within the hardware itself. Hardware stores information, and it may store very important information. And especially if we are going to implant these in people, whether invasively, surgically, or through magic pills, we definitely want to think about whether we're going to take them back out, and if we are, we need to be really clear on what the possible side effects of taking them back out are. And this goes to the level of what a small literature refers to as neurosecurity: if you can get into someone's brain, then you can read the information off of it. Eventually we will know how to interpret that information reliably; we're still getting there. But there is a whole new set of operational considerations around whether someone who has had a BCI in them, or continues to have a BCI on them, becomes an information risk in themselves, and what we do with them, or what we should allow to be done with them, as they go about their life post-service.
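[Editor's note: one way to make the neurosecurity point concrete is to note that the implant-to-computer link is, at minimum, a data channel that needs encryption and authentication like any other. A minimal sketch follows, using the pyca/cryptography library; the packet contents and device ID are hypothetical, and this illustrates the threat model only, not any real device's protocol.]

    # Illustrative sketch: treating implant telemetry as a channel that
    # must be both encrypted and authenticated, so a bystander can neither
    # read neural data nor inject stimulation commands. The packet format
    # is hypothetical; this is not any real BCI's protocol.
    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    key = AESGCM.generate_key(bit_length=256)  # notionally provisioned at implant time
    aead = AESGCM(key)

    def seal(packet: bytes, device_id: bytes) -> bytes:
        # Fresh nonce per message; the device ID is bound in as associated
        # data, so a packet replayed to a different implant won't verify.
        nonce = os.urandom(12)
        return nonce + aead.encrypt(nonce, packet, device_id)

    def open_packet(blob: bytes, device_id: bytes) -> bytes:
        # Raises InvalidTag if the ciphertext was tampered with in transit.
        nonce, ciphertext = blob[:12], blob[12:]
        return aead.decrypt(nonce, ciphertext, device_id)

    sealed = seal(b"neural-feature-frame-0001", b"implant-7f3a")
    assert open_packet(sealed, b"implant-7f3a") == b"neural-feature-frame-0001"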
All right, finally: policy and restricted uses. Again, this is science fiction, but I think it's really important. We are starting to see the ability for BCIs and AI, working together, to modify memory. That is a really important therapeutic application, for example in dealing with post-traumatic stress disorder and also moral injury. However, if you could modify memories, you could potentially modify putative violations of IHL, or the memory of putative violations of IHL. The RoboCop problem: you could allow someone to do something, or make them do it, and then you could erase the memory of the fact. You could eliminate one of the witnesses, in the form of the memory of the offender. And importantly, and I don't want to besmirch Paul Verhoeven by talking about the remake of RoboCop for very long, you could potentially do it in a way where it's not clear that the person whose memory is being modified is going to know that their memory has been modified. We already know from cognitive psychology that you can implant and remove memories through behavioral modification where the person does not realize their memories are fake. They have a real experience in their own mental states; it is continuous with everything else. The brain isn't good at going, oh, that's definitely a thing that didn't happen to me. And so that makes for a really challenging set of circumstances, and there may be policies we want to put in place that limit what we can and can't do with a BCI when it comes to memory modification, as a matter of law.

As for an international instrument: I am very skeptical about an international instrument around neurotechnology. People talk about a neuro treaty every couple of years, and I roll my eyes in ways my doctor would rather I did not. The main reason is that I think the experience of the so-called campaign against killer robots really shows there are real limitations on multilateral instruments, at least as they are designed today, and certainly as they are designed when Russia is deciding not to play ball, around what we could plausibly do in the international space here. I don't think we shouldn't have that conversation about what the international order should be. But as someone who works on the Biological Weapons Convention in his other role, I think we need to be really clear about what the actual ask for something like neurotechnologies could be: whether we want to rely on a verification-based mechanism, where we say, these are the limits on what you can and can't do with BCIs, and we're going to come in and check your factories and things like that; or whether we want to engage, as Jim seemed to illustrate in part, in what the biological weapons community would call a confidence-building exercise, in which, rather than starting with a level of restrictions grounded in mutual fear and mistrust, we start advocating for people to open up about exactly what they're doing with their BCIs, as a show of good faith, so that we know we're all on the same page regarding certain kinds of norms. And even though our trans-Pacific near-peer competitors may have certain kinds of norms, they are certainly able to be negotiated with in certain ways, and there may be common norms, just like the common norm we share against biological weapons, where, for example, certain kinds of cognitive liberty violations are ruled out as a matter of the way we design and implement our BCIs.
All right, that's me. Thank you so much for your time; I'm really looking forward to this discussion with you and with Jim.

[Applause]

Questions from the group? Sir, straight back, right down the middle.

Jason Ingersoll, from the United States Military Academy. Gentlemen, that was one of the most interesting briefs I've ever seen. My question is about something you mentioned: building resilience, and the possible implications of treating something like PTSD. There are clear advantages for soldiers, combat veterans, who already have PTSD, but what happens if this technology is given to a government? Does it increase the likelihood of them putting soldiers in situations where they can develop PTSD? And what are the implications if militaries, or the soldiers in militaries, are essentially immune to these highly traumatic situations?

It really speaks to Nick's casuistry model. Number one, this has been an actual consideration. Number two, the idea is, if we engage in preemptive screening, utilizing these different forms of neuroscientific techniques and technologies, are we able to pre-identify certain individuals who may have predispositions or certain biases toward what we call performance capability trajectories, or, if you will, certain vulnerabilities? And the idea there is not to keep these individuals out of military service per se, but to recognize that some individuals may need to be segued into particular silos of operational duty that are more accommodating of their particular biological risks. Those risks run the gamut, literally, through the biopsychosocial, from the genetic all the way to the phenotypic, and from the cellular to the social. There is a model for doing this: for example, we can look at our Israeli colleagues, where national service is an obligatory responsibility, and what we find is that national service, including the military, engages a broad spectrum of what are considered to be neurodiverse individuals. But over and above that, the level of screening is also very, very important for ascertaining individuals' vulnerability to risk and harm. This is not a precedent that has to be made; we've already done it in the military. As a former aviator: the idea of going through aviation screening, the idea of taking an individual's genotype and phenotype, for example for a sickle cell heterozygous diathesis. Who is capable of engaging in certain types of activities? Are they going to be physically qualified, and if the answer is no, can we physically qualify them, and let that be a risk that is assumed? That already exists, at least on paper and in precedent, in practice. And the question then is, as we expand the portfolio and palette of what is available, will we then translate those things into practical use? Which is an open question. Nick?

So, I take it part of your concern is almost like a moral hazard argument: we want to prevent this bad thing from happening, but maybe that's going to lead us to do more of the bad thing in order to meet our own needs.
And I think that's a really... I haven't thought really hard about this, but I think there are a couple of things to say to it. One is that we might still have a population-level view that gives us some idea that it's happening; we might actually be able to look for it. I mean, not all treatments for PTSD are successful, even BCI-based ones. The way they typically do BCI-based treatments for PTSD is a form of exposure therapy, but they use a non-invasive brain-computer interface to modify transcranial direct current stimulation, passing an electric current through a person's brain, and the BCI watches for the changes in neural signal and modulates the electric current. It has a decent effect size.
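[Editor's note: a minimal sketch of the closed loop just described: watch a decoded neural signal and adjust the stimulation current toward a target. The read_arousal_index and set_stimulation_ma functions are hypothetical stand-ins for device I/O, the proportional rule is deliberately simple, and the 2 mA ceiling reflects typical tDCS limits; nothing here is an actual clinical protocol.]

    # Illustrative closed loop: monitor a neural signal during exposure
    # therapy and modulate tDCS current toward a target arousal level.
    import random
    import time

    TARGET_AROUSAL = 0.5   # normalized target, notionally derived from baseline EEG
    GAIN = 0.8             # proportional gain (mA per unit of error)
    MAX_CURRENT_MA = 2.0   # hard safety ceiling typical of tDCS protocols

    def read_arousal_index() -> float:
        """Hypothetical stand-in: decode a 0..1 arousal index from EEG.
        Simulated with noise here so the sketch actually runs."""
        return min(max(random.gauss(0.6, 0.1), 0.0), 1.0)

    def set_stimulation_ma(current: float) -> None:
        """Hypothetical stand-in: command the stimulator, in milliamps."""
        print(f"stim -> {current:.2f} mA")

    def run_session(duration_s: float, dt: float = 0.5) -> None:
        elapsed = 0.0
        while elapsed < duration_s:
            error = read_arousal_index() - TARGET_AROUSAL
            # More arousal than target means more current, clamped for safety.
            current = min(max(GAIN * error, 0.0), MAX_CURRENT_MA)
            set_stimulation_ma(current)
            time.sleep(dt)
            elapsed += dt

    if __name__ == "__main__":
        run_session(5.0)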
55:41 Yes? 55:47 Thank you. Hi, William Applegate, George Mason University. What I noticed in the way you were 55:53 talking about ethics in terms of BCIs is that you seem to be using the language of figuring out what 55:59 constraints we should have on the proliferation and development of these. That's different from the kind of 56:06 ethics talk we had earlier today, where ethics was framed as a competitive advantage for industry and 56:12 research. How would you rectify that? If we're 56:17 going to have BCIs, I would think at some point the ethical discussion is going to be, well, here are some competitive 56:24 advantages to ethical behavior, rather than, as you said, short-term ruthless brutality 56:30 and we'll deal with it later. So I wanted you to give us some insight into where ethics can make us better, not 56:36 just keep us on the straight and narrow. Part of 56:42 the way I approach that: I haven't been a law guy in a while; I taught at the 56:47 Australian Defence Force Academy, like, 12 years ago. I'm a non-proliferation guy now, so I'm one of the really 56:55 dusty humans who shows up to Geneva and says, maybe we shouldn't make biological weapons, maybe that'd be a bad idea. 57:01 So I do think in the language of restrictions, and I own that. 57:06 Where does the ethics make us better? One of the things here, again, is that we 57:11 talk about performance enhancements all the time. If you go out on PubMed and type in 57:17 "performance enhancement" and do a literature search, you'll come up with a few thousand results, which sounds like a lot, but PubMed is, 57:23 what, 25 million articles, and 10 million of them are about COVID these days. 57:29 Eight percent of all science, all science, done in 2020 was COVID science, just to be 57:35 clear. It's crazy. What you'll find is that all the performance-enhancement research looks like this: 57:41 in a trial of cyclists doing anaerobic time-trial activities, does a liter of 57:49 beet juice three hours before competition increase, like, output or 57:55 speed or something like that? Why? Because you can get that past an IRB; the IRB says, oh, this is a no-risk problem. 58:02 I mean, if you get better at cycling from drinking beet juice, fine; you're not going to get hurt from 58:08 drinking too much beet juice, as far as I know. [A sketch of that kind of PubMed query follows.]
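Nick's back-of-the-envelope PubMed numbers are easy to check. Here is a minimal sketch using NCBI's public E-utilities esearch endpoint, which is a real, documented interface; the counts change daily, so don't expect the exact figures quoted on stage.

```python
# Count PubMed hits for a query via NCBI E-utilities (esearch).
# Uses only the standard library; prints the total record count per query.

import json
import urllib.parse
import urllib.request

EUTILS = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def pubmed_count(term):
    """Return the number of PubMed records matching `term`."""
    url = EUTILS + "?" + urllib.parse.urlencode(
        {"db": "pubmed", "term": term, "retmode": "json"}
    )
    with urllib.request.urlopen(url) as resp:
        data = json.load(resp)
    # esearch returns the count as a string, e.g. {"esearchresult": {"count": "4321"}}
    return int(data["esearchresult"]["count"])

if __name__ == "__main__":
    for query in ("performance enhancement", "COVID-19"):
        print(query, "->", pubmed_count(query))
```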
So what the ethics can really do here is 58:14 actually create a clinical pathway for enhancement research, which is hugely important. I gave this 58:22 example to Paul, who's now left us. 58:28 Very, very long story short: in 2020 I was finishing a book, going up for 58:33 tenure, and my wife was dying. She survived, which is really 58:38 important to this story. I wasn't sleeping, so I rang my PCP, 58:44 and my PCP said, yeah, you have shift-work disorder. I'm like, I don't work a shift, I'm an academic. She's like, no, 58:49 you have shift-work disorder, I'm going to prescribe you modafinil, which, for those of you who don't know, is 58:54 a wonder drug that lets you stay up for up to 60 hours without needing sleep; and unlike a lot of amphetamines, which 59:00 will also do the trick, you don't lose much of your executive function. So I'm like, great, I'm going to do my edits, 59:06 I'm going to write my book, pumped up on modafinil, it'll be great. What happened is a colleague of mine sent a paper, 59:12 actually, I think, a paper of David's on torture, and said, you should read 59:18 this and think about it. I started thinking about it, and I realized I had no creative powers after 60 hours on modafinil. 59:25 I wasn't expecting that; I thought I was going to be immune to all the problems of sleeplessness. It turns out not. I 59:30 asked a couple of my colleagues about this, and they had exactly the same experience: I could do line edits on 59:35 my book for days on modafinil, but I couldn't think creatively about a single paper on it. Is that a clinically 59:41 meaningful effect? Is that a large effect size? Nobody knows. But we do have one study with chess players which showed that 59:47 their self-reported skill after 60 hours playing chess on modafinil held level, 59:53 while the complexity of the moves they were making degraded over those 60 hours. So we know that creativity 59:59 seems to suffer. Now, if you're acting on muscle memory and you're on modafinil, that's great. But if 1:00:04 you're on a long-range op and you're being asked to decide whether the data 1:00:09 you're being given signifies a target, and you have to make a decision about lethal force, creativity might actually 1:00:15 be good. 1:00:21 So how do you do this? The answer from the bioethicist is: you do a clinical trial. Can I get that clinical trial approved? Not in this country. So 1:00:27 now we have an ethical problem where we could actually revise bioethics writ large, make the fighting force better, and keep 1:00:33 people safe, all at the same time. That would be my triangulation of the "can ethics make us better" problem. 1:00:40 I'm going to be the devil's advocate, and do so by way of example. 1:00:46 Nick is absolutely right: if you look at the PubMed literature, a lot of what's out there on performance enhancement, particularly as it 1:00:51 relates to substances and not necessarily neurotechnology, is not only vague, it's really ambiguous in its results, 1:00:59 and a lot of it is, I'm not going to say useless, but of nominal utility. 1:01:04 However, if you go to the restricted literature, and I have to be very, very careful here (in other words, this is not 1:01:10 guilty knowledge), regarding the capability of doing certain things, with not only substances but also technologies, in 1:01:16 terms of increasing things like vigilance and performance capability in ways that are operationally meaningful, 1:01:23 there is a building body of evidence suggesting this is vectorable. Now again, what is the effect size? That, again, is an issue which, as Nick 1:01:31 very well illustrated, is in some ways constrained by the actual paradigms and protocols 1:01:36 themselves. [A toy effect-size calculation follows this exchange.] But here's the break: there is an entity that realistically 1:01:42 becomes the valuable silo for this to occur, and ethics should advance it; I alluded to this, as did Nick, 1:01:48 and that is military occupational preventive medicine. The issue here is really one of, if you 1:01:55 will, the tolerant though highly parentalistic, not paternalistic, parent, 1:02:02 who is able to see over the relative horizons of probability and possibility, to appreciate potentiality, 1:02:10 and to recognize what might be important to be ethically permissive of, rather than constrictive of, so as to 1:02:17 explore the safety-and-effectiveness space, not just the efficacy space. And that's a work in progress. 1:02:23 One really quick extra comment on this. One of the reasons to bring this out of the restricted literature and into the light, 1:02:30 if you will, goes back to your point about PTSD. 1:02:36 Every doctor in America today could really do with some therapy right now, 1:02:41 and it would be really useful to have some of this literature available, because the problems we 1:02:46 are often trying to treat, and the enhancements we're often trying to give, here in the military have civilian 1:02:52 counterparts. The ability not only to help our civilian counterparts with their own 1:02:59 problems, but also to have open literature, for example for primary-care providers dealing with returning 1:03:05 servicemen, the ability to know what the enhancement has done and what the long-term consequences are, is going to 1:03:11 be really, really important as we think about that post-deployment, post-retirement enhanced-warfighter 1:03:16 paradigm, which we're entering into soon. Sir? 1:03:24
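Since both speakers keep returning to the question "is that a large effect size?", here is the standard first-pass calculation: Cohen's d with a pooled standard deviation, run on invented numbers. The sample values below are purely illustrative and are not from the chess study or any other study mentioned.

```python
# Cohen's d with pooled standard deviation: the usual first answer to
# "is that a large effect size?". Sample data are invented for illustration.

from statistics import mean, stdev

def cohens_d(group_a, group_b):
    """Standardized mean difference between two independent samples."""
    na, nb = len(group_a), len(group_b)
    sa, sb = stdev(group_a), stdev(group_b)
    # Pool the two sample variances, weighted by degrees of freedom.
    pooled = (((na - 1) * sa**2 + (nb - 1) * sb**2) / (na + nb - 2)) ** 0.5
    return (mean(group_a) - mean(group_b)) / pooled

if __name__ == "__main__":
    # Hypothetical move-complexity scores: rested vs. 60 hours on modafinil.
    rested = [7.1, 6.8, 7.4, 7.0, 6.9, 7.3]
    on_modafinil = [6.2, 6.5, 6.0, 6.4, 6.1, 6.6]
    # Rough conventional benchmarks: 0.2 small, 0.5 medium, 0.8 large.
    print(f"Cohen's d = {cohens_d(rested, on_modafinil):.2f}")
```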
Thank you very much. Thank you very much, gentlemen. Mike Wood, Marine Forces Cyber Command. You both alluded to 1:03:30 one of the things you're thinking about: once somebody who has been 1:03:36 bio-enhanced is leaving the service, do we now have to take the enhancement back? Do we have to 1:03:43 shut it off? What are the implications there? And aside from that, what considerations are being made 1:03:50 about whether, once we bio-enhance an individual, that individual is no longer considered 1:03:56 human; that the individual loses their humanity, and we now treat that individual as 1:04:02 inventory? Yeah, that's a wonderful question. It 1:04:07 gets to the limits of cyborgization, and that's a real word; it's Paolo Benanti's, 1:04:13 I think it's his buzzword, Paolo Benanti at the University of Rome. 1:04:18 The question here, and you want to avoid some of the rhetorical stuff, the posthuman, the political transhuman, 1:04:25 is to look at the human in transition. Characteristically we say humans are tool users, and ever more of those tools are incorporative tools, not 1:04:32 just executive tools. The question then becomes: what is the nature of, quote, the humanity that you're 1:04:39 looking to preserve? Is that an optimized human, according to a range of functions? And of course the actual 1:04:45 distinctions, what represents therapeutic intervention, capabilization, and ultimately modification, 1:04:52 structural modification beyond, for example, a physiological or anatomical norm, those have to be defined. The Air Force, 1:04:58 for example, was very interested in that a number of years ago and wanted those definitions. But there's more to it; it's a 1:05:03 philosophical issue, and I'll pass the ball to Nick on that, because I think it speaks very, very strongly to both the 1:05:08 axiological and the ontological issues that go along with this. If I do X, Y, and zed to you, are you mine? 1:05:15 Do I own that part of you? And even if that's implicit, in other words, if I do something to you that 1:05:21 involves some irreversibility, is that irreversibility some level of countenance or provenance 1:05:28 that I either have to be responsible for, or in some way actually own, by virtue 1:05:34 of obligation, ownership, or perhaps intention alone? I'm not a lawyer, 1:05:40 but it's a philosophical question too. Nick, over to you. So let me break that into two parts. One, 1:05:46 and I'm just going to throw my cards down on the table, I am very skeptical about the idea that we stop being human 1:05:52 when we enhance ourselves, at least for a huge 1:05:57 range of enhancements. There might be, I don't know, some point where we become, again, science-fiction beings of pure energy or something. But 1:06:05 to give you an example: I have two steel pins in my shoulder, from, if you ask my wife, terrible mistakes; if you 1:06:10 ask me, a life well lived. When I went in to get my shoulder 1:06:16 surgery up in Massachusetts, at Mass General: I've done rounding on wards as a 1:06:22 bioethicist before, and I'm normally the youngest, healthiest, fittest dude on the ward. I walked into this ward and 1:06:28 I was the oldest, fattest, ugliest dude on the ward, and I'm like, what is going on? That's because the injury I had is 1:06:33 normally one that NFL players get. So I was hanging out with four NFL players getting nerve blocks, which 1:06:41 is kind of wild. And I mean, these dudes are enhanced.
1:06:47 I mean, they are. And there's an open question: 1:06:52 are they still people? I think this is 1:06:58 where we really have to think about humans as tool users: all of us have, at some point in our lives, I think, 1:07:04 taken something on, whether it's cadets at the gym 1:07:10 thinking real hard about how much protein they can get into their bodies, through to brain-computer 1:07:15 interfaces and things like this. Now, that goes to the property 1:07:21 issues: once we put something in you, does that stay your property, or do you become our property? 1:07:28 Now, as I understand it, the Constitution has some things to say about that, but 1:07:33 there really are some deep questions of policy that we're going to have to answer about what 1:07:38 we do to people and what they get to take home with them. I think the sensory-data 1:07:45 example is the best one for me. Let's say you're allowed to keep your BCI because it's an off-the-shelf BCI (Elon 1:07:51 Musk provides BCIs to the whole service), but everyone else has one too, so that's not the problem. It's 1:07:57 that when you go into the service with your BCI, we attach new sensors that no other human has: you can see 1:08:03 electromagnetic radiation in the ultraviolet spectrum, or something like that. Very cool. 1:08:09 Do we take that away from you and send you home? I think that, 1:08:15 on the surface of it, it feels like once you have lived with that for a certain amount of time, we are effectively 1:08:21 blinding you when you leave the service. And that's because human brains are 1:08:26 plastic (Jim knows much more about this than I do), and you are going to get used to being that 1:08:32 person. People who lift weights get used to being people who lift weights, and when they stop lifting 1:08:38 weights, they go through weird stuff. And we're 1:08:43 going to turn that up to a thousand: I could see ultraviolet light, and now I can't see ultraviolet light anymore. I had 1:08:49 an artificial-intelligence buddy who lived in my head and gave me tips on how to 1:08:56 process intelligence, and now he's not there anymore. And yes, I called him a "him," because we already know that 1:09:01 soldiers will gender the remote-control cars they use to find IEDs, and give them funerals when they blow up. 1:09:08 So we're going to be that much more attached to the dude who lives in the chip in our brain. 1:09:13 And I think there are really good normative reasons to say that once you put that in someone's head, and once you give them that sight or that new 1:09:20 capacity, you either have to have really well-defined parameters about what you're 1:09:26 going to do when you take it away, or you have to make certain kinds of commitments: 1:09:31 you've modified my body irrevocably, but it's still my body; I gave you consent to the 1:09:36 modification, but I didn't give you consent to take it away. I think that's going to be a huge legal and normative challenge in 1:09:43 the future. Let me add to that. The other issue is: what happens if you get version 1.0, and five years later version 2.0 1:09:49 comes out,
and now you're obsolete? And by putting the thing in your head in version 1.0, there's already been what we 1:09:55 call parenchymal reorganization; in other words, literally, the nodes and networks change, so you're not a candidate for version 1:10:01 2.0. And, just before Ed tackles us both off the stage: 1:10:07 or you get 1.0, 2.0 is released, 2.0 is not actually better, but the military no 1:10:12 longer supports 1.0, just like anyone who's had a phone for too long. It breaks, 1:10:18 and then it's, well, sorry, the Genius Bar doesn't fix brain implant 1.0 anymore; 1:10:23 you're just going to have to upgrade to 2.0, which costs God knows how much, and, 1:10:29 just so we're very clear, no insurance company is going to insure a BCI anytime soon. 1:10:35 I think this is a real problem: you may be committing to supporting service members with BCIs for 20, 30, 40, 1:10:43 50 years, even after 2.0 or 3.0 or whatever comes out. So that's an economic challenge as well that needs to 1:10:49 be addressed. So much more could be said, so let's 1:10:56 continue the conversation over cocktails, starting now. Thank you, gentlemen; that's a perfect transition. 1:11:08 We have all the relevant capabilities on the table now, I think, and tomorrow we're going to 1:11:13 delve more deeply into the ethical issues that you've brought up so eloquently. So thank you very much.