A couple of these comments got me thinking along similar lines today.
Karlsson talks about being born in '89 and thus being among the last to remember a time before the internet, and about how this world-changing technology grew and developed alongside, and intertwined with, his own. I sometimes wonder how different I might be if the same were true for me.
As you might infer from my screen name, I was born in '73, and am thus among the youngest people to grow up entirely without the internet. More accurately, what we call "the internet" today is what we first called the Web (some might recall the "information superhighway" branding). By most accounts "the internet" as an operational physical network was born the same year I was, but HTTP, the protocol that lets users navigate through web pages and hyperlinks, was unleashed on the world in text form the year I graduated high school, and in graphical form in the middle of my undergraduate days. It's the latter, of course, that brought the internet to most people's attention (mine included).
It's maybe because of this that I have a particular nostalgia for simpler times, which is something I never thought I would say growing up. I was never anti-technology; in fact, I felt lucky to be growing up when I was. I couldn't imagine how dull life must have been before television (my father told of being 14 when his family first owned one). Moreover, the increasing proliferation of electronic amusements throughout my youth (I mean, playing video games at home?! Get outta here!) promised an exciting future, one I was eager to embrace. I learned BASIC programming as a pre-teen, eagerly consuming the code listings in "3-2-1 Contact" magazine and thrilling to the advancements in computer technology every few years. I was ready for the future.
I don't know when technology started to seem like it was suddenly going too fast. Perhaps it was when, as a technical professional, I experienced the disillusionment of hard-won expertise becoming obsolete. Perhaps it was watching the internet morph from a computer nerd's paradise of open-source code and online gaming into a digital version of the mall, where the cool kids now hung out despite having little appreciation for what they'd been gifted by the outcasts they'd always looked down upon.
Or maybe it was when it all just started to feel like too much. When some people honestly started believing that the baloney sci-fi future we'd all been ingesting in the form of countless books, TV shows and movies for all our lives was actually just around the corner and now worth carb-loading and sprinting toward. As if we hadn't learned anything from the conspicuous absence of our long-yearned-for jet-packs, people are now talking about creating conscious androids and downloading our minds into computer storage – stuff that most (but not all) people with significant technological expertise recognize as fanciful nonsense, but which people with a more casual relationship to technology take more seriously.
Maybe this is why I have never liked talking to computers, and stubbornly refuse to use voice technology, despite the fact that my parents have no problem with it. My voice is for communicating with people, and if I were to embrace the convenience of voice commands, I would prefer it to be without the familiarity of normal conversational speech (using some abbreviated set of explicit commands) and certainly not by invoking it with a human name. Maybe this is silly and obsessive, but I'm not alone. For example, there has actually been some concern among child psychologists about possible social developmental harms of children interacting with Alexa and Siri in a person-like style, without getting the feedback that would normally mediate and regulate our social interactions.
But beyond this, I see an overall reluctance among people to embrace things that blur the lines too much between the simulated, artificial world of technology and the real world of human beings. I think it's why VR and AR have proven costly failures, why 3D video has never been more than a fun gimmick, and why I am more sanguine about the threat of AI. One could argue there's even a loose relationship to the failure of cryptocurrency, in that it's simply too disconnected from the tangible and real (although the biggest disconnect there regards its lack of connection to the world of real problems in need of solving).
And yes, I suspect it is perhaps related to why Alexa and Siri have only been moderately successful. It all just feels like too much. And I find this incredibly heartening and reassuring.
I don't think VR has been a complete failure. The use cases are definitely getting pared down to what's actually useful with the tech as it stands, while the obstacles to wider adoption are worn down. That said, I think it will be transformative for online education sooner rather than later. It's perfect for transporting a remote student into a shared space with their fellow remote students while isolating that student from real-life interference. It's far more engaging than a Zoom session, and online students have to use a meeting solution for every class anyway. Asynchronous curricula won't completely go away, but I don't see how VR doesn't become the default premium option for online education.
It will likely also get factored into on-the-ground classes where appropriate. Want to take your students on a tour of a dig site as an archeology professor? Simple. Want to see what a multivariable equation actually looks like graphed as a math major? Easy, and you can even rotate it around in your hands to see it from all angles. Want your students to dissect a cadaver in anatomy class? Trivial. Want to walk your students through a Japanese grocery store to show the product displays as a marketing professor? It can do that too. It's harder to justify giving a pair of goggles to each student in this use case (I'm skeptical of computer-lab-type solutions, and consumer adoption isn't widespread enough for BYOD), but I think it's close, and more rentable/purchasable content libraries will put it over the top; no further hardware revisions needed (though they will help as well).
I don't doubt that VR has niche applications, like training pilots and perhaps even surgeons, assuming the quality of the simulations is up to par.
Other situations I'm more skeptical about. For instance, to me the benefit for online learning is marginal at best. A pair of noise-cancelling headphones does just about as much for avoiding real-life distractions, and I think most people wouldn't be comfortable entirely shutting it all out anyway — local circumstances sometimes require attending to, distraction or not.
Also, is it really "far more engaging" than a Zoom session? A lot depends on the quality of the simulation — would camera technology capture people's facial expressions and body language, and project 3D versions of them into the simulation, thus correcting for one of the supposed downsides of 2D video? How would this work if only some people are using VR? How much work or setup would this involve on the part of the teacher or the online university? I personally wouldn't have any interest in it, premium pricing aside.
Basically every application of the technology is going to have to weigh accuracy of simulation, cost of setup and implementation, and marginal value over simple alternatives like pre-recorded or real-time video. All of the learning applications you describe above strike me as cute, but certainly not simple or trivial and most likely not worth what would probably be significant extra cost.
Similarly, I've heard some people talk about "virtual vacations", which strikes me as naive. Sure, to the extent your vacations typically only involve the use of two of your senses and minimal body movement, along with the reality of possibly having to attend to circumstances in your local home environment at a moment's notice, I'm sure it will be just like a trip to the beach. For most people, the convenience just isn't going to justify swapping out real life for a flawed simulation.
This hits me where I live. I subscribed to 3-2-1 Contact, too. And had Prodigy. And a Tandy.
Yep, I composed my first program on a TRS-80. We never had a modem or an online service, so my interests were restricted to the academic realm of programming. Probably why I took a detour and became a math major in college before getting back into coding.