With the exception of "Home Soil"[TNG1], in which Crusher instructs the computer to theorize on something, the TNG era mostly showed us computers used in a way not dissimilar from computers today. You ask it for information, make your own theory or determination, and run with it. Or, via the vocal equivalent of clicking a "beam me up" icon, you tell it to do something it's already programmed to do.
In TOS, though, it seemed that the computer . . . for all its whirring and clicking . . . was doing more thinking. Computer theorization seemed to come up more frequently. In "Mirror, Mirror"[TOS2], for instance, Kirk poses a few questions to the computer and lets it do the theorizing, even having it provide an instruction list for applying the resulting theory. Scotty follows its reasoning just closely enough to know he'll need some help doing the work, but that's it.
It may be that TOS was actually closer to the future reality in that case. A rather interesting article suggests that sheer volume of information, combined with mathematical correlation, might allow for a computer that serves more as an analytic theorizer than as an info-search tool à la Google.
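To make that concrete, here's a toy sketch (in Python, with entirely made-up names and data) of the distinction: rather than looking up keywords, the machine scans its accumulated records for strong statistical relationships and offers them up as candidate hypotheses.

    # Toy "analytic theorizer": instead of matching keywords like a
    # search engine, scan a table of observations for strong
    # correlations and surface them as candidate hypotheses.
    # All variable names and data below are hypothetical.
    from itertools import combinations
    import statistics

    def pearson(xs, ys):
        """Plain Pearson correlation coefficient for two equal-length lists."""
        mx, my = statistics.mean(xs), statistics.mean(ys)
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        sx = sum((x - mx) ** 2 for x in xs) ** 0.5
        sy = sum((y - my) ** 2 for y in ys) ** 0.5
        return cov / (sx * sy) if sx and sy else 0.0

    def theorize(observations, threshold=0.9):
        """Return candidate 'theories': pairs of variables whose values
        correlate strongly across all recorded observations."""
        hypotheses = []
        for a, b in combinations(observations, 2):
            r = pearson(observations[a], observations[b])
            if abs(r) >= threshold:
                hypotheses.append((a, b, round(r, 3)))
        return hypotheses

    # Hypothetical sensor log: the more columns, the more possible theories.
    log = {
        "dilithium_temp": [10, 12, 15, 19, 24],
        "warp_field_flux": [101, 119, 152, 188, 241],
        "crew_coffee_intake": [5, 3, 6, 2, 4],
    }
    print(theorize(log))  # flags only the temp/flux pair as a candidate

Obviously real automated hypothesis generation is far more involved, but the core intuition survives even in a toy: enough data plus brute-force correlation yields candidate theories nobody asked for by name.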
To be sure, this isn't necessarily a new idea. There have been many science-fiction stories about all-knowing thinking machines. Among other iterations, there was "Cyclops" from David Brin's novel The Postman, which also featured a "data net" along the lines of what you're currently surfing.
But presumably, as we've gotten to know working PCs, our sense of what they're capable of has been scaled back to some extent. That's the best explanation I can think of for the less impressive way they were written in the TNG era. The computers didn't seem any dumber, by any means; they simply weren't asked that many theoretical questions, as I recall.
5 comments:
There is a theory that, in order to avoid the "Cylon problem" (or the M-5 problem), some vital elements of "thinking" were deliberately removed from the computers. After all, an AI (or semi-AI) can be hosted on them, as the Holodoc shows, but the main computer itself lacks such traits entirely.
I find that an interesting theory, but I seem to recall references to at least partial awareness in the ship's computer systems, which would cause problems for it.
As you've said, though, we've seen that Starfleet systems are quite capable of handling the data processes of a cybernetic intelligence (be it artificial or actual), so there must be some reason why the ship computers themselves, which are undoubtedly capable of vast intelligence, are not sentient, either in reality or artificially.
The computers might be limited to I/O functionality, i.e. "think only when specifically requested to do so."
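For what it's worth, that restriction is easy to picture. Below is a minimal sketch (Python again, all names hypothetical) of a computer limited to I/O plus on-request analysis: readings flow in and simply sit there, and no background process ever draws a conclusion until someone explicitly asks.

    # Sketch of "think only when specifically requested": analysis
    # routines exist, but nothing runs them autonomously. Every
    # inference must be pulled by an explicit query.
    class ShipComputer:
        def __init__(self):
            self._sensor_log = []          # data accumulates passively
            self._background_tasks = None  # deliberately absent: no daemon
                                           # ever initiates analysis

        def record(self, reading):
            """I/O only: store a reading, draw no conclusions from it."""
            self._sensor_log.append(reading)

        def analyze(self, question):
            """Inference happens here and only here, on explicit request."""
            if question == "max_reading":
                return max(self._sensor_log, default=None)
            raise ValueError(f"no analysis routine for {question!r}")

    computer = ShipComputer()
    for r in (3, 7, 5):
        computer.record(r)                  # the computer stays silent
    print(computer.analyze("max_reading"))  # thinking only when asked: 7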
I would think this is related, rather, to the ST canon against artificial intelligence, i.e. that man can't create anything higher than himself; this always leads to disaster in the plots, as with Khan (genetic enhancement) or the M-5 (a computer imprinted with human engrams).
In Star Wars, however, this seems to be a mechanical limitation, as when Obi-Wan says "if droids could think, none of us would be here."
But in ST this could be the case as well, as in "Descent, Part II"[TNG7], when Hugh says that Lore didn't have the slightest idea how to carry out his promise to create an artificial brain that would replace biological ones.
To be fair to TNG, those computers do, on a quite regular basis, answer questions that would take quite a bit of thought (or magic ;)) to handle.
They also seem intelligent enough never to have problems determining the context of humanoid actions (such as the oh-so-impossible way the doors work: they open only when you want to leave, not when you just happen to move close by; or the way questions asked of the computer are almost always interpreted as the character means them, not as he says them).
In "To the nth Power," Barclay-Enterprise was able to do a lot more than the Enterprise computer could by itself. In another episode, likewise, even small mechanical probes became protected sentient life-forms, so this would figure that the computer was specifically programmed to avoid taking on a mind of its own.
In comparison, about the only SW droid capable of independent thought is R2-D2, who seems to be the "Herbie" of 'droids, i.e. very unusual; he's said to be "a very well put-together 'droid," indicating that he's wired differently from the others. In Episode I, for example, he's able to devise his own method of bypassing the ship's power generator to get the shields back online.
In TNG, meanwhile, there seems to be a lot of inconsistency; for example, holograms frequently become sentient beings when something happens that wasn't planned (such as the program being heuristic or experimental).