
To program consciousness: reconciling the portrayal of AI in Trek

This is an indirect response to recent conversations that have centered on the question of artificial sentience, particularly in holograms. I’ve been reading DaystromInstitute for a while, but after being provoked into hours of thought and research, I decided to submit my first real long-form contribution. I hope it keeps that stimulating conversation going!
The development of artificial intelligence has been accelerating in our contemporary real world. Our machines can beat human masters of chess and Go, learn to play Super Mario World, drive cars, and—over at Google—they may (or may not) have just recently passed the Turing Test during some natural-language phone conversations. If the Federation represents a society extrapolated from our own, it is natural to ask what happens to AI in that future.
Before we dive in, I have to lay down a few disclaimers:
  1. The discussion I’m presenting is technical at times, but I hope it will still be accessible to most readers. I’m particularly interested in thoughts from any fellow coders who have more expertise in AI (I am but a lowly front-end developer).
  2. The source material is, for the purposes of detailed theorycrafting on this subject, sketchy—and that’s being kind. Terms such as “program,” “algorithm,” “circuitry,” “subroutine,” and “matrix” are injected into the dialogue with little regard for their significance in computer science.
For those two reasons, I feel we should try to keep our technical arguments as high-level as possible, when possible. Instead of quibbling over the technobabble used in a particular scene, my goal is to focus on the overall state of attitudes, capabilities, and policies used to portray our artificial friends in the Federation. From scheming computer simulations to androids that cannot use simple contractions, we see a highly inconsistent variety of Federation AI in Star Trek. How are we to make sense of this from an in-universe perspective? I’ll also examine how our understanding of AI and machine learning today might fit into those portrayals.

Shouldn’t Skynet Be Killing Us By Now?

“The word you're looking for is ‘unnatural,’ meaning not from nature. ‘Freak’ or ‘monster’ would also be acceptable.”
- Julian Bashir, on human genetic modification (DS9 “Dr. Bashir, I Presume”)

I’ve read several comments here that suggest Strong AI—that is, a machine that is conscious and can think just like a human—isn’t really that hard, or at least, that it seems inevitable given the pace of our computational progress. Regardless of your position there, the relevant question here in Daystrom is: does the Federation get there by the mid-24th century?
We have a few data points we can use to map a potential trajectory.
Starting with today, we already have algorithms that can parse natural language, so the bountiful examples of verbal interaction with the ship’s computer, for instance, don’t seem far-fetched at all. We’re only just starting to figure out, however, what it takes for an AI to do more than respond to a single inquiry. (More on this in a bit.) One of the largest challenges—one that some computer scientists feel is nigh insurmountable—is for an AI to be able to understand, to attach meaning to its data and represent it as knowledge.
It’s clear to me that, by the time we approach TOS, Starfleet computers are capable of this form of sapient AI. Through a natural language exchange, Michael Burnham convinces her ship’s computer to change its mind and assist her in escaping confinement (DIS “Battle at the Binary Stars”). The computer has to understand her reasoning, follow her logic over several statements, and come to an ethical conclusion based on situational context.
Curiously, we don’t see this depth of understanding from any other Starfleet computer from that point on. What gives?
I propose the answer might lie with this sub’s namesake: Dr. Richard Daystrom and his M-5 computer. During a disastrous turn of events, Daystrom has a semantic argument very similar to Burnham’s, a desperate attempt to convince the machine to alter its behavior. Ultimately, Kirk takes over and concludes the argument successfully by leading M-5 to understand the consequences of its actions and to take responsibility for them (TOS “The Ultimate Computer”). It seems reasonable to conclude that the Federation, reeling from the complete destruction of a Starfleet ship and crew at the hands of a murderous sapient computer, found impetus to establish restrictions on the development of Strong AI. To create an intelligence, let alone a living consciousness, that could have such consequential agency over the lives of its citizens would be an unethical act in the eyes of the Federation, given the risks.
I therefore argue that most of the Federation computers we see, starting with TOS, exhibit limited AI not because of a lack of technological progress, but because the potential for harmful or ethically questionable consequences (like those of eugenics, on which the Federation takes a similar stance) far outweighs the potential benefits.
It’s a fun exercise to revisit scenes involving computer interaction with this in mind. For example, one of the first times we hear the computer of the Enterprise-D, Riker seems flummoxed at its genteel manner (TNG “Encounter at Farpoint”).
COMPUTER: The next hatchway on your right.
RIKER: Thank you.
COMPUTER: You're welcome, Commander Riker. And if you care to enter, Commander?
RIKER: I do.
There’s a tone of impatience in his voice. I can imagine Riker internally rolling his eyes, wondering what historically ignorant engineer thought it would be a good idea to give the computer such a personality—not because it wasn’t useful or wonderful or advanced, but because it was distasteful. (It would be analogous to a Federation doctor advertising that babies delivered in their practice grow up to have higher IQs—not eugenics exactly, but probably not the PR you want.) In my headcanon, this is the reason the Enterprise computer is later changed to the simpler, dispassionate verbal interface we all know and love.

Mad Science? More Like Mad Props

“Lal may be a technological step forward in the development of artificial intelligence.”
- Anthony Haftel (TNG “The Offspring”)

The problem with this thesis is that we see Federation scientists either pursuing the development of Strong AI outright or willfully disregarding the ethical concerns around its potential creation. We could discuss exocomps (TNG “The Quality of Life”), renegade missile guidance systems (VOY “Dreadnought”), or hell, the simple fact that a holodeck program autonomously generated a self-aware hologram due to a slip of the tongue (TNG “Elementary, Dear Data”). But I would be woefully remiss, of course, if I didn’t address the 100-kilo android in the room.
Of all the attempts to create an artificial human-like intelligence, Noonien Soong’s was the most overt. This does not necessarily counter my hypothesis; we can imagine Dr. Soong had an overriding motivation to ignore Federation ethical regulations. And we can reason similarly with Torres, who wasn’t a fan of the Federation at the time, or with Dr. Farallon, whose life’s work would be jeopardized by such restrictions. Indeed, any one individual with enough self-justification might decide not to heed the potential hazards of unleashing fully sentient robot overlords.
But even beyond one mad scientist’s zeal, the reactions we see suggest that their colleagues appreciate the technology and thank them for it. Riker becomes so smitten with a realistic hologram that he falls into despair when he loses the chance to be with her (TNG “11001001”). Pioneers such as Soong, Daystrom, and Ira Graves are highly lauded, and the last two are even recognized with institutional prizes for their work. And the question I wrestled with most of all: if the Federation had put restrictions on the development of potentially sentient AI, why is Bruce Maddox’s later ambition to duplicate Data met (at first) with general enthusiasm and interest? (TNG “The Measure of a Man”)
Initially, I felt like these observations unravelled my theory. The problem would still remain, though, of how to reconcile the highly variable portrayal of AI across the Star Trek corpus from an in-universe perspective. Should we just throw in the towel and chalk it up to what the writers understood of computation at the time?
No! This is DaystromInstitute! I eventually realized I wasn’t getting nerdy enough. Looking at what we know today about AI and machine learning might offer some possible solutions. And because I delight in shameless nerdery, I must plead for your indulgence as I take a brief foray into real-world computer science.

A Rather Simpler Summary of Machine Learning

“For me, it's rather simple. While I'm faced with a decision, my program calculates the variables, and I take action.”
- The Doctor (VOY “Latent Image”)

Indeed, “machine learning” is the hot, trendy term, and for good reason: it’s the current approach giving us the most effective results in AI today. I’ll try not to get more technical than is needed to inform my points, and if you’d rather, you can just skip to the last paragraph of this section. But here it is in a nutshell:
Most computer programs involve receiving an input and providing an output. If we have an input x and an output y, it is almost trivial to write a computer program that takes x, runs it through an equation—let’s say “2x+6”—and spits out the output. Given an input of 2, we get 10 as the output. Importantly, we know what we’re doing when we specify “2x+6” as the function; we know what a linear algebraic expression is, what it means mathematically, and how it can be applied to real problems.
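For my fellow coders, here’s that example as an actual program. (I’ll use Python for these sketches purely for brevity; nothing about the language choice matters.) The point is that we wrote the rule ourselves, so we know exactly what it means:

    # An ordinary, hand-written program: input in, output out.
    # We chose the rule "2x + 6" ourselves, so we know exactly
    # what it means and why it gives the answers it does.
    def f(x):
        return 2 * x + 6

    print(f(2))  # prints 10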
In machine learning, we replace a simple mathematical function with a much more complicated framework: a neural network. As you might imagine, it involves lots of connections and many variables (and if you can’t, here’s a diagram), but it ultimately still takes an input and gives an output. For example, if I am texting on my phone, there’s an autocorrect program running that takes every word I type as an input and outputs suggestions or corrections. If I input the word “potsto,” this program might give the output “potato.”
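To give a feel for what “lots of connections and many variables” looks like in code, here’s a deliberately tiny sketch. Every weight below is a made-up placeholder (a real autocorrect model would have millions of learned values, not a handful of hand-picked ones), but the structure of numbers flowing through layers of weighted connections is the real idea:

    import math

    # A toy network: 2 inputs -> 2 hidden units -> 1 output.
    # All weights and biases are arbitrary placeholders for illustration.
    W1 = [[0.5, -0.3],   # connections feeding hidden unit 0
          [0.8,  0.1]]   # connections feeding hidden unit 1
    b1 = [0.0, 0.2]      # hidden-unit biases
    W2 = [1.2, -0.7]     # connections feeding the single output
    b2 = 0.1

    def forward(inputs):
        hidden = [math.tanh(sum(w * x for w, x in zip(row, inputs)) + b)
                  for row, b in zip(W1, b1)]
        return sum(w * h for w, h in zip(W2, hidden)) + b2

    print(forward([1.0, 0.0]))  # some number; this network is untrained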
Because neural networks can be set up to be very complex, they are capable of taking almost anything we can represent digitally as input! We can throw entire sentences, images, or sounds at them and train the neural network by specifying what we expect as output. With recent advances in recurrent neural networks (RNNs), we can also set the process to feed back onto itself so that outputs can be included with the inputs, allowing the network to have a “memory” of sorts, based on what it’s processed so far. (Looks a bit like this; note the loopy arrows.)
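As a rough sketch of those loopy arrows (again with placeholder weights, not anything from a real model): the network keeps a hidden state that gets mixed back in with each new input, so every step depends on everything that came before it.

    import math

    # Toy recurrent step: the previous hidden state is fed back in
    # alongside the current input, giving the network a crude "memory".
    W_in, W_rec, bias = 0.7, 0.4, 0.0   # placeholder weights

    def rnn_step(x, prev_hidden):
        return math.tanh(W_in * x + W_rec * prev_hidden + bias)

    hidden = 0.0
    for x in [1.0, 0.5, -0.2]:          # a short "sequence" of inputs
        hidden = rnn_step(x, hidden)    # each step sees the history via hidden
    print(hidden)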
An important point, for our discussion, is how these neural networks are programmed. Instead of a human programming each connection (and there might be a lot of them!), we set the program to train itself over and over on a set of inputs that are tested against their “right answers,” which are provided up front. At first, all the connections are essentially random and the system performs very poorly. After each trial, the program adjusts the variables in the network so that next time, it’s closer to getting a right answer. After a ton of testing across a wide variety of inputs, the program will have arranged its neural connections so that given a brand new input (the word “ptotao,” perhaps), it still provides the expected output (“potato”).
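Here’s that train-by-trial-and-error loop in miniature. It’s a hedged toy, a single adjustable “neuron” rather than a real network, but the spirit is the same: start from random parameters, compare each guess to the right answer, and nudge the parameters so the next guess is a little less wrong. Given only examples generated from our “2x+6” rule, it rediscovers values close to 2 and 6 on its own:

    import random

    # Training data: inputs paired with their "right answers" from y = 2x + 6.
    examples = [(x, 2 * x + 6) for x in range(-10, 11)]

    # Start from random parameters; at this point the model is useless.
    w, b = random.uniform(-1, 1), random.uniform(-1, 1)
    rate = 0.01                            # how big each adjustment is

    for _ in range(5000):                  # many, many trials
        x, target = random.choice(examples)
        guess = w * x + b
        error = guess - target
        w -= rate * error * x              # nudge each parameter so the
        b -= rate * error                  # next guess is a little closer

    print(round(w, 2), round(b, 2))        # ends up near 2 and 6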
For a slightly more thorough (but still accessible) explanation, I recommend this 9-minute CGP Grey video (and its more relevant follow-up). For the mathematically-inclined, this 20-minute overview by 3Blue1Brown is a popular reference. There are also some fun examples of RNN-generated outputs in this blog post by researcher Andrej Karpathy.
So here’s the kicker: we have no idea how to describe what is happening between the input and the output in a machine-taught neural network. We didn’t program it; the computer did. Sure, we can see what values the variables are set to, but unlike “2x+6,” we have no concept of what those variables, or the state of the network as a whole, actually mean. We know what inputs to give it, what to expect as outputs, and we can give it a helpful label so we know what it does (e.g. “autocorrect algorithm”), but we don’t know how it does what it does. Given only the code and variables of a neural network, it would be practically impossible to tell what it does until we actually ran the program.
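To make that concrete with an entirely fictional example: once trained, all the “source code” a network hands back is a pile of numbers. We can print them, but nothing in them announces what they are for.

    # Everything a trained network "knows" lives in numbers like these.
    # (These particular values are invented for illustration.)
    trained_weights = {
        "layer_1": [[0.1374, -0.8821, 0.0093],
                    [0.5510,  0.2047, -0.3308]],
        "layer_2": [[-1.0422, 0.7719]],
    }
    # Nothing here says "I fix typos." We only know to call it "autocorrect"
    # because we watched what went in and what came out during training.
    # Contrast this with 2x + 6, where the 2 and the 6 have obvious meaning.
    print(trained_weights["layer_1"][0])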

The Known Unknown

“Complex systems can sometimes behave in ways that are entirely unpredictable. The human brain, for example, might be described in terms of cellular functions and neurochemical interactions. But that description does not explain human consciousness, a capacity that far exceeds simple neural functions. Consciousness is an emergent property.”
- Data (TNG “Emergence”)

If we assume that Federation computers are still programmed in a fashion similar to how we program our computers today (and I do not pretend there isn’t room to argue otherwise), then it is reasonable to expect that machine learning with neural networks continues to progress to the point where we can have conversations, debates, and natural social interaction with our computers. Whatever advantages “duotronic” or “isolinear” circuitry provide in computational speed and memory, we can imagine that they allow for a neural network complex enough to understand the meaning of its inputs and outputs: sapient-seeming AI.
But given the machine learning paradigm, this means that we do not, and cannot, know the detailed workings of these networks. We understand the mechanisms but not the meaning. Does the state of such a neural network mean that the computer really does understand the meaning of its functions, or does it only simulate true sapience?
If we presume that neural networks are the basis for holographic AI, we find similar ambiguity surrounding the same kinds of questions. We see this with Moriarty…
PICARD: We spent some time investigating how you became self-aware. Frankly, it still remains a mystery. (TNG “Ship in a Bottle”)
… and, of course, with the Doctor:
ARBITRATOR: The Doctor exhibits many of the traits we associate with a person. Intelligence, creativity, ambition, even fallibility. But are these traits real, or is the Doctor merely programmed to simulate them? To be honest, I don't know. (VOY “Author, Author”)
These lines, on their surface, seem like the writers copped out and didn’t want to take a hard stance on a very technical issue. But given the “unknowability” aspect of implementing a neural network, these comments take on new weight. If Federation computer scientists have no way to measure when the line between a normal computer program and an artificial consciousness is being crossed, it makes sense from an ethical standpoint that the Federation, as a society, should simply avoid ever getting close.
If the Federation has ethical restrictions on these algorithms, we can imagine that they would need to be specific: perhaps neural networks beyond a certain complexity are banned, or the amount of memory an algorithm can use is limited. These constraints might even be worked into the hardware. In instances where we see holograms that seem to be self-aware, even conscious, there are clues as to how they may have got around these restrictions.
Exhibit A: The self-aware Minuet is programmed by the Bynars (or it may be more accurate to say that the Bynars programmed the computer to program Minuet). As extreme computer experts, they may have developed a method for programming holographic AI that produced sentient-seeming characters while nevertheless staying within the letter of Federation law and technical restrictions. When Riker comments on how “real” she seems, it’s not just because he’s surprised a holodeck simulation is capable of it, but because it is capable within the imposed limitations of Federation protocol. I propose that the Bynars’ method represents an advance in holographic AI that becomes more widely implemented, one that is accidentally triggered by LaForge in “Elementary, Dear Data,” and that is later used for…
Exhibit B: Voyager’s EMH is allowed to stay running for much longer than a normal hologram, allowing the neural network more time to process, and memory expansions to his program are meted out over the years. This non-standard treatment may have pushed his program into an edge case of Federation protocols, effectively breaking them. As a side note, we learn that hologram AI is modular:
EMH: How much has to be left behind?
SEVEN: Twelve megaquads.
EMH: I suppose you could get rid of my athletic abilities and my grand master chess program.
SEVEN: That leaves three megaquads. Your painting skills?
EMH: Oh, if you must. (VOY “Life Line”)
Exhibits C and D: Vic Fontaine and Dr. Lewis Zimmerman’s assistant Haley are apparently both programmed to be self-aware by design. Either the Federation ethics board didn’t mind that there were loopholes in its restrictions, or perhaps the ethics themselves changed with the times. The circumstances in which we see these two holograms, the Dominion War and a terminal illness, are both sources of emotional trauma. It’s possible that these two are judged to provide mental health benefits that outweigh any esoteric moral discomfort around the potential consciousness of the holograms.

Everything is a Social Construct Anyway

“I have brought a new life into this world, and it is my duty, not Starfleet's, to guide her through these first difficult steps to maturity, to support her as she learns, to prepare her to be a contributing member of society.”
- Data, on his android daughter Lal (TNG “The Offspring”)

So what about all the other artificial life forms that have arisen within the Federation aside from holograms? We can look to exocomps, or whatever cyber-shenanigans Ira Graves was up to, but once again, the Soong-type androids give us a good example to study. It seems that this hypothetical restriction on Strong AI doesn’t apply to them, so why not?
First of all, the medium in which they are formed is probably entirely different from your typical 24th century computer. The positronic brain is similar enough to a real human brain that it can host a human consciousness (TNG “The Schizoid Man”, “Power Play”), as well as be read by an empath (TNG “The Offspring”, “Descent”). The precise format of the machine (about which we could conjecture endlessly) may simply be too exotic to have been included in Federation rules. We also have no idea what “programming” on such a platform actually entails. It is possible that artificial brains are somehow easier to directly program compared to the neural networks we use in computational AI:
KORBY: Can you understand that a human converted to an android can be programmed for the better? Can you imagine how life could be improved if we could do away with jealousy, greed, hate? (TOS “What Are Little Girls Made Of?”)
Secondly—and this should have become obvious to me far sooner than it did—there is the practical matter of power and agency. By their nature, computers are depended upon to run systems, sometimes very powerful or critically important systems. An android, though superhuman, is still a single individual who fits into its society as such. We must accept that individuals are fallible; the control system for a starship, however... maybe we’d like a little more infallibility there, please?
One may also argue similarly for holograms, if we observe that they are not designed to control mission-critical systems but are most commonly used for entertainment. Holodecks have been described as having independent hardware (VOY “Parallax”), so maybe the same is true of their software, through virtual-machine-like sandboxing. (Or at least, perhaps that precaution was developed after a renegade hologram took complete control of a starship … twice.) This would make sense, given the apparent modularity and portability of holograms. Nevertheless, with holodeck safeties failing every other day, it would be prudent to have both policies and design philosophies in place to avoid inadvertently creating artificial life on your lunch break. Such philosophies may also help explain why it takes Janeway roughly seven years longer than Picard to recognize the sentient AI aboard her ship, the subject of a recent Trekspertise episode.
In the end, the Federation must balance its pursuit of knowledge and its search for new life with its ethical obligations to all life, natural or artificial. For my proposed restriction on AI research to serve as a valid solution, I believe it would need to be conservative and limited in scope. The ultimate goal is for individuals of all origins—even man-made ones—to be duly recognized with fundamental rights and freedoms, for computers to remain sophisticated tools in the hands of their masters, and to maintain a strong (albeit sometimes fuzzy) dividing line between the two.
If you made it all the way through that, I appreciate your interest and attention! I hope to hear some feedback: do you think these ideas hold water? What examples or conclusions did I miss? (I imagine there are some big ones!) And are there other ways to reconcile the inconsistent portrayal of AI from an in-universe perspective?
submitted by ikidre to DaystromInstitute
