Building a virtual universe is just an instrumental goal, meant to increase the chance of achieving the terminal goal by improving the effectiveness and efficiency of our actions. I discussed these goals in another thread about universal morality. Here I want to discuss the technical issues.

Efforts to virtualize objective reality have already started, but currently they are mostly partial, limited by location or function. Some examples are Google Maps, ERP software, online banking, e-commerce, Wikipedia, cryptocurrency, CAD software, and SCADA. Their lack of integration can lead to data duplication when different systems represent the same object from different points of view. When they are not updated simultaneously, they produce inconsistencies, which may lead to incorrect decision-making.
Quote from: hamdani yusuf on 22/09/2019 04:12:47
Building a virtual universe is just an instrumental goal, meant to increase the chance of achieving the terminal goal by improving the effectiveness and efficiency of our actions. I discussed these goals in another thread about universal morality. Here I want to discuss the technical issues. Efforts to virtualize objective reality have already started, but currently they are mostly partial, limited by location or function. Some examples are Google Maps, ERP software, online banking, e-commerce, Wikipedia, cryptocurrency, CAD software, and SCADA. Their lack of integration can lead to data duplication when different systems represent the same object from different points of view. When they are not updated simultaneously, they produce inconsistencies, which may lead to incorrect decision-making.

You are talking about disparate systems. They are also human-centric and not universe-centric.
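To make the inconsistency problem from the quoted post concrete, here is a minimal Python sketch (the systems and item names are hypothetical): two un-integrated systems each keep their own copy of the same object, and an update applied to only one of them leaves the copies contradicting each other.

erp_stock = {"item": "pump-A", "quantity": 40}   # copy held by an ERP system
shop_stock = {"item": "pump-A", "quantity": 40}  # copy held by a web shop

# A sale is recorded in the web shop, but the ERP copy is not updated yet.
shop_stock["quantity"] -= 5

if erp_stock["quantity"] != shop_stock["quantity"]:
    print("Inconsistent views of the same object:",
          erp_stock["quantity"], "vs", shop_stock["quantity"])
    # A decision based on the stale ERP figure (e.g. skipping a reorder)
    # can be wrong until the two copies are reconciled.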
Building a high-precision virtualization of objective reality takes a lot of resources in the form of data storage and communication bandwidth. Hence the system needs to maximize information density.
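As a rough illustration of information density, the following Python sketch (standard library only) compares the stored size of a highly redundant one-million-bit description with that of an incompressible one; it mirrors the thousand-bit-pattern example quoted below.

import os
import zlib

pattern = os.urandom(125)             # one 1,000-bit block
repetitive = pattern * 1000           # ~1,000,000 bits: the block repeated 1,000 times
incompressible = os.urandom(125_000)  # ~1,000,000 bits with no repeated structure

for label, data in (("repetitive", repetitive), ("incompressible", incompressible)):
    stored = len(zlib.compress(data, 9))
    print(f"{label}: {len(data):,} bytes raw, {stored:,} bytes stored "
          f"(density gain ~{len(data) / stored:.1f}x)")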
Not surprisingly, the concept of complexity is complex. One concept of complexity is the minimum amount of information required to represent a process. Let's say you have a design for a system (for example, a computer program or a computer-assisted design file for a computer), which can be described by a data file containing one million bits. We could say your design has a complexity of one million bits. But suppose we notice that the one million bits actually consist of a pattern of one thousand bits that is repeated one thousand times. We could note the repetitions, remove the repeated patterns, and express the entire design in just over one thousand bits, thereby reducing the size of the file by a factor of about one thousand.

The most popular data-compression techniques use similar methods of finding redundancy within information.[3] But after you've compressed a data file in this way, can you be absolutely certain that there are no other rules or methods that might be discovered that would enable you to express the file in even more compact terms? For example, suppose my file was simply "pi" (3.1415...) expressed to one million bits of precision. Most data-compression programs would fail to recognize this sequence and would not compress the million bits at all, since the bits in a binary expression of pi are effectively random and thus have no repeated pattern according to all tests of randomness.

But if we can determine that the file (or a portion of the file) in fact represents pi, we can easily express it (or that portion of it) very compactly as "pi to one million bits of accuracy." Since we can never be sure that we have not overlooked some even more compact representation of an information sequence, any amount of compression sets only an upper bound for the complexity of the information. Murray Gell-Mann provides one definition of complexity along these lines. He defines the "algorithmic information content" (AIC) of a set of information as "the length of the shortest program that will cause a standard universal computer to print out the string of bits and then halt."[4]

However, Gell-Mann's concept is not fully adequate. If we have a file with random information, it cannot be compressed. That observation is, in fact, a key criterion for determining if a sequence of numbers is truly random. However, if any random sequence will do for a particular design, then this information can be characterized by a simple instruction, such as "put random sequence of numbers here." So the random sequence, whether it's ten bits or one billion bits, does not represent a significant amount of complexity, because it is characterized by a simple instruction. This is the difference between a random sequence and an unpredictable sequence of information that has purpose.

To gain some further insight into the nature of complexity, consider the complexity of a rock. If we were to characterize all of the properties (precise location, angular momentum, spin, velocity, and so on) of every atom in the rock, we would have a vast amount of information. A one-kilogram (2.2-pound) rock has 10^25 atoms which, as I will discuss in the next chapter, can hold up to 10^27 bits of information. That's one hundred million billion times more information than the genetic code of a human (even without compressing the genetic code).[5] But for most common purposes, the bulk of this information is largely random and of little consequence. So we can characterize the rock for most purposes with far less information just by specifying its shape and the type of material of which it is made. Thus, it is reasonable to consider the complexity of an ordinary rock to be far less than that of a human even though the rock theoretically contains vast amounts of information.[6]

One concept of complexity is the minimum amount of meaningful, non-random, but unpredictable information needed to characterize a system or process.

In Gell-Mann's concept, the AIC of a million-bit random string would be about a million bits long. So I am adding to Gell-Mann's AIC concept the idea of replacing each random string with a simple instruction to "put random bits here."

However, even this is not sufficient. Another issue is raised by strings of arbitrary data, such as names and phone numbers in a phone book, or periodic measurements of radiation levels or temperature. Such data is not random, and data-compression methods will only succeed in reducing it to a small degree. Yet it does not represent complexity as that term is generally understood. It is just data. So we need another simple instruction to "put arbitrary data sequence here."

To summarize my proposed measure of the complexity of a set of information, we first consider its AIC as Gell-Mann has defined it. We then replace each random string with a simple instruction to insert a random string. We then do the same for arbitrary data strings. Now we have a measure of complexity that reasonably matches our intuition.
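The proposed measure can be sketched directly in code. The following Python sketch (standard library only) is only an upper bound and makes one large simplifying assumption: the random and arbitrary segments of a design are tagged by hand, since recognising them automatically is the hard part. It uses the zlib-compressed length as the upper bound on the AIC of the meaningful segments and charges a small fixed instruction cost for each random or arbitrary segment, as described above.

import os
import zlib

INSTRUCTION_COST = 16  # assumed size, in bytes, of a "put random/arbitrary data here" instruction

def proposed_complexity(segments):
    """Upper bound (in bytes) on the complexity measure described above."""
    total = 0
    for kind, data in segments:
        if kind == "literal":
            # Compressed length is an upper bound on the segment's
            # algorithmic information content (AIC).
            total += len(zlib.compress(data, 9))
        else:
            # "random" or "arbitrary" segments are replaced by a short instruction.
            total += INSTRUCTION_COST
    return total

design = [
    ("literal", b"control loop parameters " * 500),   # meaningful but redundant
    ("random", os.urandom(10_000)),                   # any random filler will do
    ("arbitrary", b"555-0101;555-0199;" * 200),       # plain data, e.g. a phone list
]

raw_size = sum(len(data) for _, data in design)
print(f"raw size: {raw_size:,} bytes, proposed complexity <= {proposed_complexity(design):,} bytes")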
There is a virtual universe in your head.
Kurzweil claims that technological progress follows a pattern of exponential growth, following what he calls the "law of accelerating returns". Whenever technology approaches a barrier, Kurzweil writes, new technologies will surmount it. He predicts paradigm shifts will become increasingly common, leading to "technological change so rapid and profound it represents a rupture in the fabric of human history". Kurzweil believes that the singularity will occur by approximately 2045. His predictions differ from Vinge's in that he predicts a gradual ascent to the singularity, rather than Vinge's rapidly self-improving superhuman intelligence.