Building a virtual universe is just an instrumental goal, meant to increase the chance of achieving the terminal goal by improving the effectiveness and efficiency of our actions. I discussed these goals in another thread about universal morality. Here I want to discuss the technical issues in more detail.

Efforts to virtualize objective reality have already started, but currently they are mostly partial, either by location or by function. Some examples are Google Maps, ERP software, online banking, e-commerce, Wikipedia, cryptocurrencies, CAD software, and SCADA. Their lack of integration can lead to data duplication when different systems represent the same object from different points of view. When the copies are not updated simultaneously, they become inconsistent, which may lead to incorrect decisions.
Quote from: hamdani yusuf on 22/09/2019 04:12:47
"Their lack of integration can lead to data duplication when different systems represent the same object from different points of view. When the copies are not updated simultaneously, they become inconsistent, which may lead to incorrect decisions."
You are talking about disparate systems. They are also human-centric rather than universe-centric.
Building a virtualization of objective reality at high precision takes a lot of resources in the form of data storage and communication bandwidth. Hence the system needs to maximize information density.
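Below is a minimal Python sketch of the inconsistency problem described above. The systems and record fields are entirely hypothetical (an ERP entry and a SCADA entry for the same asset); the point is just that when two disparate systems each hold their own copy of one object and are not updated simultaneously, a simple reconciliation pass will find fields that disagree.

Code: [Select]
# A minimal sketch (hypothetical record contents, not from the post above) of the
# inconsistency problem: two disparate systems each hold their own copy of the same
# real-world object, and a reconciliation pass flags the fields that have drifted
# apart because the copies were not updated simultaneously.

erp_record   = {"asset_id": "PUMP-101", "location": "Plant A", "status": "running"}
scada_record = {"asset_id": "PUMP-101", "location": "Plant A", "status": "stopped"}

def find_inconsistencies(a: dict, b: dict) -> dict:
    """Return the shared fields whose values disagree between two copies of one object."""
    return {k: (a[k], b[k]) for k in a.keys() & b.keys() if a[k] != b[k]}

print(find_inconsistencies(erp_record, scada_record))
# {'status': ('running', 'stopped')} -> a decision based on only one copy could be wrong

In a fully integrated virtual universe there would be a single canonical record per object, so checks like this would not be needed in the first place.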
Not surprisingly, the concept of complexity is complex. One concept of complexity is the minimum amount of information required to represent a process. Let's say you have a design for a system (for example, a computer program or a computer-assisted design file for a computer), which can be described by a data file containing one million bits. We could say your design has a complexity of one million bits. But suppose we notice that the one million bits actually consist of a pattern of one thousand bits that is repeated one thousand times. We could note the repetitions, remove the repeated patterns, and express the entire design in just over one thousand bits, thereby reducing the size of the file by a factor of about one thousand.

The most popular data-compression techniques use similar methods of finding redundancy within information.[3] But after you've compressed a data file in this way, can you be absolutely certain that there are no other rules or methods that might be discovered that would enable you to express the file in even more compact terms? For example, suppose my file was simply "pi" (3.1415...) expressed to one million bits of precision. Most data-compression programs would fail to recognize this sequence and would not compress the million bits at all, since the bits in a binary expression of pi are effectively random and thus have no repeated pattern according to all tests of randomness. But if we can determine that the file (or a portion of the file) in fact represents pi, we can easily express it (or that portion of it) very compactly as "pi to one million bits of accuracy." Since we can never be sure that we have not overlooked some even more compact representation of an information sequence, any amount of compression sets only an upper bound for the complexity of the information. Murray Gell-Mann provides one definition of complexity along these lines. He defines the "algorithmic information content" (AIC) of a set of information as "the length of the shortest program that will cause a standard universal computer to print out the string of bits and then halt."[4]

However, Gell-Mann's concept is not fully adequate. If we have a file with random information, it cannot be compressed. That observation is, in fact, a key criterion for determining if a sequence of numbers is truly random. However, if any random sequence will do for a particular design, then this information can be characterized by a simple instruction, such as "put random sequence of numbers here." So the random sequence, whether it's ten bits or one billion bits, does not represent a significant amount of complexity, because it is characterized by a simple instruction. This is the difference between a random sequence and an unpredictable sequence of information that has purpose.

To gain some further insight into the nature of complexity, consider the complexity of a rock. If we were to characterize all of the properties (precise location, angular momentum, spin, velocity, and so on) of every atom in the rock, we would have a vast amount of information. A one-kilogram (2.2-pound) rock has 10^25 atoms which, as I will discuss in the next chapter, can hold up to 10^27 bits of information. That's one hundred million billion times more information than the genetic code of a human (even without compressing the genetic code).[5] But for most common purposes, the bulk of this information is largely random and of little consequence.

So we can characterize the rock for most purposes with far less information just by specifying its shape and the type of material of which it is made. Thus, it is reasonable to consider the complexity of an ordinary rock to be far less than that of a human even though the rock theoretically contains vast amounts of information.[6]

One concept of complexity is the minimum amount of meaningful, non-random, but unpredictable information needed to characterize a system or process.

In Gell-Mann's concept, the AIC of a million-bit random string would be about a million bits long. So I am adding to Gell-Mann's AIC concept the idea of replacing each random string with a simple instruction to "put random bits" here.

However, even this is not sufficient. Another issue is raised by strings of arbitrary data, such as names and phone numbers in a phone book, or periodic measurements of radiation levels or temperature. Such data is not random, and data-compression methods will only succeed in reducing it to a small degree. Yet it does not represent complexity as that term is generally understood. It is just data. So we need another simple instruction to "put arbitrary data sequence" here.

To summarize my proposed measure of the complexity of a set of information, we first consider its AIC as Gell-Mann has defined it. We then replace each random string with a simple instruction to insert a random string. We then do the same for arbitrary data strings. Now we have a measure of complexity that reasonably matches our intuition.
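To make the "upper bound" point concrete, here is a rough Python sketch (my own illustration, not Kurzweil's): a file built from a 1,000-bit pattern repeated 1,000 times compresses by orders of magnitude, while a random file of the same length barely compresses at all, even though under the proposed measure the random file could be summarized by the short instruction "put a random sequence of this length here".

Code: [Select]
import os
import zlib

pattern     = os.urandom(125)       # a 1,000-bit pattern...
repetitive  = pattern * 1000        # ...repeated 1,000 times (about one million bits)
random_bits = os.urandom(125_000)   # about one million genuinely random bits

for label, data in [("repetitive", repetitive), ("random", random_bits)]:
    compressed = zlib.compress(data, level=9)
    # The compressed size is only an UPPER BOUND on the true complexity of the data.
    print(f"{label}: {len(data):,} bytes -> {len(compressed):,} bytes compressed")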
There is a virtual universe in your head.
Kurzweil claims that technological progress follows a pattern of exponential growth, following what he calls the "law of accelerating returns". Whenever technology approaches a barrier, Kurzweil writes, new technologies will surmount it. He predicts paradigm shifts will become increasingly common, leading to "technological change so rapid and profound it represents a rupture in the fabric of human history".[39] Kurzweil believes that the singularity will occur by approximately 2045.[40] His predictions differ from Vinge's in that he predicts a gradual ascent to the singularity, rather than Vinge's rapidly self-improving superhuman intelligence.
Artificial intelligence won't be very smart if computers don't grasp cause and effect. That's something even humans have trouble with.

In less than a decade, computers have become extremely good at diagnosing diseases, translating languages, and transcribing speech. They can outplay humans at complicated strategy games, create photorealistic images, and suggest useful replies to your emails. Yet despite these impressive achievements, artificial intelligence has glaring weaknesses.

Machine-learning systems can be duped or confounded by situations they haven't seen before. A self-driving car gets flummoxed by a scenario that a human driver could handle easily. An AI system laboriously trained to carry out one task (identifying cats, say) has to be taught all over again to do something else (identifying dogs). In the process, it's liable to lose some of the expertise it had in the original task. Computer scientists call this problem "catastrophic forgetting."

These shortcomings have something in common: they exist because AI systems don't understand causation. They see that some events are associated with other events, but they don't ascertain which things directly make other things happen. It's as if you knew that the presence of clouds made rain likelier, but you didn't know clouds caused rain.
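Here is a toy numerical example of "association without causation", with made-up numbers rather than anything from the article: a hidden common cause drives both "clouds" and "rain", so the two variables are almost perfectly correlated, yet setting the "clouds" variable directly does nothing to "rain".

Code: [Select]
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

z = rng.normal(size=n)             # hidden common cause (an approaching weather front)
x = z + 0.1 * rng.normal(size=n)   # "clouds": driven by z
y = z + 0.1 * rng.normal(size=n)   # "rain":   also driven by z, never by x

# Strong observed association between clouds and rain...
print("correlation(x, y)    =", round(float(np.corrcoef(x, y)[0, 1]), 3))     # ~0.99

# ...but intervening to set x directly, independent of z, leaves y untouched,
# because in this model y has no causal dependence on x.
x_do = rng.normal(size=n)
print("correlation(x_do, y) =", round(float(np.corrcoef(x_do, y)[0, 1]), 3))  # ~0.0

A system that only learns the observed correlation would wrongly expect that manipulating x changes y; distinguishing the two cases is exactly the causal knowledge the article says current AI lacks.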
"AI can't be truly intelligent until it has a rich understanding of cause and effect, which would enable the introspection that is at the core of cognition." – Judea Pearl
COBOL, a 60-year-old computer language, is in the COVID-19 spotlight

As state governments seek to fix overwhelmed unemployment benefit systems, they need programmers skilled in a language that was passé by the early 1980s. Some states have found themselves in need of people who know a 60-year-old programming language called COBOL to retrofit the antiquated government systems now struggling to process the deluge of unemployment claims brought by the coronavirus crisis. The states of Kansas, New Jersey, and Connecticut all experienced technical meltdowns after a stunning 6.6 million Americans filed for unemployment benefits last week.

They might not have an easy time finding the programmers they need. There just aren't that many people around these days who know COBOL, or Common Business-Oriented Language. Most universities stopped teaching the language back in the 1980s, and COBOL is considered a relic by younger coders.

"There's really no good reason to learn COBOL today, and there was really no good reason to learn it 20 years ago," says UCLA computer science professor Peter Reiher. "Most students today wouldn't have ever even heard of COBOL."

Meanwhile, because many banks, large companies, and government agencies still use the language in their legacy systems, there's plenty of demand for COBOL programmers. A search for "COBOL Developer" returned 568 jobs on Indeed.com. COBOL developers make anywhere from $40 to more than $100 per hour.

Kansas governor Laura Kelly said the Kansas Department of Labor was in the process of migrating systems from COBOL to a newer language, but that the effort was postponed by the virus. New Jersey governor Phil Murphy wondered why such an old language was being used on vital state government systems, and classed it with the many weaknesses in government systems the virus has revealed.

The truth is, organizations often hesitate to change those old systems because they still work, and migrating to new systems is expensive. Massive upgrades also involve writing new code, which may contain bugs, Reiher says. In the worst-case scenario, bugs might cause the loss of customer financial data being moved from the old system to the new.

IT STILL WORKS (MOSTLY)

COBOL, though ancient, is still considered stable and reliable, at least under normal conditions. The current glitches with state unemployment systems are "probably not a specific flaw in the COBOL language or in the underlying implementation," Reiher says. The problem is more likely that some states are asking their computer systems to work with data on a far higher scale, he said, and making the systems do things they've never been asked to do.

COBOL was developed in the early 1960s by computer scientists from universities, mainframe manufacturers, the defense and banking industries, and government. Based on ideas developed by programming pioneer Grace Hopper, it was driven by the need for a language that could run on a variety of different kinds of mainframes. "It was developed to do specific kinds of things like inventory and payroll and accounts receivable," Reiher told me. "It was widely used in the 1960s by a lot of banks and government agencies when they first started automating their systems."

Here in the 21st century, COBOL is still quietly doing those kinds of things. Millions of lines of COBOL code still run on mainframes used in banks and a number of government agencies, including the Department of Veterans Affairs, Department of Justice, and Social Security Administration. A 2017 Reuters report said 43% of banking systems still use COBOL.

But the move to newer languages such as Java, C, and Python is making its way through industries of all sorts, and those languages will eventually power the new systems used by banks and government. One key reason for the migration is that mobile platforms use newer languages, and they rely on tight integration with underlying systems to work the way users expect.

The coronavirus will be a catalyst for a lot of changes in the coming years, some good, some bad. The migration away from the programming languages of another era may be one of the good ones.
The better we can predict, the better we can prevent and pre-empt. As you can see, with neural networks, we’re moving towards a world of fewer surprises. Not zero surprises, just marginally fewer. We’re also moving toward a world of smarter agents that combine neural networks with other algorithms like reinforcement learning to attain goals.
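For readers unfamiliar with how such agents "attain goals", here is a minimal, self-contained sketch of the reinforcement-learning part: a tiny tabular Q-learning agent on a made-up five-state corridor with a reward at one end. Everything about the environment is invented for illustration; real systems replace the value table with a neural network.

Code: [Select]
import random

N_STATES, GOAL = 5, 4
ACTIONS = [-1, +1]                            # step left or step right
q = [[0.0, 0.0] for _ in range(N_STATES)]     # value table; deep RL uses a network here
alpha, gamma, epsilon = 0.5, 0.9, 0.1

for episode in range(500):
    s = 0
    while s != GOAL:
        # Mostly exploit the current estimates, occasionally explore at random.
        a = random.randrange(2) if random.random() < epsilon else q[s].index(max(q[s]))
        s_next = min(max(s + ACTIONS[a], 0), N_STATES - 1)
        reward = 1.0 if s_next == GOAL else 0.0
        # Standard Q-learning update toward the reward plus discounted future value.
        q[s][a] += alpha * (reward + gamma * max(q[s_next]) - q[s][a])
        s = s_next

print([round(max(row), 2) for row in q])      # learned values rise toward the goal state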
In some circles, neural networks are thought of as “brute force” AI, because they start with a blank slate and hammer their way through to an accurate model. They are effective, but to some eyes inefficient in their approach to modeling, which can’t make assumptions about functional dependencies between output and input.That said, gradient descent is not recombining every weight with every other to find the best match – its method of pathfinding shrinks the relevant weight space, and therefore the number of updates and required computation, by many orders of magnitude. Moreover, algorithms such as Hinton’s capsule networks require far fewer instances of data to converge on an accurate model; that is, present research has the potential to resolve the brute force nature of deep learning.
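As a small illustration of that last point, the sketch below (a toy linear model with made-up data, not anything from the quoted text) shows that gradient descent never enumerates weight combinations: each update moves all the weights at once along the local gradient of the error, starting from a blank slate of zeros.

Code: [Select]
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 3))                 # made-up inputs
true_w = np.array([2.0, -1.0, 0.5])
y = X @ true_w + 0.01 * rng.normal(size=200)  # made-up targets

w = np.zeros(3)                               # the "blank slate" starting point
lr = 0.1
for step in range(200):
    grad = 2 * X.T @ (X @ w - y) / len(y)     # gradient of the mean squared error
    w -= lr * grad                            # one step along the gradient; no exhaustive search

print(np.round(w, 3))                         # ends up near [ 2. -1.  0.5]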