Naked Science Forum

On the Lighter Side => New Theories => Topic started by: hamdani yusuf on 21/09/2019 09:50:36

Title: How close are we from building a virtual universe?
Post by: hamdani yusuf on 21/09/2019 09:50:36
This thread is another spinoff from my earlier thread called universal utopia. This time I try to attack the problem from another angle: the information-theory point of view.
I have started another thread related to this subject, asking about the quantification of accuracy and precision. It is necessary for us to be able to compare the available methods for describing some aspect of objective reality, and to choose the best option based on cost-benefit considerations. I thought it was already common knowledge, but the course of the discussion showed it wasn't the case. I guess I'll have to build a new theory for that. It's unfortunate that the thread has been removed, so new forum members can't explore how the discussion developed.
In my professional job, I deal with process control and automation, and the engineering and maintenance of electrical and instrumentation systems. It's important for us to explore leading technologies and use them to our advantage to survive the fierce industrial competition of Industry 4.0. One of the technologies most closely related to this thread is the digital twin.
https://en.m.wikipedia.org/wiki/Digital_twin

Just like my other spinoff discussing universal morality, which can be reached by expanding the groups that develop their own subjective morality to the maximum extent permitted by logic, here I also try to expand the scope of the virtualization of real-world objects, like the digital twin in the industrial sector, to cover other fields as well. Hopefully it will lead us to global governance, because all conscious beings known today share the same planet. In the future the scope will need to expand even further, because the exploration of other planets and solar systems is already under way.

Title: Re: How close are we from building a virtual universe?
Post by: jeffreyH on 21/09/2019 11:06:32
How detailed should the virtual universe be? Does it only include the observable universe? Depending upon the detail and scale it could require more information to describe it than the universe actually contains.

A better model would study a well defined region of the universe such as a galaxy cluster. However, this would still depend upon the level of detail.
Title: Re: How close are we from building a virtual universe?
Post by: evan_au on 21/09/2019 22:37:58
There is a definite tradeoff between level of detail, computer power and memory storage.

If you have a goal of studying the general shape of the universe, it is important to have dark matter and normal matter (which clumps into galaxies). But modelling individual stars is not needed.

If you are studying the shape of the galaxy, you don't need to model the lifecycle of the individual stars.

If you are studying the orbits of the planets around the Sun, you don't need to model whether or not Earth hosts life.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 22/09/2019 04:12:47
Building a virtual universe is just an instrumental goal, meant to increase the chance of achieving the terminal goal by improving the effectiveness and efficiency of our actions. I discussed these goals in another thread about universal morality. Here I want to discuss the technical issues in more depth.
Efforts toward virtualization of objective reality have already started, but currently they are mostly partial, either by location or by function. Some examples are Google Maps, ERP software, online banking, e-commerce, Wikipedia, cryptocurrency, CAD software, SCADA, and social media.
Their lack of integration may lead to data duplication when different systems represent the same object from different points of view. When they are not updated simultaneously, they will produce inconsistencies, which may lead to incorrect decision-making.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 22/09/2019 04:23:45
The level of detail can vary, depending on the significance of the object. In Google Earth, big cities might be zoomed to less than one meter per pixel, while deserts or oceans have much coarser detail.
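Here is a minimal sketch, in Python, of how that kind of variable level of detail can be implemented: a quadtree that subdivides a map tile only where a significance score is high enough. The significance() function is an invented placeholder for whatever a real map service would use (population density, query frequency, and so on).
Code:
# Level-of-detail sketch: subdivide a tile only where the region is
# "significant" enough. significance() is a made-up placeholder score.

def significance(x, y, size):
    """Toy score with a single hotspot (a "big city") near (70, 30)."""
    cx, cy = x + size / 2, y + size / 2
    return 1.0 / (1.0 + abs(cx - 70) + abs(cy - 30))

def refine(x, y, size, threshold, min_size=1):
    """Return (x, y, size) tiles; split tiles that score above threshold."""
    if size <= min_size or significance(x, y, size) < threshold:
        return [(x, y, size)]              # coarse tile is good enough
    half = size / 2
    tiles = []
    for dx in (0, half):
        for dy in (0, half):
            tiles += refine(x + dx, y + dy, half, threshold, min_size)
    return tiles

tiles = refine(0, 0, 128, threshold=0.02)
print(len(tiles), "tiles; finest size:", min(t[2] for t in tiles))

The result is many small tiles around the hotspot and a few large tiles everywhere else, which is exactly the storage trade-off being described.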
Title: Re: How close are we from building a virtual universe?
Post by: jeffreyH on 22/09/2019 13:56:17
Quote
Building a virtual universe is just an instrumental goal, meant to increase the chance of achieving the terminal goal by improving the effectiveness and efficiency of our actions. I discussed these goals in another thread about universal morality. Here I want to discuss the technical issues in more depth.
Efforts toward virtualization of objective reality have already started, but currently they are mostly partial, either by location or by function. Some examples are Google Maps, ERP software, online banking, e-commerce, Wikipedia, cryptocurrency, CAD software, and SCADA.
Their lack of integration may lead to data duplication when different systems represent the same object from different points of view. When they are not updated simultaneously, they will produce inconsistencies, which may lead to incorrect decision-making.

You are talking about disparate systems. They are also human-centric, not universe-centric.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 23/09/2019 04:34:54
Quote
Building a virtual universe is just an instrumental goal, meant to increase the chance of achieving the terminal goal by improving the effectiveness and efficiency of our actions. I discussed these goals in another thread about universal morality. Here I want to discuss the technical issues in more depth.
Efforts toward virtualization of objective reality have already started, but currently they are mostly partial, either by location or by function. Some examples are Google Maps, ERP software, online banking, e-commerce, Wikipedia, cryptocurrency, CAD software, and SCADA.
Their lack of integration may lead to data duplication when different systems represent the same object from different points of view. When they are not updated simultaneously, they will produce inconsistencies, which may lead to incorrect decision-making.

You are talking about disparate systems. They are also human-centric, not universe-centric.
They are disparate now, but there are already efforts to integrate them. Some ERP systems have been connected to plant information management systems, which in turn can be connected to SCADA, DCS, PLC, and even smart field devices such as transmitters, control valve positioners, and variable speed drives.
What we need is a common platform to store that information in the same or a compatible format, so any update in one subsystem is automatically propagated to related subsystems to guarantee data integrity. The common platform must also take care of user accountability and data accessibility.
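To illustrate what "automatically propagated to related subsystems" could look like, here is a minimal publish/subscribe sketch in Python. The tag name and the subscribing subsystems are invented for the example; a real platform would add persistence, access control, and conflict handling.
Code:
# Sketch of a common platform: subsystems subscribe to a tag, and any
# write is pushed to all subscribers so no copy of the data goes stale.
# An audit log records who changed what (user accountability).
from collections import defaultdict

class CommonPlatform:
    def __init__(self):
        self.values = {}                       # tag -> latest value
        self.subscribers = defaultdict(list)   # tag -> callbacks
        self.audit_log = []                    # (user, tag, value) records

    def subscribe(self, tag, callback):
        self.subscribers[tag].append(callback)

    def update(self, tag, value, user):
        self.values[tag] = value
        self.audit_log.append((user, tag, value))
        for callback in self.subscribers[tag]:  # propagate to subsystems
            callback(tag, value)

platform = CommonPlatform()
platform.subscribe("FT-101.flow", lambda t, v: print("ERP sees", t, "=", v))
platform.subscribe("FT-101.flow", lambda t, v: print("SCADA sees", t, "=", v))
platform.update("FT-101.flow", 42.7, user="field_device")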
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 24/09/2019 11:20:57
Building a virtualization of objective reality at high precision takes a lot of resources in the form of data storage and communication bandwidth. Hence the system needs to maximize information density.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 10/10/2019 14:20:30
My basic idea for building a virtual universe is to represent physical objects as nodes, which are then organized in a hierarchical structure. It is like the Unix principle where everything is a file; here, everything is a node.
To address the is-ought problem, another hierarchical structure is created to represent desired/designed conditions.
A relationship table is created to show the assignment of physical objects to designed objects. It also stores additional relationship types between them if necessary. Further relationship tables are added to show relationships among nodes outside the main hierarchical structures.
Another hierarchical structure is created to represent activities/events, which are basically any changes to nodes in the hierarchical structures of physical and desired objects. The activity nodes have timestamps for start and finish.
I have built a prototype of this system based on a DCS configuration database, which was then expanded to accommodate other things beyond I/O assignments, the physical network, and control strategies.
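Since the prototype itself isn't shown here, the following Python sketch captures the data model as described: everything is a node, nodes form hierarchies, a relationship table links the physical hierarchy to the designed one, and activity records carry start/finish timestamps. All names are illustrative.
Code:
# "Everything is a node": two hierarchies (physical and designed),
# a relationship table assigning one to the other, and timestamped
# activity records for any change to either hierarchy.
from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    parent: "Node" = None
    children: list = field(default_factory=list)

    def add(self, child):
        child.parent = self
        self.children.append(child)
        return child

physical = Node("plant")                    # as-built hierarchy
pump = physical.add(Node("pump_P101"))

designed = Node("design")                   # desired/designed hierarchy
duty_pump = designed.add(Node("duty_pump_A"))

# Relationship table: physical node -> designed node, with a type.
assignments = [(pump, duty_pump, "fulfils")]

# Activity nodes: changes to the hierarchies, with start/finish times.
activities = [{"action": "install", "target": pump.name,
               "start": "2019-10-01T08:00", "finish": "2019-10-01T17:00"}]

for phys, des, relation in assignments:
    print(f"{phys.name} --{relation}--> {des.name}")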
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 23/01/2020 07:10:31
The universe as we know it is a dynamic system, which means it changes in time. So, for a virtual universe to be useful, it also needs to be a dynamic system. Static systems, such as paper maps or ancient cave paintings, can only have limited usage for narrow purposes.
Title: Re: How close are we from building a virtual universe?
Post by: Bored chemist on 23/01/2020 07:16:04
There is a virtual universe in your head.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 23/01/2020 07:21:49
Quote
Building a virtualization of objective reality at high precision takes a lot of resources in the form of data storage and communication bandwidth. Hence the system needs to maximize information density.
Here is an interesting excerpt from Ray Kurzweil's book "The Singularity Is Near" regarding order and complexity, which are closely related to information density.
Quote
Not surprisingly, the concept of complexity is complex. One concept of complexity is the minimum amount of
information required to represent a process. Let's say you have a design for a system (for example, a computer
program or a computer-assisted design file for a computer), which can be described by a data file containing one
million bits. We could say your design has a complexity of one million bits. But suppose we notice that the one million
bits actually consist of a pattern of one thousand bits that is repeated one thousand times. We could note the
repetitions, remove the repeated patterns, and express the entire design in just over one thousand bits, thereby reducing
the size of the file by a factor of about one thousand.
The most popular data-compression techniques use similar methods of finding redundancy within information.3
But after you've compressed a data file in this way, can you be absolutely certain that there are no other rules or
methods that might be discovered that would enable you to express the file in even more compact terms? For example,
suppose my file was simply "pi" (3.1415...) expressed to one million bits of precision. Most data-compression
programs would fail to recognize this sequence and would not compress the million bits at all, since the bits in a binary
expression of pi are effectively random and thus have no repeated pattern according to all tests of randomness.
But if we can determine that the file (or a portion of the file) in fact represents pi, we can easily express it (or that
portion of it) very compactly as "pi to one million bits of accuracy." Since we can never be sure that we have not
overlooked some even more compact representation of an information sequence, any amount of compression sets only
an upper bound for the complexity of the information. Murray Gell-Mann provides one definition of complexity along
these lines. He defines the "algorithmic information content" (AIC) of a set of information as "the length of the shortest
program that will cause a standard universal computer to print out the string of bits and then halt."4
However, Gell-Mann's concept is not fully adequate. If we have a file with random information, it cannot be
compressed. That observation is, in fact, a key criterion for determining if a sequence of numbers is truly random.
However, if any random sequence will do for a particular design, then this information can be characterized by a
simple instruction, such as "put random sequence of numbers here." So the random sequence, whether it's ten bits or
one billion bits, does not represent a significant amount of complexity, because it is characterized by a simple
instruction. This is the difference between a random sequence and an unpredictable sequence of information that has
purpose.
To gain some further insight into the nature of complexity, consider the complexity of a rock. If we were to
characterize all of the properties (precise location, angular momentum, spin, velocity, and so on) of every atom in the
rock, we would have a vast amount of information. A one-kilogram (2.2-pound) rock has 10^25 atoms which, as I will
discuss in the next chapter, can hold up to 10^27 bits of information. That's one hundred million billion times more
information than the genetic code of a human (even without compressing the genetic code).5 But for most common
purposes, the bulk of this information is largely random and of little consequence. So we can characterize the rock for
most purposes with far less information just by specifying its shape and the type of material of which it is made. Thus,
it is reasonable to consider the complexity of an ordinary rock to be far less than that of a human even though the rock
theoretically contains vast amounts of information.6
One concept of complexity is the minimum amount of meaningful, non-random, but unpredictable information
needed to characterize a system or process.
In Gell-Mann's concept, the AIC of a million-bit random string would be about a million bits long. So I am adding
to Gell-Mann's AIC concept the idea of replacing each random string with a simple instruction to "put random bits
here."
However, even this is not sufficient. Another issue is raised by strings of arbitrary data, such as names and phone
numbers in a phone book, or periodic measurements of radiation levels or temperature. Such data is not random, and
data-compression methods will only succeed in reducing it to a small degree. Yet it does not represent complexity as
that term is generally understood. It is just data. So we need another simple instruction to "put arbitrary data sequence
here."
To summarize my proposed measure of the complexity of a set of information, we first consider its AIC as Gell-Mann
has defined it. We then replace each random string with a simple instruction to insert a random string. We then
do the same for arbitrary data strings. Now we have a measure of complexity that reasonably matches our intuition.
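Kurzweil's point that compression only sets an upper bound is easy to demonstrate with Python's standard zlib: the compressor finds a repeated pattern at once, but it can never certify that no shorter description (such as "pi to one million bits") exists.
Code:
# Compressed size is only an UPPER bound on algorithmic information
# content: zlib collapses a repeated pattern, but leaves "random-looking"
# data untouched even when a shorter description might exist.
import os, zlib

pattern = os.urandom(125)           # a 1000-bit pattern
repeated = pattern * 1000           # the pattern repeated 1000 times
random_data = os.urandom(125_000)   # same length, no repetition

print("repeated:", len(repeated), "->", len(zlib.compress(repeated, 9)))
print("random:  ", len(random_data), "->", len(zlib.compress(random_data, 9)))
# Typical output: the repeated file shrinks by roughly a factor of 1000,
# the random one barely at all -- yet if those bytes happened to encode
# pi, a far shorter description would exist that zlib cannot discover.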
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 23/01/2020 07:25:54
Quote
There is a virtual universe in your head.
Indeed, but it only covers a small portion of even the currently observable universe. A lot of information that I once knew has already been lost. In order to be useful for predicting events in the far future, we need a much larger and more complex system.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 24/01/2020 08:22:35
Regarding the original question, it turns out that Ray Kurzweil has already predicted the answer, which is around the middle of this century.

Quote
Kurzweil claims that technological progress follows a pattern of exponential growth, following what he calls the "law of accelerating returns". Whenever technology approaches a barrier, Kurzweil writes, new technologies will surmount it. He predicts paradigm shifts will become increasingly common, leading to "technological change so rapid and profound it represents a rupture in the fabric of human history".[39] Kurzweil believes that the singularity will occur by approximately 2045.[40] His predictions differ from Vinge's in that he predicts a gradual ascent to the singularity, rather than Vinge's rapidly self-improving superhuman intelligence.
https://en.wikipedia.org/wiki/Technological_singularity#Accelerating_change
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 03/02/2020 11:02:21
Objective reality contains a lot of objects with complex relationships among them. Hence, to build a virtual universe, we must use a method capable of storing data that represents this complex system. The obvious choice is graphs: mathematical structures used to model pairwise relations between objects. A graph in this context is made up of vertices (also called nodes or points) which are connected by edges (also called links or lines).

https://en.wikipedia.org/wiki/Graph_theory
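As a minimal concrete form of this, here is an adjacency-map sketch in Python, with objects as vertices and labelled relationships as edges (the example entities are arbitrary):
Code:
# Objects as vertices, typed relationships as labelled edges.
from collections import defaultdict

graph = defaultdict(list)   # vertex -> [(relation, other_vertex), ...]

def relate(a, relation, b):
    graph[a].append((relation, b))

relate("Moon", "orbits", "Earth")
relate("Earth", "orbits", "Sun")
relate("pump_P101", "part_of", "cooling_loop")

for vertex, edges in graph.items():
    for relation, other in edges:
        print(f"{vertex} --{relation}--> {other}")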
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 28/02/2020 10:02:07
https://www.technologyreview.com/s/615189/what-ai-still-cant-do/
Quote
Artificial intelligence won’t be very smart if computers don’t grasp cause and effect. That’s something even humans have trouble with.

In less than a decade, computers have become extremely good at diagnosing diseases, translating languages, and transcribing speech. They can outplay humans at complicated strategy games, create photorealistic images, and suggest useful replies to your emails.
Yet despite these impressive achievements, artificial intelligence has glaring weaknesses.

Machine-learning systems can be duped or confounded by situations they haven’t seen before. A self-driving car gets flummoxed by a scenario that a human driver could handle easily. An AI system laboriously trained to carry out one task (identifying cats, say) has to be taught all over again to do something else (identifying dogs). In the process, it’s liable to lose some of the expertise it had in the original task. Computer scientists call this problem “catastrophic forgetting.”

These shortcomings have something in common: they exist because AI systems don’t understand causation. They see that some events are associated with other events, but they don’t ascertain which things directly make other things happen. It’s as if you knew that the presence of clouds made rain likelier, but you didn’t know clouds caused rain.
Quote
AI can’t be truly intelligent until it has a rich understanding of cause and effect, which would enable the introspection that is at the core of cognition.
Judea Pearl
A virtual universe can map commonly known cause-and-effect relationships to be used as a library by AI agents, which would save a lot of training time every time a new AI agent is assigned.
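One way to picture such a library: a directed graph of cause-to-effect links that any newly deployed agent can query instead of rediscovering them from data. The entries and the query function below are illustrative only.
Code:
# Sketch of a shared cause-and-effect library for AI agents: causal
# links stored once, queried by any new agent instead of re-learned.
causal_links = {
    "clouds": ["rain"],          # causal, not merely correlated
    "rain": ["wet_ground"],
    "wet_ground": ["slippery_road"],
}

def downstream_effects(cause, seen=None):
    """All effects reachable from a cause along the causal arrows."""
    seen = set() if seen is None else seen
    for effect in causal_links.get(cause, []):
        if effect not in seen:
            seen.add(effect)
            downstream_effects(effect, seen)
    return seen

print(downstream_effects("clouds"))  # {'rain', 'wet_ground', 'slippery_road'}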
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 09/03/2020 07:02:22
To achieve generality, an AI is required to adapt to a wide range of situations. It would be better to have a modular structure for frequently used basic functions, similar to the configuration of naturally occurring brains. It must have some flexibility over its own hyperparameters, which might require changes for executing different tasks.
To maintain its own integrity and fight off data corruption or cyber attacks, the AI needs to spend some of its data storage and processing capacity representing its own structure. This will create some sort of self-awareness, which is a step toward artificial consciousness.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 14/04/2020 09:54:24
The article below has reminded me once again of the importance of having a universal modelling system/platform.
Quote
COBOL, a 60-year-old computer language, is in the COVID-19 spotlight
As state governments seek to fix overwhelmed unemployment benefit systems, they need programmers skilled in a language that was passé by the early 1980s.

Some states have found themselves in need of people who know a 60-year-old programming language called COBOL to retrofit the antiquated government systems now struggling to process the deluge of unemployment claims brought by the coronavirus crisis.

The states of Kansas, New Jersey, and Connecticut all experienced technical meltdowns after a stunning 6.6 million Americans filed for unemployment benefits last week.

They might not have an easy time finding the programmers they need. There just aren’t that many people around these days who know COBOL, or Common Business-Oriented Language. Most universities stopped teaching the language back in the 1980s. COBOL is considered a relic by younger coders.

“There’s really no good reason to learn COBOL today, and there was really no good reason to learn it 20 years ago,” says UCLA computer science professor Peter Reiher. “Most students today wouldn’t have ever even heard of COBOL.”

Meanwhile, because many banks, large companies, and government agencies still use the language in their legacy systems, there’s plenty of demand for COBOL programmers. A search for “COBOL Developer” returned 568 jobs on Indeed.com. COBOL developers make anywhere from $40 to more than $100 per hour.

Kansas governor Laura Kelley said the Kansas Department of Labor was in the process of migrating systems from COBOL to a newer language, but that the effort was postponed by the virus. New Jersey governor Phil Murphy wondered why such an old language was being used on vital state government systems, and classed it with the many weaknesses in government systems the virus has revealed.

The truth is, organizations often hesitate to change those old systems because they still work, and migrating to new systems is expensive. Massive upgrades also involve writing new code, which may contain bugs, Reiher says. In the worst-case scenario, bugs might cause the loss of customer financial data being moved from the old system to the new.
IT STILL WORKS (MOSTLY)
COBOL, though ancient, is still considered stable and reliable—at least under normal conditions.

The current glitches with state unemployment systems are "probably not a specific flaw in the COBOL language or in the underlying implementation," Reiher says. The problem is more likely that some states are asking their computer systems to work with data on a far higher scale, he said, making the systems do things they've never been asked to do.

COBOL was developed in the early 1960s by computer scientists from universities, mainframe manufacturers, the defense and banking industries, and government. Based on ideas developed by programming pioneer Grace Hopper, it was driven by the need for a language that could run on a variety of different kinds of mainframes.

“It was developed to do specific kinds of things like inventory and payroll and accounts receivable,” Reiher told me. “It was widely used in 1960s by a lot of banks and government agencies when they first started automating their systems.”

Here in the 21st century, COBOL is still quietly doing those kinds of things. Millions of lines of COBOL code still run on mainframes used in banks and a number of government agencies, including the Department of Veterans Affairs, Department of Justice, and Social Security Administration. A 2017 Reuters report said 43% of banking systems still use COBOL.

But the move to newer languages such as Java, C, and Python is making its way through industries of all sorts, and will eventually be used in new systems used by banks and government. One key reason for the migration is that mobile platforms use newer languages, and they rely on tight integration with underlying systems to work the way users expect.

The coronavirus will be a catalyst for a lot of changes in the coming years, some good, some bad. The migration away from the programming languages of another era may be one of the good ones.

https://www.fastcompany.com/90488862/what-is-cobol

My previous job as a system integrator gave me first-hand experience of this issue. Most of the projects I handled were migrations from an old/obsolete system to a newer one (mostly DCS). The most obvious advantage of these projects is that we had a system that was still working. The challenge was translating the source code of the old system into the new one. When it couldn't be translated with one-to-one correspondence, we needed to use a process control narrative as an intermediary. Often we couldn't get access to the source code at all, due to the age of the system, missing documentation (hard copies of ladder diagrams, function block diagrams, sequential function charts, proprietary scripts), or corrupted floppy disks. So we had to rely on additional information from the process operators and supervisors about how the system was supposed to work.
On the other hand, in brand-new systems there is no existing source code, so we have to work from the control narratives provided by the process engineers. There is no guarantee that the system will work as intended. Often we had to make tweaks, adjustments, and even major modifications during project commissioning.
If only we had a universal modelling system/platform, we could save a lot of the time and effort needed to finish these projects. System migrations could then be done automatically.
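To sketch what such a universal modelling platform might mean in this context: each control loop is captured once in a vendor-neutral form, and a migration tool emits it as configuration for whichever target system is needed. The format, field names, and the toy generator below are hypothetical.
Code:
# Hypothetical vendor-neutral description of a control loop; a real
# migration tool would emit DCS-specific files from records like this.
loop = {
    "tag": "FIC-101",
    "type": "PID",
    "pv": "FT-101.flow",          # process variable
    "out": "FV-101.position",     # manipulated output
    "tuning": {"kp": 1.2, "ti_s": 30.0, "td_s": 0.0},
    "narrative": "Maintain feed flow at setpoint; fail valve closed.",
}

def emit_generic(loop):
    """Toy code generator for one target format."""
    t = loop["tuning"]
    return (f"PID {loop['tag']}: PV={loop['pv']} OUT={loop['out']} "
            f"Kp={t['kp']} Ti={t['ti_s']}s Td={t['td_s']}s")

print(emit_generic(loop))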
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 28/04/2020 05:29:32
Progress in building better AI, and eventually AGI, will bring us closer to the realization of Laplace's demon, which has already been predicted as the technological singularity.
Quote
The better we can predict, the better we can prevent and pre-empt. As you can see, with neural networks, we’re moving towards a world of fewer surprises. Not zero surprises, just marginally fewer. We’re also moving toward a world of smarter agents that combine neural networks with other algorithms like reinforcement learning to attain goals.
https://pathmind.com/wiki/neural-network
Quote
In some circles, neural networks are thought of as “brute force” AI, because they start with a blank slate and hammer their way through to an accurate model. They are effective, but to some eyes inefficient in their approach to modeling, which can’t make assumptions about functional dependencies between output and input.

That said, gradient descent is not recombining every weight with every other to find the best match – its method of pathfinding shrinks the relevant weight space, and therefore the number of updates and required computation, by many orders of magnitude. Moreover, algorithms such as Hinton’s capsule networks require far fewer instances of data to converge on an accurate model; that is, present research has the potential to resolve the brute force nature of deep learning.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 28/07/2020 05:52:53
This Is What Tesla's Autopilot Sees On The Road.

Essentially, it builds a virtual environment in its computer based on input data from visual cameras and radar. With more Autopilot cars on the road, a lot of the data being processed becomes redundant. Sharing those data could be the next step in increasing the efficiency of the whole system. It will require an agreed protocol, data structure, and algorithm to interpret them properly. This brings us one step closer to a virtual universe.
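As a toy illustration of the "agreed protocol and data structure" part, a shared observation record might look like the JSON below. Every field name here is invented for the example; it is not Tesla's actual format.
Code:
# Hypothetical shared-perception message: one car publishes what it
# saw so that other cars need not re-derive the same world state.
import json, time

observation = {
    "schema": "shared-perception/0.1",   # invented protocol version
    "sensor": "camera+radar",
    "timestamp": time.time(),
    "position": {"lat": -6.2088, "lon": 106.8456},
    "objects": [
        {"kind": "vehicle", "range_m": 34.5, "bearing_deg": 12.0},
        {"kind": "lane_line", "offset_m": 1.6},
    ],
}

payload = json.dumps(observation)        # what would go on the wire
print(len(payload), "bytes:", payload[:60], "...")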
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 05/08/2020 11:10:14
We increasingly rely on artificial intelligence to make decisions. But we must be aware of the risks it poses, like those described in the article below.
https://thegradient.pub/shortcuts-neural-networks-love-to-cheat/
Quote
Shortcuts are decision rules that perform well on standard benchmarks but fail to transfer to more challenging testing conditions. Shortcut opportunities come in many flavors and are ubiquitous across datasets and application domains. A few examples are visualized here:
(https://thegradient.pub/content/images/2020/07/image5-5.png)
At a principal level, shortcut learning is not a novel phenomenon: variants are known under different terms such as “learning under covariate shift”, “anti-causal learning”, “dataset bias”, the “tank legend” and the “Clever Hans effect”. We here discuss how shortcut learning unifies many of deep learning’s problems and what we can do to better understand and mitigate shortcut learning.

What is a shortcut?

In machine learning, the solutions that a model can learn are constrained by data, model architecture, optimizer and objective function. However, these constraints often don’t just allow for one single solution: there are typically many different ways to solve a problem. Shortcuts are solutions that perform well on a typical test set but fail under different circumstances, revealing a mismatch with our intentions.
Quote
Shortcut learning beyond deep learning

Often such failures serve as examples for why machine learning algorithms are untrustworthy. However, biological learners suffer from very similar failure modes as well. In an experiment in a lab at the University of Oxford, researchers observed that rats learned to navigate a complex maze apparently based on subtle colour differences - very surprising given that the rat retina has only rudimentary machinery to support at best somewhat crude colour vision. Intensive investigation into this curious finding revealed that the rats had tricked the researchers: They did not use their visual system at all in the experiment and instead simply discriminated the colours by the odour of the colour paint used on the walls of the maze. Once smell was controlled for, the remarkable colour discrimination ability disappeared.

Animals often trick experimenters by solving an experimental paradigm (i.e., dataset) in an unintended way without using the underlying ability one is actually interested in. This highlights how incredibly difficult it can be for humans to imagine solving a tough challenge in any other way than the human way: Surely, at Marr’s implementational level there may be differences between rat and human colour discrimination. But at the algorithmic level there is often a tacit assumption that human-like performance implies human-like strategy (or algorithm). This “same strategy assumption” is paralleled by deep learning: even if DNN units are different from biological neurons, if DNNs successfully recognise objects it seems natural to assume that they are using object shape like humans do. As a consequence, we need to distinguish between performance on a dataset and acquiring an ability, and exercise great care before attributing high-level abilities like “object recognition” or “language understanding” to machines, since there is often a much simpler explanation:

Never attribute to high-level abilities that which can be adequately explained by shortcut learning.
Quote
The consequences of this behaviour are striking failures in generalization. Have a look at the figure below. On the left side there are a few directions in which humans would expect a model to generalize. A five is a five whether it is hand-drawn and black and white or a house number photographed in color. Similarly slight distortions or changes in pose, texture or background don’t influence our prediction about the main object in the image. In contrast a DNN can easily be fooled by all of them. Interestingly this does not mean that DNNs can’t generalize at all: In fact, they generalize perfectly well albeit in directions that hardly make sense to humans. The right side of the figure below shows some examples that range from the somewhat comprehensible - scrambling the image to keep only its texture - to the completely incomprehensible.
(https://thegradient.pub/content/images/2020/07/image1.png)

Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 25/08/2020 10:42:06
This patent by Tesla is a clue that, in the future, the virtual universe will be built mostly autonomously by AI.
https://www.tesmanian.com/blogs/tesmanian-blog/tesla-published-a-patent-generating-ground-truth-for-machine-learning-from-time-series-elements
Quote
Deep learning systems used for applications such as autonomous driving are developed by training a machine learning model. Typically, the performance of the deep learning system is limited at least in part by the quality of the training set used to train the model.

In many instances, significant resources are invested in collecting, curating, and annotating the training data. Traditionally, much of the effort to curate a training data set is done manually by reviewing potential training data and properly labeling the features associated with the data.

The effort required to create a training set with accurate labels can be significant and is often tedious. Moreover, it is often difficult to collect and accurately label data that a machine learning model needs improvement on. Therefore, there exists a need to improve the process for generating training data with accurate labeled features.

Tesla published patent 'Generating ground truth for machine learning from time series elements'

Patent filing date: February 1, 2019
Patent Publication Date: August 6, 2020

(https://cdn.shopify.com/s/files/1/0173/8204/7844/files/1_660faf20-c36a-4f67-8e63-11c5d4078119_1024x1024.jpg?v=1596750740)

The patent disclosed a machine learning training technique for generating highly accurate machine learning results. Using data captured by sensors on a vehicle a training data set is created. The sensor data may capture vehicle lane lines, vehicle lanes, other vehicle traffic, obstacles, traffic control signs, etc.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 25/08/2020 10:59:59
Here is a research article about the information catastrophe.
https://aip.scitation.org/doi/10.1063/5.0019941
Quote
Currently, we produce ∼10^21 digital bits of information annually on Earth. Assuming a 20% annual growth rate, we estimate that after ∼350 years from now, the number of bits produced will exceed the number of all atoms on Earth, ∼10^50. After ∼300 years, the power required to sustain this digital production will exceed 18.5 × 10^15 W, i.e., the total planetary power consumption today, and after ∼500 years from now, the digital content will account for more than half Earth’s mass, according to the mass-energy–information equivalence principle. Besides the existing global challenges such as climate, environment, population, food, health, energy, and security, our estimates point to another singular event for our planet, called information catastrophe.

(https://aip.scitation.org/na101/home/literatum/publisher/aip/journals/content/adv/2020/adv.2020.10.issue-8/5.0019941/20200810/images/small/5.0019941.figures.online.f3.gif)
Quote
In conclusion, we established that the incredible growth of digital information production would reach a singularity point when there are more digital bits created than atoms on the planet. At the same time, the digital information production alone will consume most of the planetary power capacity, leading to ethical and environmental concerns already recognized by Floridi who introduced the concept of “infosphere” and considered challenges posed by our digital information society.27 These issues are valid, regardless of the future developments in data storage technologies. In terms of digital data, the mass–energy–information equivalence principle formulated in 2019 has not yet been verified experimentally, but assuming this is correct, then in not the very distant future, most of the planet’s mass will be made up of bits of information. Applying the law of conservation in conjunction with the mass–energy–information equivalence principle, it means that the mass of the planet is unchanged over time. However, our technological progress inverts radically the distribution of the Earth’s matter from predominantly ordinary matter to the fifth form of digital information matter. In this context, assuming the planetary power limitations are solved, one could envisage a future world mostly computer simulated and dominated by digital bits and computer code.
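The abstract's headline figure is easy to check: starting from ∼10^21 bits per year and growing 20% annually, the time for annual production to reach ∼10^50 bits is log(10^29)/log(1.2). A one-line check in Python:
Code:
# Reproducing the paper's ballpark: years until annual bit production,
# growing 20% per year from ~1e21, exceeds ~1e50 (atoms on Earth).
import math

years = math.log(1e50 / 1e21) / math.log(1.2)
print(f"{years:.0f} years")   # ~366 by this crude check, same order as ~350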
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 27/08/2020 10:48:14
In this article we can see that data compression and decompression play a central role in learning and modelling, whether they are done by machines or by biological entities.
https://www.zdnet.com/google-amp/article/what-is-gpt-3-everything-business-needs-to-know-about-openais-breakthrough-ai-language-program/
Quote
When the neural network is being developed, called the training phase, GPT-3 is fed millions and millions of samples of text and it converts words into what are called vectors, numeric representations. That is a form of data compression. The program then tries to unpack this compressed text back into a valid sentence. The task of compressing and decompressing develops the program's accuracy in calculating the conditional probability of words.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 14/09/2020 08:49:38
https://www.businessinsider.com/developer-sharif-shameem-openai-gpt-3-debuild-2020-9
Quote
In July, Debuild cofounder and CEO Sharif Shameem tweeted about a project he created that allowed him to build a website simply by describing its design. 

In the text box, he typed, "the google logo, a search box, and 2 lightgrey buttons that say 'Search Google' and 'I'm Feeling Lucky.'" The program then generated a virtual copy of the Google homepage.


This program uses GPT-3, a "natural language generation" tool from research lab OpenAI, which was cofounded by Elon Musk. GPT-3 was trained on massive swathes of data and can spit out results that mimic human writing. Developers have used it for creative writing, designing websites, writing business memos, and more. Now, Shameem is using GPT-3 for Debuild, a no-code tool for building web apps just by describing what they look like and how they work.

With this program, the user just needs to type in and describe what the application will look like and how it will work, and the tool will create a website based on those descriptions.

https://syncedreview.com/2020/09/10/openai-gpt-f-delivers-sota-performance-in-automated-mathematical-theorem-proving/
Quote
San Francisco-based AI research laboratory OpenAI has added another member to its popular GPT (Generative Pre-trained Transformer) family. In a new paper, OpenAI researchers introduce GPT-f, an automated prover and proof assistant for the Metamath formalization language.

While artificial neural networks have made considerable advances in computer vision, natural language processing, robotics and so on, OpenAI believes they also have potential in the relatively underexplored area of reasoning tasks. The new research explores this potential by applying a transformer language model to automated theorem proving.

It seems like in the future we will become less dependent on biological computational resources (i.e. brain).
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 05/10/2020 02:49:58
With a virtual universe, we will get fewer surprises, so we can make better plans to achieve our goals. It can help us improve our survival chances, which is a prerequisite (i.e., an instrumental goal) for achieving the universal terminal goal.
Here is one of the latest advances that brings us closer to that goal.
https://scitechdaily.com/esas-%CF%86-week-digital-twin-earth-quantum-computing-and-ai-take-center-stage/
Quote
The third edition of the Φ-week event, which is entirely virtual, focuses on how Earth observation can contribute to the concept of Digital Twin Earth – a dynamic, digital replica of our planet which accurately mimics Earth’s behavior. Constantly fed with Earth observation data, combined with in situ measurements and artificial intelligence, the Digital Twin Earth provides an accurate representation of the past, present, and future changes of our world.

Digital Twin Earth will help visualize, monitor, and forecast natural and human activity on the planet. The model will be able to monitor the health of the planet, perform simulations of Earth’s interconnected system with human behavior, and support the field of sustainable development, therefore, reinforcing Europe’s efforts for a better environment in order to respond to the urgent challenges and targets addressed by the Green Deal.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 12/10/2020 04:21:10
As I mentioned earlier in another thread, cost saving is a universal instrumental goal. It also applies in AI research.
https://syncedreview.com/2020/10/02/google-cambridge-deepmind-alan-turing-institutes-performer-transformer-slashes-compute-costs/
Quote
It’s no coincidence that Transformer neural network architecture is gaining popularity across so many machine learning research fields. Best known for natural language processing (NLP) tasks, Transformers not only enabled OpenAI’s 175 billion parameter language model GPT-3 to deliver SOTA performance, the power- and potential-packed architecture also helped DeepMind’s AlphaStar bot defeat professional StarCraft players. Researchers have now introduced a way to make Transformers more compute-efficient, scalable and accessible.

While previous learning approaches such as RNNs suffered from vanishing gradient problems, Transformers’ game-changing self-attention mechanism eliminated such issues. As explained in the paper introducing Transformers — Attention Is All You Need, the novel architecture is based on a trainable attention mechanism that identifies complex dependencies between input sequence elements.

Transformers however scale quadratically when the number of tokens in an input sequence increases, making their use prohibitively expensive for large numbers of tokens. Even when fed with moderate token inputs, Transformers’ gluttonous appetite for computational resources can be difficult for many researchers to satisfy.

A team from Google, University of Cambridge, DeepMind, and Alan Turing Institute have proposed a new type of Transformer dubbed Performer, based on a Fast Attention Via positive Orthogonal Random features (FAVOR+) backbone mechanism. The team designed Performer to be “capable of provably accurate and practical estimation of regular (softmax) full rank attention, but of only linear space and timely complexity and not relying on any priors such as sparsity or low-rankness.”
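The quadratic-versus-linear point can be sketched numerically: standard attention materialises an L×L matrix, while a kernelised variant exploits associativity, computing φ(Q)(φ(K)ᵀV) and never forming L×L. The feature map below is a crude stand-in for illustration, not the actual FAVOR+ mechanism.
Code:
# Standard attention costs O(L^2) in sequence length L; a kernelised
# form costs O(L*d^2) by reassociating the matrix products. The phi()
# used here is a toy positive feature map, NOT FAVOR+ itself.
import numpy as np

L, d = 1024, 64
rng = np.random.default_rng(0)
Q, K, V = (rng.standard_normal((L, d)) / d**0.5 for _ in range(3))

# Standard softmax attention: builds an L x L matrix.
A = np.exp(Q @ K.T)
out_quadratic = (A / A.sum(axis=1, keepdims=True)) @ V

# Kernelised attention: never materialises L x L.
phi = lambda X: np.exp(X - (X**2).sum(axis=1, keepdims=True) / 2)
numerator = phi(Q) @ (phi(K).T @ V)        # (L,d) @ (d,d)
denominator = phi(Q) @ phi(K).sum(axis=0)  # shape (L,)
out_linear = numerator / denominator[:, None]

print("outputs:", out_quadratic.shape, out_linear.shape)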
Title: Re: How close are we from building a virtual universe?
Post by: mikahawkins on 12/10/2020 05:55:08
Are we trying to visualize something with lifeforms or without lifeforms? I believe we can start off one step at a time: first getting the solar system together, then the galaxies, and so on.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 12/10/2020 07:29:44
Quote
Are we trying to visualize something with lifeforms or without lifeforms? I believe we can start off one step at a time: first getting the solar system together, then the galaxies, and so on.
It's a universal inevitability that, in order to achieve the universal terminal goal, a conscious system will have to build some kind of virtual universe as close as possible to the real/objective reality of the universe, which can be described in terms of accuracy and precision. Due to limited resources, and following the Pareto principle, we must spend more resources on the things that have more impact on the achievement of the universal terminal goal. That's why Google Maps has higher resolution for areas of high interest, such as big cities, than for deserts or oceans.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 12/10/2020 15:03:57
Talking about lifeforms, how do you define them? Would you call Henrietta Lacks's tumor cells alive? What about a coronavirus? A prion? Alexa?
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 16/10/2020 12:42:09
A few decades ago, most process equipment was dumb. It needed periodic maintenance performed by humans to diagnose its functional condition and find abnormalities. So basically its condition was uncertain until it broke down, or until maintenance personnel checked and tested it. Control loops needed periodic fine-tuning to keep them at their best performance, due to physical changes in the field instrumentation and in the process itself.
Now a lot of equipment is getting smart. Smart transmitters and positioners are widely used, and there are also smart variable speed drives and other equipment controllers. They have self-diagnostic features that tell technicians whether or not they are in good condition, and that point out abnormalities so problems can be fixed sooner. These diagnostic data can be monitored continuously from a remote location. Such smart equipment can be considered to have some form of self-awareness.
In a SCADA system, a bot can be deployed to continuously monitor the functionality of each control loop, and thousands of them can run on the same server. This forces us to review the traditional concept of individuality, especially regarding conscious agents.
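A minimal sketch of such a loop-monitoring bot (tag names and thresholds invented): it samples a loop's setpoint and process variable and raises an alarm on sustained deviation. Thousands of these objects can indeed share one server.
Code:
# Toy SCADA watchdog bot: one instance per control loop, alarming when
# the process variable stays out of tolerance for too many samples.
class LoopBot:
    def __init__(self, tag, tolerance, max_bad):
        self.tag, self.tolerance, self.max_bad = tag, tolerance, max_bad
        self.bad_count = 0

    def sample(self, setpoint, pv):
        out_of_tolerance = abs(pv - setpoint) > self.tolerance
        self.bad_count = self.bad_count + 1 if out_of_tolerance else 0
        if self.bad_count >= self.max_bad:
            print(f"ALARM {self.tag}: {self.bad_count} consecutive deviations")

bot = LoopBot("FIC-101", tolerance=2.0, max_bad=3)
for sp, pv in [(50, 50.4), (50, 55.1), (50, 56.0), (50, 57.2)]:
    bot.sample(sp, pv)   # alarms on the third consecutive deviation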
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 16/10/2020 12:51:35
Here is an interesting article covering AGI.
https://www.technologyreview.com/2020/10/15/1010461/artificial-general-intelligence-robots-ai-agi-deepmind-google-openai/

Quote
  The tricky part comes next: yoking multiple abilities together. Deep learning is the most general approach we have, in that one deep-learning algorithm can be used to learn more than one task. AlphaZero used the same algorithm to learn Go, shogi (a chess-like game from Japan), and chess. DeepMind’s Atari57 system used the same algorithm to master every Atari video game. But the AIs can still learn only one thing at a time. Having mastered chess, AlphaZero has to wipe its memory and learn shogi from scratch.

Legg refers to this type of generality as “one-algorithm,” versus the “one-brain” generality humans have. One-algorithm generality is very useful but not as interesting as the one-brain kind, he says: “You and I don’t need to switch brains; we don’t put our chess brains in to play a game of chess.” 

Here are the steps toward development of AGI.
Quote
   Roughly in order of maturity, they are:
Unsupervised or self-supervised learning. Labeling data sets (e.g., tagging all pictures of cats with “cat”) to tell AIs what they’re looking at during training is the key to what’s known as supervised learning. It’s still largely done by hand and is a major bottleneck. AI needs to be able to teach itself without human guidance—e.g., looking at pictures of cats and dogs and learning to tell them apart without help, or spotting anomalies in financial transactions without having previous examples flagged by a human. This, known as unsupervised learning, is now becoming more common.

Transfer learning, including few-shot learning. Most deep-learning models today can be trained to do only one thing at a time. Transfer learning aims to let AIs transfer some parts of their training for one task, such as playing chess, to another, such as playing Go. This is how humans learn.

Common sense and causal inference. It would be easier to transfer training between tasks if an AI had a bedrock of common sense to start from. And a key part of common sense is understanding cause and effect. Giving common sense to AIs is a hot research topic at the moment, with approaches ranging from encoding simple rules into a neural network to constraining the possible predictions that an AI can make. But work is still in its early stages.

Learning optimizers. These are tools that can be used to shape the way AIs learn, guiding them to train more efficiently. Recent work shows that these tools can be trained themselves—in effect, meaning one AI is used to train others. This could be a tiny step toward self-improving AI, an AGI goal.

Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 20/10/2020 06:40:12
Here is a very interesting video about the neural network revolution that I'd like to share.
Quote
Geoffrey Hinton is an Engineering Fellow at Google where he manages the Brain Team Toronto, which is a new part of the Google Brain Team and is located at Google's Toronto office at 111 Richmond Street. Brain Team Toronto does basic research on ways to improve neural network learning techniques. He is also the Chief Scientific Adviser of the new Vector Institute and an Emeritus Professor at the University of Toronto.

Recorded: December 4th, 2017
I see this neural network revolution as a continuation of the neural network evolution that has been going on for hundreds of millions of years, and which produced the brains that kickstarted the revolution.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 02/11/2020 04:11:48
The risk of using GPT irresponsibly is self-confirmation bias, which may keep us from getting optimal results.

https://twitter.com/karpathy/status/1284660899198820352

Quote
Andrej Karpathy
@karpathy
·
Jul 19
By posting GPT generated text we’re polluting the data for its future versions
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 12/11/2020 04:23:51
The dog's behavior is not entirely surprising either, especially if you have some future version of Neuralink implanted in its head, or you are a veterinarian.

Here is the definition of intelligence according to the dictionary.
Quote
  the ability to acquire and apply knowledge and skills.
Usually, it represents problem-solving or information-processing capability, but it doesn't take into account the ability to manipulate the environment, nor self-awareness.
AlphaGo is considered intelligent since it can solve the problem of playing Go better than the human champion. AlphaGo Zero is even more intelligent, since it beat AlphaGo 100:0, even though neither has the ability to physically move a single Go piece.
On the other hand, consciousness takes more factors into account. For example, if you are paralyzed and can't move your arms and legs, you are considered less conscious than in your normal state, even though you can still think clearly.
Traditionally, an agent is considered intelligent if it can solve problems, especially when it does so better than expected. A dog that can fetch you the newspaper is considered intelligent.

https://en.wikipedia.org/wiki/Artificial_intelligence
Quote
Artificial intelligence (AI), is intelligence demonstrated by machines, unlike the natural intelligence displayed by humans and animals. Leading AI textbooks define the field as the study of "intelligent agents": any device that perceives its environment and takes actions that maximize its chance of successfully achieving its goals.[3] Colloquially, the term "artificial intelligence" is often used to describe machines (or computers) that mimic "cognitive" functions that humans associate with the human mind, such as "learning" and "problem solving".[4]

As machines become increasingly capable, tasks considered to require "intelligence" are often removed from the definition of AI, a phenomenon known as the AI effect.[5] A quip in Tesler's Theorem says "AI is whatever hasn't been done yet."[6] For instance, optical character recognition is frequently excluded from things considered to be AI,[7] having become a routine technology.[8] Modern machine capabilities generally classified as AI include successfully understanding human speech,[9] competing at the highest level in strategic game systems (such as chess and Go),[10] autonomously operating cars, intelligent routing in content delivery networks, and military simulations.[11]
Quote
Computer science defines AI research as the study of "intelligent agents": any device that perceives its environment and takes actions that maximize its chance of successfully achieving its goals.[3] A more elaborate definition characterizes AI as "a system's ability to correctly interpret external data, to learn from such data, and to use those learnings to achieve specific goals and tasks through flexible adaptation."[70]

A typical AI analyzes its environment and takes actions that maximize its chance of success.[3] An AI's intended utility function (or goal) can be simple ("1 if the AI wins a game of Go, 0 otherwise") or complex ("Perform actions mathematically similar to ones that succeeded in the past"). Goals can be explicitly defined or induced. If the AI is programmed for "reinforcement learning", goals can be implicitly induced by rewarding some types of behavior or punishing others.[a] Alternatively, an evolutionary system can induce goals by using a "fitness function" to mutate and preferentially replicate high-scoring AI systems, similar to how animals evolved to innately desire certain goals such as finding food.[71] Some AI systems, such as nearest-neighbor, instead reason by analogy; these systems are not generally given goals, except to the degree that goals are implicit in their training data.[72] Such systems can still be benchmarked if the non-goal system is framed as a system whose "goal" is to successfully accomplish its narrow classification task.[73]

https://en.wikipedia.org/wiki/AI_effect
Quote
The AI effect occurs when onlookers discount the behavior of an artificial intelligence program by arguing that it is not real intelligence.[1]

Author Pamela McCorduck writes: "It's part of the history of the field of artificial intelligence that every time somebody figured out how to make a computer do something—play good checkers, solve simple but relatively informal problems—there was a chorus of critics to say, 'that's not thinking'."[2] AI researcher Rodney Brooks complains: "Every time we figure out a piece of it, it stops being magical; we say, 'Oh, that's just a computation.'"[3]
Quote
"The AI effect" tries to redefine AI to mean: AI is anything that has not been done yet
A view taken by some people trying to promulgate the AI effect is: As soon as AI successfully solves a problem, the problem is no longer a part of AI.

Pamela McCorduck calls it an "odd paradox" that "practical AI successes, computational programs that actually achieved intelligent behavior, were soon assimilated into whatever application domain they were found to be useful in, and became silent partners alongside other problem-solving approaches, which left AI researchers to deal only with the "failures", the tough nuts that couldn't yet be cracked."[4]

When IBM's chess playing computer Deep Blue succeeded in defeating Garry Kasparov in 1997, people complained that it had only used "brute force methods" and it wasn't real intelligence.[5] Fred Reed writes:

"A problem that proponents of AI regularly face is this: When we know how a machine does something 'intelligent,' it ceases to be regarded as intelligent. If I beat the world's chess champion, I'd be regarded as highly bright."[6]

Douglas Hofstadter expresses the AI effect concisely by quoting Larry Tesler's Theorem:

"AI is whatever hasn't been done yet."[7]

When problems have not yet been formalised, they can still be characterised by a model of computation that includes human computation. The computational burden of a problem is split between a computer and a human: one part is solved by computer and the other part solved by a human. This formalisation is referred to as human-assisted Turing machine.[8]

AI applications become mainstream
Software and algorithms developed by AI researchers are now integrated into many applications throughout the world, without really being called AI.

Michael Swaine reports "AI advances are not trumpeted as artificial intelligence so much these days, but are often seen as advances in some other field". "AI has become more important as it has become less conspicuous", Patrick Winston says. "These days, it is hard to find a big system that does not work, in part, because of ideas developed or matured in the AI world."[9]

According to Stottler Henke, "The great practical benefits of AI applications and even the existence of AI in many software products go largely unnoticed by many despite the already widespread use of AI techniques in software. This is the AI effect. Many marketing people don't use the term 'artificial intelligence' even when their company's products rely on some AI techniques. Why not?"[10]

Marvin Minsky writes "This paradox resulted from the fact that whenever an AI research project made a useful new discovery, that product usually quickly spun off to form a new scientific or commercial specialty with its own distinctive name. These changes in name led outsiders to ask, Why do we see so little progress in the central field of artificial intelligence?"[11]

Nick Bostrom observes that "A lot of cutting edge AI has filtered into general applications, often without being called AI because once something becomes useful enough and common enough it's not labelled AI anymore."[12]
Quote
Saving a place for humanity at the top of the chain of being
Michael Kearns suggests that "people subconsciously are trying to preserve for themselves some special role in the universe".[14] By discounting artificial intelligence people can continue to feel unique and special. Kearns argues that the change in perception known as the AI effect can be traced to the mystery being removed from the system. Being able to trace the cause of events implies that it's a form of automation rather than intelligence.

A related effect has been noted in the history of animal cognition and in consciousness studies, where every time a capacity formerly thought to be uniquely human is discovered in animals (e.g. the ability to make tools, or to pass the mirror test), the overall importance of that capacity is deprecated.[citation needed]

Herbert A. Simon, when asked about the lack of AI's press coverage at the time, said, "What made AI different was that the very idea of it arouses a real fear and hostility in some human breasts. So you are getting very strong emotional reactions. But that's okay. We'll live with that."[15]




I'd like to delve technically deeper into the problem in this thread.
Quote
Artificial intelligence (AI), is intelligence demonstrated by machines, unlike the natural intelligence displayed by humans and animals. Leading AI textbooks define the field as the study of "intelligent agents": any device that perceives its environment and takes actions that maximize its chance of successfully achieving its goals.[3] Colloquially, the term "artificial intelligence" is often used to describe machines (or computers) that mimic "cognitive" functions that humans associate with the human mind, such as "learning" and "problem solving".[4]

As machines become increasingly capable, tasks considered to require "intelligence" are often removed from the definition of AI, a phenomenon known as the AI effect.[5] A quip in Tesler's Theorem says "AI is whatever hasn't been done yet."[6] For instance, optical character recognition is frequently excluded from things considered to be AI,[7] having become a routine technology.[8] Modern machine capabilities generally classified as AI include successfully understanding human speech,[9] competing at the highest level in strategic game systems (such as chess and Go),[10] autonomously operating cars, intelligent routing in content delivery networks, and military simulations.[11]

Artificial intelligence was founded as an academic discipline in 1956, and in the years since has experienced several waves of optimism,[12][13] followed by disappointment and the loss of funding (known as an "AI winter"),[14][15] followed by new approaches, success and renewed funding.[13][16] After AlphaGo successfully defeated a professional Go player in 2015, artificial intelligence once again attracted widespread global attention.[17] For most of its history, AI research has been divided into sub-fields that often fail to communicate with each other.[18] These sub-fields are based on technical considerations, such as particular goals (e.g. "robotics" or "machine learning"),[19] the use of particular tools ("logic" or artificial neural networks), or deep philosophical differences.[22][23][24] Sub-fields have also been based on social factors (particular institutions or the work of particular researchers).[18]

The traditional problems (or goals) of AI research include reasoning, knowledge representation, planning, learning, natural language processing, perception and the ability to move and manipulate objects.[19] General intelligence is among the field's long-term goals.[25] Approaches include statistical methods, computational intelligence, and traditional symbolic AI. Many tools are used in AI, including versions of search and mathematical optimization, artificial neural networks, and methods based on statistics, probability and economics. The AI field draws upon computer science, information engineering, mathematics, psychology, linguistics, philosophy, and many other fields.

The field was founded on the assumption that human intelligence "can be so precisely described that a machine can be made to simulate it".[26] This raises philosophical arguments about the mind and the ethics of creating artificial beings endowed with human-like intelligence. These issues have been explored by myth, fiction and philosophy since antiquity.[31] Some people also consider AI to be a danger to humanity if it progresses unabated.[32][33] Others believe that AI, unlike previous technological revolutions, will create a risk of mass unemployment.[34]

In the twenty-first century, AI techniques have experienced a resurgence following concurrent advances in computer power, large amounts of data, and theoretical understanding; and AI techniques have become an essential part of the technology industry, helping to solve many challenging problems in computer science, software engineering and operations research.[35][16]
Quote
Computer science defines AI research as the study of "intelligent agents": any device that perceives its environment and takes actions that maximize its chance of successfully achieving its goals.[3] A more elaborate definition characterizes AI as "a system's ability to correctly interpret external data, to learn from such data, and to use those learnings to achieve specific goals and tasks through flexible adaptation."[71]
Quote
Many AI algorithms are capable of learning from data; they can enhance themselves by learning new heuristics (strategies, or "rules of thumb", that have worked well in the past), or can themselves write other algorithms. Some of the "learners" described below, including Bayesian networks, decision trees, and nearest-neighbor, could theoretically (given infinite data, time, and memory) learn to approximate any function, including which combination of mathematical functions would best describe the world.[citation needed] These learners could therefore derive all possible knowledge, by considering every possible hypothesis and matching them against the data. In practice, it is seldom possible to consider every possibility, because of the phenomenon of "combinatorial explosion", where the time needed to solve a problem grows exponentially. Much of AI research involves figuring out how to identify and avoid considering a broad range of possibilities unlikely to be beneficial.
https://en.wikipedia.org/wiki/Artificial_intelligence
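A quick back-of-envelope calculation (my own, not from the article) shows the combinatorial explosion: even restricted to Boolean functions over n binary inputs, the hypothesis space grows as 2^(2^n).

Code:
# Combinatorial explosion: the number of distinct Boolean functions
# over n binary inputs is 2**(2**n), so exhaustively considering every
# hypothesis becomes infeasible almost immediately.
for n in range(1, 6):
    print(n, 2 ** (2 ** n))
# 1 4
# 2 16
# 3 256
# 4 65536
# 5 4294967296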

Intelligent agents are expected to have the ability to learn from raw data. This means they need tools to pre-process that raw data, filtering out noise and flukes and extracting useful information. When those agents interact with one another, especially when they must compete for finite resources, the ability to filter out misinformation becomes even more important. It requires an algorithm to determine whether some data inputs are believable or not. At this point, artificial intelligence gets closer to natural intelligence: this resembles the critical thinking of conscious beings.
Descartes pointed out that the only self-evident information a conscious agent can get is its own existence. Any other information requires corroborating evidence to support it. So in the end, the reliability of a piece of information will be measured by its ability to help preserve conscious agents.
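As a minimal sketch of such an algorithm (my own toy illustration; real systems are far more sophisticated), an agent could flag inputs that are statistically implausible given the data it has already accepted:

Code:
import statistics

def believable(history, candidate, threshold=3.0):
    """Crude credibility test: reject readings more than `threshold`
    standard deviations away from the mean of previously accepted data."""
    if len(history) < 2:
        return True  # not enough evidence yet to doubt anything
    mean = statistics.fmean(history)
    stdev = statistics.stdev(history)
    if stdev == 0:
        return candidate == mean
    return abs(candidate - mean) / stdev <= threshold

readings = [20.1, 19.8, 20.3, 20.0, 19.9]
print(believable(readings, 20.2))   # True: consistent with prior data
print(believable(readings, 95.0))   # False: likely noise or misinformation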


Quote
In machine learning, a deep belief network (DBN) is a generative graphical model, or alternatively a class of deep neural network, composed of multiple layers of latent variables ("hidden units"), with connections between the layers but not between units within each layer.[1]

When trained on a set of examples without supervision, a DBN can learn to probabilistically reconstruct its inputs. The layers then act as feature detectors.[1] After this learning step, a DBN can be further trained with supervision to perform classification.[2]

DBNs can be viewed as a composition of simple, unsupervised networks such as restricted Boltzmann machines (RBMs)[1] or autoencoders,[3] where each sub-network's hidden layer serves as the visible layer for the next. An RBM is an undirected, generative energy-based model with a "visible" input layer and a hidden layer and connections between but not within layers. This composition leads to a fast, layer-by-layer unsupervised training procedure, where contrastive divergence is applied to each sub-network in turn, starting from the "lowest" pair of layers (the lowest visible layer is a training set).

The observation[2] that DBNs can be trained greedily, one layer at a time, led to one of the first effective deep learning algorithms.[4]:6 Overall, there are many attractive implementations and uses of DBNs in real-life applications and scenarios (e.g., electroencephalography,[5] drug discovery[6][7][8]).
https://en.wikipedia.org/wiki/Deep_belief_network
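Here is a minimal sketch of that greedy, layer-by-layer idea using scikit-learn's BernoulliRBM (which trains with persistent contrastive divergence rather than plain contrastive divergence; the stacking logic is the same in spirit). This is my own toy illustration on synthetic data, not a production DBN:

Code:
import numpy as np
from sklearn.neural_network import BernoulliRBM
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = (rng.random((500, 64)) > 0.5).astype(float)  # toy binary data
y = (X[:, 0] + X[:, 1] > 1).astype(int)          # toy labels

# Greedy unsupervised pretraining: each RBM's hidden layer becomes
# the visible layer for the next, starting from the lowest pair.
rbm1 = BernoulliRBM(n_components=32, learning_rate=0.05, n_iter=10, random_state=0)
h1 = rbm1.fit_transform(X)
rbm2 = BernoulliRBM(n_components=16, learning_rate=0.05, n_iter=10, random_state=0)
h2 = rbm2.fit_transform(h1)

# Supervised stage: train a classifier on the learned features.
clf = LogisticRegression(max_iter=1000).fit(h2, y)
print("train accuracy:", clf.score(h2, y))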
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 12/11/2020 04:40:29
The video shows what the future will look like. It's a step closer toward building a virtual universe.
Title: Re: How close are we from building a virtual universe?
Post by: Xeon on 12/11/2020 10:23:13
What law says that our memories are stored in our minds? How do we know that we are not just accessing a mainframe server and that we are no more than confused bots?
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 13/11/2020 09:28:18
What law says that our memories are stored in our minds? How do we know that we are not just accessing a mainframe server and that we are no more than confused bots?

There is no such law, AFAIK. But here is what we know.
Descartes pointed out that the only self-evident information a conscious agent can get is its own existence. Any other information requires corroborating evidence to support it. So in the end, the reliability of a piece of information will be measured by its ability to help preserve conscious agents.
If two or more hypotheses are equally capable of explaining observations, Occam's razor suggests we choose the simplest one. I've asserted in another thread that efficiency is a universal instrumental goal.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 21/11/2020 06:26:10
Brains contain a compressed and partial version of a virtual universe in the form of neurons and neural connection states. Object counting is part of extracting information from the raw data coming in through sensory organs. This video tells us how brains count.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 23/11/2020 11:19:34
Here is a recent progress toward building a virtual universe.
https://singularityhub.com/2020/11/22/the-trillion-transistor-chip-that-just-left-a-supercomputer-in-the-dust/
Quote
The trial was described in a preprint paper written by a team led by Cerebras’s Michael James and NETL’s Dirk Van Essendelft and presented at the supercomputing conference SC20 this week. The team said the CS-1 completed a simulation of combustion in a power plant roughly 200 times faster than it took the Joule 2.0 supercomputer to do a similar task.

The CS-1 was actually faster than real time. As Cerebras wrote in a blog post, “It can tell you what is going to happen in the future faster than the laws of physics produce the same result.”
Quote
Cut the Commute
Computer chips begin life on a big piece of silicon called a wafer. Multiple chips are etched onto the same wafer and then the wafer is cut into individual chips. While the WSE is also etched onto a silicon wafer, the wafer is left intact as a single, operating unit. This wafer-scale chip contains almost 400,000 processing cores. Each core is connected to its own dedicated memory and its four neighboring cores.

Putting that many cores on a single chip and giving them their own memory is why the WSE is bigger; it’s also why, in this case, it’s better.

Most large-scale computing tasks depend on massively parallel processing. Researchers distribute the task among hundreds or thousands of chips. The chips need to work in concert, so they’re in constant communication, shuttling information back and forth. A similar process takes place within each chip, as information moves between processor cores, which are doing the calculations, and shared memory to store the results.
Quote
Simulating the World as It Unfolds
It’s worth noting the chip can only handle problems small enough to fit on the wafer. But such problems may have quite practical applications because of the machine’s ability to do high-fidelity simulation in real-time. The authors note, for example, the machine should in theory be able to accurately simulate the air flow around a helicopter trying to land on a flight deck and semi-automate the process—something not possible with traditional chips.
Title: Re: How close are we from building a virtual universe?
Post by: alancalverd on 23/11/2020 12:30:24
The problem throughout is that you are trying to define the solution without defining the problem. My artificial horizon is an adequate virtual universe if the problem is to keep the plane flying straight and level with no visual reference. The GPS moving map adds just enough data if I want to get somewhere, and the ILS gives me a virtual beeline to the runway threshold. Each of these solutions began with a clear statement of the problem.

The joy of full autopilot was demonstrated by a couple of 737 fatal incidents in recent memory. It's OK until it goes wrong and crashes you precisely on the runway centerline, unlike the human who is generally "good enough" to land somewhere (like the middle of the Hudson river) without breaking too much. I've just completed a paper exercise where the radio died in fog at night. The automatic answer is to follow a published instrument approach on enhanced GPS and autopilot, which will take you to your destination within +/- a couple of feet. Problem is that you don't know who else is on that track, so the more closely you follow it, the more likely you are to collide or cause panic. The human answer is to assume that everyone else is on track and avoid it by a mile laterally and 1000 ft vertically until the last possible moment.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 23/11/2020 15:43:50
The problem throughout is that you are trying to define the solution without defining the problem. My artificial horizon is an adequate virtual universe if the problem is to keep the plane flying straight and level with no visual reference. The GPS moving map adds just enough data if I want to get somewhere, and the ILS gives me a virtual beeline to the runway threshold. Each of these solutions began with a clear statement of the problem.



I've stated the problem in another thread, which is to reduce the risk of existential threats to conscious beings to zero. Building an accurate and precise virtual universe is one method to achieve that goal, by reducing uncertainty and helping us make decisions effectively and efficiently.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 24/11/2020 12:04:55
By building a virtual universe, we can carry out trial and error more efficiently. That's essentially what AlphaGo and AlphaZero have done to master the game.

Quote
Learn from the mistakes of others, you can never live long enough to make them all yourself.
Groucho Marx   
With an accurate and precise virtual universe, a vaccine for a newly emerging virus could be developed in minutes instead of years.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 24/11/2020 15:05:38
AGI is the last invention humans ever need to make.
https://www.nytimes.com/2020/11/24/science/artificial-intelligence-ai-gpt3.html
Quote
   Meet GPT-3. It Has Learned To Code (and Blog and Argue).
The latest natural-language system generates tweets, pens poetry, summarizes emails, answers trivia questions, translates languages and even writes its own computer programs.
Quote
  This summer, an artificial intelligence lab in San Francisco called OpenAI unveiled a technology several months in the making. This new system, GPT-3, had spent those months learning the ins and outs of natural language by analyzing thousands of digital books, the length and breadth of Wikipedia, and nearly a trillion words posted to blogs, social media and the rest of the internet.

Mckay Wrigley, a 23-year-old computer programmer from Salt Lake City, was one of the few invited to tinker with the system, which uses everything it has learned from that vast sea of digital text to generate new language on its own. Mr. Wrigley wondered if it could imitate public figures — write like them, perhaps even chat like them. 
Quote
  One of his experiments involved a pop psychologist, Scott Barry Kaufman. The system took in Mr. Kaufman’s name and a topic for discussion: creativity. Then, when asked “How do we become more creative?” GPT-3 responded instantly:

I think creative expression is a natural byproduct of growing up in a diverse world. The more diverse the world is, the more you get exposed to different people, to different opportunities, to different places and to different challenges. And the more diverse that is, the more likely you’ll be to be able to put the dots together to form something new. And in many ways, I think if you want to be creative, you have to go for it. If you want to be a writer, you have to write, if you want to be a musician, you have to create music, if you want to be a comedian, you have to create comedy, and the more you create, the more likely it is that you’ll stumble onto some interesting stuff.

Later, when Mr. Wrigley posted the paragraph on Twitter, somebody looped in the real Scott Barry Kaufman. He was stunned. “It definitely sounds like something I would say,” the real Mr. Kaufman tweeted, later adding, “Crazy accurate A.I.” 
The rapid advancement of AI, with its exponential growth, seems to hint that the singularity is near.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 25/11/2020 03:27:16
Objective reality contains a lot of objects with complex relationships among them. Hence, to build a virtual universe we must use a method capable of storing data that represents such a complex system. The obvious choice is graphs, which are mathematical structures used to model pairwise relations between objects. A graph in this context is made up of vertices (also called nodes or points) which are connected by edges (also called links or lines).
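As a minimal sketch (my own toy example, with made-up entities), such a graph can be stored as an adjacency list mapping each vertex to its labeled edges:

Code:
# Tiny knowledge-graph-style store: nodes connected by labeled edges.
graph = {}

def add_edge(subject, relation, obj):
    """Record a directed, labeled edge: subject --relation--> object."""
    graph.setdefault(subject, []).append((relation, obj))

add_edge("Earth", "orbits", "Sun")
add_edge("Moon", "orbits", "Earth")
add_edge("Sun", "is_a", "star")

def neighbors(node):
    """All (relation, object) pairs leaving `node`."""
    return graph.get(node, [])

print(neighbors("Earth"))  # [('orbits', 'Sun')]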

The virtual universe that I described previously is similar to the knowledge graphs described in the article below.

https://www.zdnet.com/article/rebooting-ai-deep-learning-meet-knowledge-graphs/
Quote
Rebooting AI: Deep learning, meet knowledge graphs
Gary Marcus, a prominent figure in AI, is on a mission to instill a breath of fresh air to a discipline he sees as in danger of stagnating. Knowledge graphs, the 20-year old hype, may have something to offer there.

"This is what we need to do. It's not popular right now, but this is why the stuff that is popular isn't working." That's a gross oversimplification of what scientist, best-selling author, and entrepreneur Gary Marcus has been saying for a number of years now, but at least it's one made by himself.

The "popular stuff which is not working" part refers to deep learning, and the "what we need to do" part refers to a more holistic approach to AI. Marcus is not short of ambition; he is set on nothing else but rebooting AI. He is not short of qualifications either. He has been working on figuring out the nature of intelligence, artificial or otherwise, more or less since his childhood.

Questioning deep learning may sound controversial, considering deep learning is seen as the most successful sub-domain in AI at the moment. Marcus on his part has been consistent in his critique. He has published work that highlights how deep learning fails, exemplified by language models such as GPT-2, Meena, and GPT-3.
Quote
Deep learning, meet knowledge graphs
When asked if he thinks knowledge graphs can have a role in the hybrid approach he advocates for, Marcus was positive. One way to think about it, he said, is that there is an enormous amount of knowledge that's represented on the Internet that's available essentially for free, and is not being leveraged by current AI systems. However, much of that knowledge is problematic:

"Most of the world's knowledge is imperfect in some way or another. But there's an enormous amount of knowledge that, say, a bright 10-year-old can just pick up for free, and we should have RDF be able to do that.

Some examples are, first of all, Wikipedia, which says so much about how the world works. And if you have the kind of brain that a human does, you can read it and learn a lot from it. If you're a deep learning system, you can't get anything out of that at all, or hardly anything.

Wikipedia is the stuff that's on the front of the house. On the back of the house are things like the semantic web that label web pages for other machines to use. There's all kinds of knowledge there, too. It's also being left on the floor by current approaches.

The kinds of computers that we are dreaming of that can help us to, for example, put together medical literature or develop new technologies are going to have to be able to read that stuff.

We're going to have to get to AI systems that can use the collective human knowledge that's expressed in language form and not just as a spreadsheet in order to really advance, in order to make the most sophisticated systems."
(https://zdnet3.cbsistatic.com/hub/i/2017/05/01/ce8926a1-9a41-42b6-9bd0-92df1b4171f6/deeplearningiconsr5png-jpg.png)
There is more to AI than Machine Learning, and there is more to Machine Learning than deep learning. Gary Marcus is arguing for a hybrid approach to AI, reconnecting it with its roots. Image: Nvidia
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 04/12/2020 22:44:55
Compared to my other threads discussing the universal terminal goal, this one seems to be underdeveloped. To complement my own thoughts, I'll just drop some of the latest important research in the field of artificial intelligence, like this one.
Inductive Biases for Deep Learning of Higher-Level Cognition
Anirudh Goyal, Yoshua Bengio
Quote
A fascinating hypothesis is that human and animal intelligence could be explained by a few principles (rather than an encyclopedic list of heuristics). If that hypothesis was correct, we could more easily both understand our own intelligence and build intelligent machines. Just like in physics, the principles themselves would not be sufficient to predict the behavior of complex systems like brains, and substantial computation might be needed to simulate human-like intelligence. This hypothesis would suggest that studying the kind of inductive biases that humans and animals exploit could help both clarify these principles and provide inspiration for AI research and neuroscience theories. Deep learning already exploits several key inductive biases, and this work considers a larger list, focusing on those which concern mostly higher-level and sequential conscious processing. The objective of clarifying these particular principles is that they could potentially help us build AI systems benefiting from humans' abilities in terms of flexible out-of-distribution and systematic generalization, which is currently an area where a large gap exists between state-of-the-art machine learning and human intelligence.   
https://arxiv.org/abs/2011.15091?s=03
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 07/12/2020 09:22:25
Here is another great video covering current development of AI.
Timestamps for this video:
00:00 Introduction
00:45 Humanity's Next Chapter
03:38 Cathie Wood Discusses AlphaFold
08:40 Elon Musk's Dire Warning
10:20 Netflix Recommends Your Doom
14:09 Detecting Cats and Dogs
15:45 ARK's James Wang on Deep Learning
17:09 The Singularity is Near
Title: Re: How close are we from building a virtual universe?
Post by: syhprum on 07/12/2020 13:23:57
Just a minor quibble: why do correspondents write 1021 when they mean 10 to the power of 21?
There is a perfectly good abbreviation, "^", to indicate a power on all the keyboards that I have used, but maybe the articles are written on a pocket device that lacks it.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 08/12/2020 02:45:13
Just a minor quibble: why do correspondents write 1021 when they mean 10 to the power of 21?
There is a perfectly good abbreviation, "^", to indicate a power on all the keyboards that I have used, but maybe the articles are written on a pocket device that lacks it.
Perhaps the simplest explanation is a typo: the key wasn't pressed hard enough to be sensed by the keyboard. There is no autocorrect for this kind of error that I know of.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 08/12/2020 03:01:27
This article just came into my mailbox, and I'd like to share it here since it's closely related to the topic.
Quote
NEWSLETTER ON LINKEDIN
Artificial Intelligence (AI)
 By Bernard Marr


Future Trends And Technology – Insights from Ericsson
 
Innovation and new thought is what makes the world go round. Behind all the ground-breaking technologies such as AI and automation are human minds that are willing to push boundaries and think differently about solving problems, in both business and society.
Investing in true innovation – how to use technology to do different things, as opposed to just doing things differently – has led to sweeping changes in how we communicate, work together, play and look after our health in recent years. In particular, it has allowed businesses and organizations to get closer to their most important asset – the people who use or consume their services – than ever before. This is thanks to the ever-smarter ways in which we are capturing data and using it to overcome challenges, from understanding customer behavior to creating vaccines.
I was fortunate enough to get the chance to talk to two people who are working on this cutting-edge – Jasmeet Sethi and Cristina Pandrea, of Ericsson's ConsumerLab. This is the division within Ericsson responsible for research into current and emerging trends – with a specific focus on how they are being used in the real world today, and what that might mean for tomorrow.
During our conversation, we touched on five key trends that have been identified by the ConsumerLab, which has been collecting and analyzing data on how people interact with technology for more than 20 years. One thing they all have in common is that every one of them has come into its own during the current global pandemic. This is usually for one of two reasons – either because necessity has driven a rapid increase in the pace of adoption, or because they provide a new approach to tackling problems society is currently facing.
Let's look at each of the five trends in turn.
1. Resilient networks
In 2020, more than ever before, we've been dependent on the stability and security of IT systems and networks to keep the world running. Alongside the importance of uptime and core stability in allowing businesses to switch to work-from-home models, it's been shown that cyber attacks have increased dramatically during the pandemic, meaning security is more vital than ever before.
Many of the international efforts to trace the spread of the disease, understand people's behavior in pandemic situations, and to develop vaccines and cures are dependent on the transfer of huge volumes of digital data. Ericsson believes that the amount of data transferred has increased by 40% over mobile networks and 70% over wired broadband networks since the start of the pandemic. So ensuring that infrastructure is reliable and secure has never been so important. The fact that network operators have largely been successful at this hasn't gone unnoticed, Sethi tells me – with customers thanking them with a noticeably higher level of loyalty.
2. Tele-health
Medical consultation, check-ups, examinations, and even diagnoses were increasingly being carried out remotely, even pre-covid, particularly in remote regions or areas where there is a shortage of clinical staff. However, during 2019 they made up just 19% of US healthcare contacts. Ericsson's research has shown that this increased to around 46% during 2020. This is clearly an example of a trend where the pandemic accelerated a change that was already happening. So it's likely that providers will be keen to carry on receiving the benefits they've generated, as we eventually move into a post-covid world.
Here a key challenge comes from the fact that a number of different technologies need to be working together in harmony to ensure patient care doesn't suffer, from video streaming to cloud application platforms and network security protocols. 
3. Borderless workplaces
We saw the impossible happen in 2020 as thousands of organizations mobilized to make remote working possible for their workforces in a very short period of time. But this trend goes beyond "eternal WFH" and points to a future where we have greater flexibility and freedom over where we spend our working hours. Collaborative workplace tools like Zoom and Slack meant the switchover was often relatively hassle-free, and next-generation tools will cater for a future where employees can carry out their duties from anywhere, rather than just stuck at their kitchen tables.
But this shift in social norms brings other problems, such as the danger of isolation, the difficulty of striking a balance between home and work life, or a diminished ability to build a culture within an organization. Solutions in this field look to tackle these challenges, too, rather than simply give us more ways to be connected to the office 24/7.
4. The Experience / Immersive Economy
Touching on issues raised by the previous trend, Ericsson has experimented with providing employees with virtual reality headsets, to make collaborative working more immersive. Pandrea described the benefits of this to me – "The experience was really genuine, it took us by surprise … we'd seen virtual reality before, but this was the first time where we saw 25 people in the same virtual room, having this experience … when you see the others as avatars you get the feeling of being together, it makes a world of difference."
This trend involves creating experiences that mean as little as possible is lost when you move an interaction or event from the real world to the virtual world. Virtual and augmented reality have an important role here, but Sethi points beyond this to an idea he calls the "internet of senses," where devices can feed information to us through all of our five senses. Breakthrough technologies such as the Teslasuit use haptic feedback to greatly increase the feeling of presence in virtual spaces, and is used by NASA to train astronauts. Other innovators in this field are working on including our sense of smell, by dispensing fragrances from headset attachments.
Another interesting change related to this field that's been predicted is the rise in the value put on virtual commodities and status versus material goods. Children these days are just as likely to talk boastfully about a rare Fortnite skin, Rocket League car, or Roblox pet as they would about any physical product or status symbol. "If you look at young millionaires they're already driven by virtual status – who has the best status in esports, the number of followers … this trend will be accelerated as we move into the virtual experience economy", Sethi predicts.
5. Autonomous Commerce
Two massive changes to the way we live our lives due to the pandemic have been a big acceleration in the uptake of online retail, and a move away from cash towards contactless payment methods. Cashiers were already being replaced by self-checkouts at a rapid pace pre-2020. But the pickup in speed this year brings us to a point where KFC is operating fully autonomous mobile food trucks in Shanghai. The trucks pilot themselves to customers and serve up socially-distanced meals with no human involvement.
The rush to keep up with changing consumer behavior has also sped up the adoption of cash-free and contactless retail, particularly in emerging markets where cash has traditionally been king. Financial services businesses tapping into technology like 5G networking and AI-powered fraud detection tools are responding to new expectations from customers in this field and, if they are able to predict that behavior accurately, are likely to see strong growth in coming years.
Investing in innovation
Remaining on the cutting-edge of these trends means investing strategically in new ideas and innovation. So we also talked about Ericsson's Startup 5G program, which Pandrea heads up. Here the business looks to be at the head of the pack when it comes to creating the $31 trillion in revenue that it predicts will be generated by 5G platforms and services before 2030.
Pandrea tells me that it is expected that a lot of this will come from services that telcos can bundle with their 5G offerings to help make their customers' lives better. One of the star players is XR Space, which is building a social VR platform using its own hardware that could effectively allow workers to take their office (and entertainment world) with them anywhere they go.
Another is London-based Inception XR, that enables AR experiences to be created from books to help create more immersion and gamification in children's education.
And a third that Pandrea recommends keeping an eye on for a glimpse of the future is PlaySight. It uses AI-powered 360-degree 8k cameras at sports or entertainment events, capable of capturing the action in greater detail than ever before. That data can then be delivered to an audience in any number of ways, including putting them inside VR experiences that let them view from any angle as well as pause and rewind what they are seeing.
Underlying technologies
Clearly, we can see the common threads of broader tech trends that run through these very relevant trends Ericsson is identifying today. AI technologies, as well as extended reality (XR), which includes VR, AR, and mixed reality (MR), are behind the tools that secure our networks, enable us to work efficiently from anywhere, receive remote healthcare, create immersive experiences and conduct autonomous commerce. High-speed networking is essential to every one of them too, and the quantum leap in upload and download speeds of 5G is necessary to make them all possible.
And it's certainly also true that much of the technological progress that is driving real change in business, commerce, society and entertainment has happened in response to the dark times we are living through. But as we start to cautiously look ahead to hopefully brighter days, these trends will go on to play a part in building a safer, smarter and more convenient future. 




Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 08/12/2020 03:22:20
These videos explain about new technology adoption.

What's the S adoption curve mean for disruptive technology like Tesla? What about a double S curve?!
Quote
Wherein Dr. Know-it-all explains what an "S" adoption curve is, how it has functioned historically for technology like automobiles/cars, the internet, cell phones, and even smart phones. And how it matters a great deal for Tesla and other EV companies who are currently disrupting internal combustion engine (ICE) car manufacturers. Also, what happens when the EV adoption curve lines up with the full self driving (FSD) adoption curve?? Watch and find out!
Quote
By the by, as folks have pointed out, and I probably should've noted in the video itself, Tony Seba has been talking about "the tipping point" for years. While I was inspired to work up this video from a Patreon patron, and I don't closely follow Seba, I should have acknowledged that a lot of this is derived from Tony's brilliant ideas over the years. One such video is here:
Tony Seba's conclusion at the end of his video, that technological disruption will happen mainly for economic reasons and not necessarily due to government interference, is aligned with my idea that efficiency is a universal instrumental goal.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 12/12/2020 04:16:36
Currently, one of the most rapid adoptions of some form of virtual universe is in the field of self-driving cars. These videos explain it well.

Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 12/12/2020 04:21:23
Tesla's Dojo is clearly aligned with the goal stated in this thread. The next step is clearly to generalize it so it can be applied to more kinds of problems.
An earlier effort that I've tried was Microsoft Flight Simulator. Perhaps it was used by the 9/11 perpetrators. That's why I think the morality problem of AI users needs to be solved objectively, which I discuss in another thread.
With more powerful AI and a more accurate and precise virtual universe, the user's goal can be achieved more easily, including harmful ones. A universal terminal goal is then necessary to distinguish between good and bad goals or intentions.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 12/12/2020 06:03:38
From this short video we can infer that an accurate virtual universe can increase efficiency and reduce cost.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 15/12/2020 06:41:00
I'd like to share a report from a software company which converges with my idea presented here. Extrapolated further, we will eventually have to deal with a universal terminal goal, which I discussed in another thread.
 
Top 8 trends shaping digital transformation in 2021
Quote
IT’s role is more critical than ever in a world that’s increasingly dependent on digital. Organizations are under increasing pressure to stay competitive and create connected experiences. According to our Connectivity benchmark report, IT projects are projected to grow by 40%; and 82% of businesses are now holding their IT teams accountable for delivering connected customer experiences.
To meet these rising demands, organizations are accelerating their digital transformation — which can be defined as the wholesale move to thinking about how to digitize every part of the business, as every part of the business now needs technology to operate. In order to drive scale and efficiency, IT must rethink its operating model to deliver self-serve capabilities and enable innovation across the enterprise.
In this report, we will highlight some of the top trends facing CIOs, IT leaders, and organizations in their digital transformation journey, sourcing data from both MuleSoft proprietary research and third-party findings.
Quote
The future of automation: declarative programming
Uri Sarid, CTO, MuleSoft
“The mounting complexity brought on by an explosion of co-dependent systems, dynamic data, and rising expectations demands a new approach to software. More is expected for software to just work automatically, and more of us expect automation of our digital life and work. In 2021, we’ll see more and more systems be intent-based, and see a new programming model take hold: a declarative one. In this model, we declare an intent — a desired goal or end state — and the software systems connected via APIs in an application network autonomously figure out how to simply make it so.”
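To illustrate the declarative idea (my own toy sketch, not MuleSoft's technology), the program states only the desired end state, and a reconciler derives the corrective actions:

Code:
# Declarative sketch: declare the intent; a reconciler figures out the steps.
# The "system" state and its fields are hypothetical, for illustration only.
desired = {"replicas": 3, "version": "2.0"}
actual = {"replicas": 1, "version": "1.9"}

def reconcile(actual, desired):
    """Compare actual state with declared intent and emit corrective actions."""
    actions = []
    for key, want in desired.items():
        have = actual.get(key)
        if have != want:
            actions.append(f"set {key}: {have} -> {want}")
    return actions

for action in reconcile(actual, desired):
    print(action)
# set replicas: 1 -> 3
# set version: 1.9 -> 2.0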

Quote
2021 will be the year that data separates organizations from their competitors... and customers
Lindsey Irvine, CMO, MuleSoft
“The reality is that the majority of businesses today, across all industries, aren’t able to deliver truly connected experiences for their customers, partners, and employees — and that’s because delivering connected experiences requires a lot of data, which lives in an average of 900 different systems and applications across the enterprise. Integrating and unifying data across these systems is critical to create a single view of the customer and achieve true digital transformation.
“It’s also the number one reason digital transformation initiatives fail. As the amount of systems and applications continue to grow exponentially, teams realize that key to their success — and their organization’s success — is unlocking the data, wherever it exists, in a way that helps them deliver value faster.”
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 17/12/2020 03:09:19
Building an accurate and precise virtual universe requires a sound and robust scientific method.

https://physicsworld.com/a/madness-in-the-method-why-your-notions-of-how-science-works-are-probably-wrong/

Quote
You know what the scientific method is until you try to define it: it’s a set of rules that scientists adopt to obtain a special kind of knowledge. The list is orderly, teachable and straightforward, at least in principle. But once you start spelling out the rules, you realize that they really don’t capture how scientists work, which is a lot messier. In fact, the rules exclude much of what you’d call science, and includes even more of what you don’t. You even begin to wonder why anyone thought it necessary to specify a “scientific method” at all.

In his new book The Scientific Method: an Evolution of Thinking from Darwin to Dewey, the University of Michigan historian Henry Cowles explains why some people thought it necessary to define “scientific method” in the first place. Once upon a time, he writes, science meant something like knowledge itself – the facts we discover about the world rather than the sometimes unruly way we got them. Over time, however, science came to mean a particular stepwise way that we obtain those facts independent of the humans who follow the method, and independent of the facts themselves.
Quote
Just as nature takes alternative forms of life and selects among them, Darwin argued, so scientists take hypotheses and choose the most robust. Nature has its own “method”, and humans acquire knowledge in an analogous way. Darwin’s scientific work on living creatures is indeed rigorous, as I think contemporary readers will agree, but in the lens of our notions of scientific method it was hopelessly anecdotal, psychological and disorganized. He was, after all, less focused on justifying his beliefs than on understanding nature.
Quote
Following Darwin, the American “pragmatists” – 19th-century philosophers such as Charles Peirce and William James – developed more refined accounts of the scientific method that meshed with their philosophical concerns. For Peirce and James, beliefs were not mental judgements or acts of faith, but habits that individuals develop through long experience. Beliefs are principles of action that are constantly tested against the world, reshaped and tested again, in an endless process. The scientific method is simply a careful characterization of this process.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 18/12/2020 12:52:41
Here is another video describing the latest progress in building a virtual universe. This time it's about the microscopic universe, which is nevertheless extremely important for living organisms.
Quote
This is Biology's AlexNet moment! DeepMind solves a 50-year old problem in Protein Folding Prediction. AlphaFold 2 improves over DeepMind's 2018 AlphaFold system with a new architecture and massively outperforms all competition. In this Video, we take a look at how AlphaFold 1 works and what we can gather about AlphaFold 2 from the little information that's out there.

OUTLINE:
0:00 - Intro & Overview
3:10 - Proteins & Protein Folding
14:20 - AlphaFold 1 Overview
18:20 - Optimizing a differentiable geometric model at inference
25:40 - Learning the Spatial Graph Distance Matrix
31:20 - Multiple Sequence Alignment of Evolutionarily Similar Sequences
39:40 - Distance Matrix Output Results
43:45 - Guessing AlphaFold 2 (it's Transformers)
53:30 - Conclusion & Comments

AlphaFold 2 Blog: https://deepmind.com/blog/article/alp...
AlphaFold 1 Blog: https://deepmind.com/blog/article/Alp...
AlphaFold 1 Paper: https://www.nature.com/articles/s4158...
MSA Reference: https://arxiv.org/abs/1211.1281
CASP14 Challenge: https://predictioncenter.org/casp14/i...
CASP14 Result Bar Chart: https://www.predictioncenter.org/casp...

Paper Title: High Accuracy Protein Structure Prediction Using Deep Learning

Abstract:
Proteins are essential to life, supporting practically all its functions. They are large complex molecules, made up of chains of amino acids, and what a protein does largely depends on its unique 3D structure. Figuring out what shapes proteins fold into is known as the “protein folding problem”, and has stood as a grand challenge in biology for the past 50 years. In a major scientific advance, the latest version of our AI system AlphaFold has been recognised as a solution to this grand challenge by the organisers of the biennial Critical Assessment of protein Structure Prediction (CASP). This breakthrough demonstrates the impact AI can have on scientific discovery and its potential to dramatically accelerate progress in some of the most fundamental fields that explain and shape our world.

Authors: John Jumper, Richard Evans, Alexander Pritzel, Tim Green, Michael Figurnov, Kathryn Tunyasuvunakool, Olaf Ronneberger, Russ Bates, Augustin Žídek, Alex Bridgland, Clemens Meyer, Simon A A Kohl, Anna Potapenko, Andrew J Ballard, Andrew Cowie, Bernardino Romera-Paredes, Stanislav Nikolov, Rishub Jain, Jonas Adler, Trevor Back, Stig Petersen, David Reiman, Martin Steinegger, Michalina Pacholska, David Silver, Oriol Vinyals, Andrew W Senior, Koray Kavukcuoglu, Pushmeet Kohli, Demis Hassabis.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 25/12/2020 08:30:14
Here is the newest progress toward the generalization of artificial intelligence by DeepMind.
https://www.nature.com/articles/s41586-020-03051-4
Mastering Atari, Go, chess and shogi by planning with a learned model
Quote
Constructing agents with planning capabilities has long been one of the main challenges in the pursuit of artificial intelligence. Tree-based planning methods have enjoyed huge success in challenging domains, such as chess1 and Go2, where a perfect simulator is available. However, in real-world problems, the dynamics governing the environment are often complex and unknown. Here we present the MuZero algorithm, which, by combining a tree-based search with a learned model, achieves superhuman performance in a range of challenging and visually complex domains, without any knowledge of their underlying dynamics. The MuZero algorithm learns an iterable model that produces predictions relevant to planning: the action-selection policy, the value function and the reward. When evaluated on 57 different Atari games3—the canonical video game environment for testing artificial intelligence techniques, in which model-based planning approaches have historically struggled4—the MuZero algorithm achieved state-of-the-art performance. When evaluated on Go, chess and shogi—canonical environments for high-performance planning—the MuZero algorithm matched, without any knowledge of the game dynamics, the superhuman performance of the AlphaZero algorithm5 that was supplied with the rules of the game.
Quote
MuZero is trained only on data generated by MuZero itself; no external data were used to produce the results presented in the article. Data for all figures and tables presented are available in JSON format in the Supplementary Information.
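To make the abstract more concrete, here is a bare-bones skeleton (my own paraphrase of the paper's structure, not DeepMind's code) of the three learned functions MuZero plans with; in the real system these are deep networks trained end-to-end, and the search expands this one-step lookahead into a full tree.

Code:
# Conceptual skeleton of MuZero's learned model (stubs, not DeepMind's code).

def representation(observation):
    """h: map a raw observation to a hidden state."""
    return ("state", observation)

def dynamics(state, action):
    """g: predict the next hidden state and immediate reward of an action."""
    return (("state", (state, action)), 0.0)

def prediction(state):
    """f: predict the action-selection policy and value of a hidden state."""
    return {"a0": 0.5, "a1": 0.5}, 0.0

# One step of lookahead inside the learned model: no game rules needed.
s = representation(observation="pixels")
policy, value = prediction(s)
for action in policy:
    s_next, reward = dynamics(s, action)
    _, v_next = prediction(s_next)
    print(action, reward + v_next)  # a tree search would expand this recursively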
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 25/12/2020 09:22:45
The Great Google Crash: The World’s Dependency Revealed

We long for the day when nobody runs anything.
Todd Underwood - Google SRE
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 28/12/2020 03:56:49
Sooner or later, people will realize that we are making progress toward building an accurate virtual universe. Unless, of course, we go extinct beforehand.
https://twitter.com/elonmusk/status/1343002225916841985?s=03
Quote
Vaccines are just the start. It's also capable in theory of curing almost anything. Turns medicine into a software & simulation problem.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 28/12/2020 04:02:56
Here is the article Elon Musk was tweeting about.
https://berthub.eu/articles/posts/reverse-engineering-source-code-of-the-biontech-pfizer-vaccine/
Quote
Welcome! In this post, we’ll be taking a character-by-character look at the source code of the BioNTech/Pfizer SARS-CoV-2 mRNA vaccine.
Now, these words may be somewhat jarring - the vaccine is a liquid that gets injected in your arm. How can we talk about source code?

This is a good question, so let’s start off with a small part of the very source code of the BioNTech/Pfizer vaccine, also known as BNT162b2, also known as Tozinameran also known as Comirnaty.

(https://berthub.eu/articles/bnt162b2.png)
First 500 characters of the BNT162b2 mRNA. Source: World Health Organization

The BNT162b2 mRNA vaccine has this digital code at its heart. It is 4284 characters long, so it would fit in a bunch of tweets. At the very beginning of the vaccine production process, someone uploaded this code to a DNA printer (yes), which then converted the bytes on disk to actual DNA molecules.
(https://berthub.eu/articles/bioxp-3200.jpg)
A Codex DNA BioXp 3200 DNA printer

Out of such a machine come tiny amounts of DNA, which after a lot of biological and chemical processing end up as RNA (more about which later) in the vaccine vial. A 30 microgram dose turns out to actually contain 30 micrograms of RNA. In addition, there is a clever lipid (fatty) packaging system that gets the mRNA into our cells.

RNA is the volatile ‘working memory’ version of DNA. DNA is like the flash drive storage of biology. DNA is very durable, internally redundant and very reliable. But much like computers do not execute code directly from a flash drive, before something happens, code gets copied to a faster, more versatile yet far more fragile system.

For computers, this is RAM, for biology it is RNA. The resemblance is striking. Unlike flash memory, RAM degrades very quickly unless lovingly tended to. The reason the Pfizer/BioNTech mRNA vaccine must be stored in the deepest of deep freezers is the same: RNA is a fragile flower.

Each RNA character weighs on the order of 0.53·10⁻²¹ grams, meaning there are 6·10¹⁶ characters in a single 30 microgram vaccine dose. Expressed in bytes, this is around 25 petabytes, although it must be said this consists of around 2000 billion repetitions of the same 4284 characters. The actual informational content of the vaccine is just over a kilobyte. SARS-CoV-2 itself weighs in at around 7.5 kilobytes.
And the summary is below.
Quote
Summarising
With this, we now know the exact mRNA contents of the BNT162b2 vaccine, and for most parts we understand why they are there:

- The CAP to make sure the RNA looks like regular mRNA
- A known successful and optimized 5’ untranslated region (UTR)
- A codon optimized signal peptide to send the Spike protein to the right place (copied 100% from the original virus)
- A codon optimized version of the original spike, with two ‘Proline’ substitutions to make sure the protein appears in the right form
- A known successful and optimized 3’ untranslated region
- A slightly mysterious poly-A tail with an unexplained ‘linker’ in there

The codon optimization adds a lot of G and C to the mRNA. Meanwhile, using Ψ (1-methyl-3’-pseudouridylyl) instead of U helps evade our immune system, so the mRNA stays around long enough so we can actually help train the immune system.
You can read the details at the link above; it's fascinating.
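The article's arithmetic is easy to reproduce, by the way (my own rough check of the quoted numbers):

Code:
# Back-of-envelope check of the article's figures.
char_mass_g = 0.53e-21          # grams per RNA character, from the article
dose_g = 30e-6                  # a 30 microgram dose
print(f"{dose_g / char_mass_g:.1e} characters per dose")  # ~5.7e16, order 10**16

# Informational content: 4 possible characters (A, C, G, U) = 2 bits each.
length = 4284                   # characters in the BNT162b2 mRNA
print(f"{length * 2 / 8:.0f} bytes")  # ~1071 bytes: 'just over a kilobyte'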
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 02/01/2021 10:37:19
Quote
In this video Elon Musk talks remotely about Tesla's Full Self-Driving software at a Chinese AI conference. Elon predicts that Tesla will achieve level 5 autonomy soon, sooner than people can imagine. Elon also indirectly criticizes Waymo, Google's self-driving company. Waymo depends on LiDAR and HD maps; most of the time, they train their self-driving software and cars in simulation.

In this video he emphasizes that understanding reality is essentially a data compression process. I've mentioned this previously in this thread.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 12/01/2021 02:53:06
Here are very informative videos explaining how Tesla autopilot was developed.



Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 12/01/2021 03:30:15
Some main points I get from the videos are:
- Autopilot builds a virtual universe in its memory space to represent its surrounding environment based on data input from its sensors.
- Modular concepts are employed to increase efficiency, so many things don't have to start from scratch every time a new feature is added.
- Building the virtual universe is done in real time, which means a lot of new data is acquired and a lot of older data must be discarded. To make the system work, it must compress the incoming data into meaningful and useful concepts, after filtering out noise and insignificant information.
- This data selection requires a data hierarchy, like the deep belief network I mentioned earlier. Higher-level information (beliefs) determines which data from lower-level belief nodes is kept and used, or discarded and ignored. It's similar to how the human brain works. That's why we sometimes find it hard to convince people by simply presenting facts that contradict their existing belief system, such as flat earthers, the MAGA crowd, or religious fanatics.
- The automation process is itself being automated, up through several levels of automation. We are building machines that build machines that build machines, and so on, which Ray Kurzweil calls indirection. And those machines are getting better at achieving the goals put into them. That's why it's getting more urgent for us to find a universal terminal goal, as I discuss in another thread.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 14/01/2021 05:59:48
Quote
Wherein Dr. Know-it-all discusses the work of Dr. Arthur Choi (UCLA) and others concerning the quest to understand how deep convolutional neural networks function. This new field, XAI, or explainable AI, uses decision trees, formal logic, and even tractable boolean circuits (simulated logic gates) to explain why machine learning using deep neural nets functions so well some of the time, but so poorly other times.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 19/01/2021 22:32:59
Last year may have severed our connections with the physical world, but in the digital realm, AI thrived. Take NeurIPS, the crown jewel of AI conferences. While lacking the usual backdrop of the dazzling mountains of British Columbia or the beaches of Barcelona, the annual AI extravaganza highlighted a slew of “big picture” problems—bias, robustness, generalization—that will occupy the field for years to come.

On the nerdier side, scientists further explored the intersection between AI and our own bodies. Core concepts in deep learning, such as backpropagation, were considered a plausible means by which our brains “assign fault” in biological networks—allowing the brain to learn. Others argued it’s high time to double-team intelligence, combining the reigning AI “golden child” method—deep learning—with other methods, such as those that guide efficient search.

Here are four areas we’re keeping our eyes on in 2021. They touch upon outstanding AI problems, such as reducing energy consumption, nixing the need for exuberant learning examples, and teaching AI some good ole’ common sense.

https://singularityhub.com/2021/01/05/2021-could-be-a-banner-year-for-ai-if-we-solve-these-4-problems/
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 23/01/2021 03:27:21
https://towardsdatascience.com/introduction-to-bayesian-inference-18e55311a261

Motivation
Imagine the following scenario: you are driving an ambulance to a hospital and have to decide between route A and B. In order to save your patient, you need to arrive in less than 15 minutes. If we estimate that route A takes 12 minutes and route B takes 10 minutes, which would you choose? Route B seems faster, so why not?
The information provided so far consisted of point estimates of routes A and B. Now, let’s add information about the uncertainty of each prediction: route A takes 12 min ±1min, while route B takes 10 min ±6min.
Now it seems like the prediction of route B is significantly more uncertain, eventually risking taking longer than the 15 minute limit. Adding information about uncertainty here can make us change our decision from taking route B to taking route A.
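To make that concrete, here is a minimal sketch (my own, assuming the ± figures can be read as standard deviations of a normal distribution) of the tail probabilities involved:

Code:
# Treat each route's travel time as normally distributed and ask how
# likely each one is to miss the 15-minute deadline. Normality is an
# assumption made purely for illustration.
from statistics import NormalDist

routes = {"A": NormalDist(mu=12, sigma=1),   # 12 min +/- 1 min
          "B": NormalDist(mu=10, sigma=6)}   # 10 min +/- 6 min

for name, dist in routes.items():
    p_late = 1 - dist.cdf(15)                # P(travel time > 15 min)
    print(f"Route {name}: P(late) = {p_late:.1%}")
# Route A: ~0.1%; Route B: ~20%. The nominally faster route is the riskier one.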

More broadly, consider the following cases:
- We want to estimate a quantity which does not have a fixed value; instead, it can change between different ones.
- Regardless of whether the true value is fixed or not, we are interested in knowing the uncertainty of our estimation.
The ambulance example was intended to illustrate the second case. For the first case, we can have a quick look at the work of Nobel Prize winning economist Christopher Sims. I will simply cite his student Toshiaki Watanabe:
I once asked Chris why he favoured the Bayesian approach. He replied by pointing to the Lucas critique, which argues that when government and central bank policies change, so do the model parameters, so that they should be regarded not as constants but as stochastic variables.
For both cases, Bayesian inference can be used to model our variables of interest as a whole distribution, instead of a unique value or point estimate.

Judea Pearl describes it this way, in The Book of Why [2]:
(…) Bayes’s rule is formally an elementary consequence of his definition of conditional probability. But epistemologically, it is far from elementary. It acts, in fact, as a normative rule for updating beliefs in response to evidence.
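As a tiny worked example of that updating rule (illustrative numbers only, reusing the route theme from above):

Code:
# Bayes' rule as belief updating, with illustrative numbers: how the
# belief that a route is congested changes after seeing slow traffic.
p_congested = 0.3                  # prior belief
p_slow_if_congested = 0.9          # likelihood of the evidence
p_slow_if_clear = 0.2

p_slow = (p_slow_if_congested * p_congested
          + p_slow_if_clear * (1 - p_congested))
posterior = p_slow_if_congested * p_congested / p_slow
print(f"P(congested | slow traffic) = {posterior:.2f}")   # ~0.66, up from 0.3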
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 24/01/2021 07:45:31
“Our approach is pretty much the exact opposite of the traditional pharmaceutical approach. With our approach, there is no drug, no poison at all – just a little program written in DNA. We’ve effectively taken targeting out of the realm of chemistry and brought it into the realm of information.”
Matthew Scholz, Co-founder & CEO, Oisín Biotechnologies

https://www.longevity.technology/promising-restorative-therapy-could-potentially-be-available-within-5-years/
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 25/01/2021 06:57:51
The Coronavirus Is Mutating. Here’s What We Know | WSJ

Another example of how an accurate virtual universe can help accelerate research through trial and error by saving required resources, especially time.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 27/01/2021 07:17:28
The Promise (And Realities) Of AI / ML
https://energycentral.com/c/iu/promise-and-realities-ai-ml
Quote
Artificial Intelligence has been getting a bad rap of late, with numerous opinion pieces and articles describing how it has struggled to live up to the hype. Arguments have centered around computational cost, lack of high-quality data, and the difficulty in getting past the high nineties in percent accuracy, all resulting in the continued need to have humans in the loop.
Quote
AI & ML are simply tools for building complex (and sometimes non-linear) models that consider large amounts of information. They are most potent in applications where their pattern finding power significantly exceeds human capability. If we adjust our attitude and expectations, we can leverage their power to bring about all sorts of tangible outcomes for humanity.

With this type of re-calibration, our mission should be to use AI to help human decision makers, rather than replace them. Machine learning is now being used to build weather and climate impact models that help infrastructure managers respond with accuracy and allocate their resources efficiently. While these models do not perfectly match the ground truth, they are much more accurate and precise than simple heuristics, and can save millions of dollars through more efficient capital allocation.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 27/01/2021 07:26:05
https://spectrum.ieee.org/computing/software/its-too-easy-to-hide-bias-in-deeplearning-systems
Artificial intelligence makes it hard to tell when decision-making is biased
Quote
When advertisers create a Facebook ad, they target the people they want to view it by selecting from an expansive list of interests. “You can select people who are interested in football, and they live in Cote d’Azur, and they were at this college, and they also like drinking,” Goga says. But the explanations Facebook provides typically mention only one interest, and the most general one at that. Mislove assumes that’s because Facebook doesn’t want to appear creepy; the company declined to comment for this article, so it’s hard to be sure.

Google and Twitter ads include similar explanations. All three platforms are probably hoping to allay users’ suspicions about the mysterious advertising algorithms they use with this gesture toward transparency, while keeping any unsettling practices obscured. Or maybe they genuinely want to give users a modicum of control over the ads they see—the explanation pop-ups offer a chance for users to alter their list of interests. In any case, these features are probably the most widely deployed example of algorithms being used to explain other algorithms. In this case, what’s being revealed is why the algorithm chose a particular ad to show you.

The world around us is increasingly choreographed by such algorithms. They decide what advertisements, news, and movie recommendations you see. They also help to make far more weighty decisions, determining who gets loans, jobs, or parole. And in the not-too-distant future, they may decide what medical treatment you’ll receive or how your car will navigate the streets. People want explanations for those decisions. Transparency allows developers to debug their software, end users to trust it, and regulators to make sure it’s safe and fair.

The problem is that these automated systems are becoming so frighteningly complex that it’s often very difficult to figure out why they make certain decisions. So researchers have developed algorithms for understanding these decision-making automatons, forming the new subfield of explainable AI.
Quote
In 2017, the Defense Advanced Research Projects Agency launched a US $75 million XAI project. Since then, new laws have sprung up requiring such transparency, most notably Europe’s General Data Protection Regulation, which stipulates that when organizations use personal data for “automated decision-making, including profiling,” they must disclose “meaningful information about the logic involved.” One motivation for such rules is a concern that black-box systems may be hiding evidence of illegal, or perhaps just unsavory, discriminatory practices.
Quote
As a result, XAI systems are much in demand. And better policing of decision-making algorithms would certainly be a good thing. But even if explanations are widely required, some researchers worry that systems for automated decision-making may appear to be fair when they really aren’t fair at all.

For example, a system that judges loan applications might tell you that it based its decision on your income and age, when in fact it was your race that mattered most. Such bias might arise because it reflects correlations in the data that was used to train the AI, but it must be excluded from decision-making algorithms lest they act to perpetuate unfair practices of the past.

The challenge is how to root out such unfair forms of discrimination. While it’s easy to exclude information about an applicant’s race or gender or religion, that’s often not enough. Research has shown, for example, that job applicants with names that are common among African Americans receive fewer callbacks, even when they possess the same qualifications as someone else.

A computerized résumé-screening tool might well exhibit the same kind of racial bias, even if applicants were never presented with checkboxes for race. The system may still be racially biased; it just won’t “admit” to how it really works, and will instead provide an explanation that’s more palatable.
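Here is a minimal sketch of how that happens: a model that never sees the protected attribute picks it up anyway through a correlated proxy. The data is synthetic and the loan scenario, feature names, and values are all hypothetical.

Code:
# Synthetic, hypothetical loan data: the model never sees 'race', but a
# correlated proxy (here, a zip-code-like feature) leaks it back in.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
race = rng.integers(0, 2, n)                  # protected attribute, excluded below
zip_code = race + rng.normal(0, 0.3, n)       # proxy strongly correlated with race
income = rng.normal(50, 10, n)                # legitimate feature
# Historical labels encode bias against the race == 1 group.
approved = (income + 10 * (1 - race) + rng.normal(0, 5, n)) > 55

X = np.column_stack([income, zip_code])       # note: race itself is not a feature
model = LogisticRegression(max_iter=1000).fit(X, approved)

pred = model.predict(X)
print("approval rate, group 0:", pred[race == 0].mean())
print("approval rate, group 1:", pred[race == 1].mean())
# The gap persists: zip_code acts as a stand-in for the excluded attribute.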

Regardless of whether the algorithm explicitly uses protected characteristics such as race, explanations can be specifically engineered to hide problematic forms of discrimination. Some AI researchers describe this kind of duplicity as a form of “fairwashing”: presenting a possibly unfair algorithm as being fair.

 Whether deceptive systems of this kind are common or rare is unclear. They could be out there already but well hidden, or maybe the incentive for using them just isn’t great enough. No one really knows. What’s apparent, though, is that the application of more and more sophisticated forms of AI is going to make it increasingly hard to identify such threats.
Quote
No company would want to be perceived as perpetuating antiquated thinking or deep-rooted societal injustices. So a company might hesitate to share exactly how its decision-making algorithm works to avoid being accused of unjust discrimination. Companies might also hesitate to provide explanations for decisions rendered because that information would make it easier for outsiders to reverse engineer their proprietary systems. Cynthia Rudin, a computer scientist at Duke University, in Durham, N.C., who studies interpretable machine learning, says that the “explanations for credit scores are ridiculously unsatisfactory.” She believes that credit-rating agencies obscure their rationales intentionally. “They’re not going to tell you exactly how they compute that thing. That’s their secret sauce, right?”

And there’s another reason to be cagey. Once people have reverse engineered your decision-making system, they can more easily game it. Indeed, a huge industry called “search engine optimization” has been built around doing just that: altering Web pages superficially so that they rise to the top of search rankings.
What I see from the trend is that information technology is converging toward the building of a virtual universe. Competitions to become the first/biggest/best AI system builder for selfish motivations could be redirected into a more collaborative effort by promoting a universal terminal goal.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 28/01/2021 11:52:07
'Liquid' machine-learning system adapts to changing conditions

Quote
MIT researchers have developed a type of neural network that learns on the job, not just during its training phase. These flexible algorithms, dubbed "liquid" networks, change their underlying equations to continuously adapt to new data inputs. The advance could aid decision making based on data streams that change over time, including those involved in medical diagnosis and autonomous driving.   
https://techxplore.com/news/2021-01-liquid-machine-learning-conditions.amp?__twitter_impression=true

Quote
  Hasani designed a neural network that can adapt to the variability of real-world systems. Neural networks are algorithms that recognize patterns by analyzing a set of "training" examples. They're often said to mimic the processing pathways of the brain—Hasani drew inspiration directly from the microscopic nematode, C. elegans. "It only has 302 neurons in its nervous system," he says, "yet it can generate unexpectedly complex dynamics."

Hasani coded his neural network with careful attention to how C. elegans neurons activate and communicate with each other via electrical impulses. In the equations he used to structure his neural network, he allowed the parameters to change over time based on the results of a nested set of differential equations. 

In the future, we will have AI that keeps learning from real-world experience, not just in the training phase. It is getting more humanlike.
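As a rough sketch of the core idea, a unit whose effective time constant changes with the input through a nested differential equation, here is one "liquid time-constant" style step. It loosely follows the published liquid time-constant formulation; the names, shapes, and constants are my own, for illustration only.

Code:
# One Euler step of a liquid time-constant style unit:
#   dh/dt = -(1/tau + f) * h + f * A,  where f depends on input and state,
# so the effective time constant changes with the data stream.
import numpy as np

def ltc_step(h, x, W_h, W_x, b, tau=1.0, A=1.0, dt=0.05):
    f = np.tanh(W_h @ h + W_x @ x + b)    # input- and state-dependent gate
    dh = -(1.0 / tau + f) * h + f * A     # 'liquid' dynamics
    return h + dt * dh

rng = np.random.default_rng(0)
h = np.zeros(4)
W_h = rng.normal(size=(4, 4)) * 0.3
W_x = rng.normal(size=(4, 2)) * 0.3
b = np.zeros(4)
for t in range(100):                      # feed a toy input stream
    h = ltc_step(h, np.array([np.sin(0.1 * t), 1.0]), W_h, W_x, b)
print(h)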
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 29/01/2021 12:55:51
Quote
   Risky behaviors such as smoking, alcohol and drug use, speeding, or frequently changing sexual partners result in enormous health and economic consequences and lead to associated costs of an estimated 600 billion dollars a year in the US alone. In order to define measures that could reduce these costs, a better understanding of the basis and mechanisms of risk-taking is needed.
Quote
Specific characteristics were found in several areas of the brain: In the hypothalamus, where the release of hormones (such as orexin, oxytocin and dopamine) controls the vegetative functions of the body; in the hippocampus, which is essential for storing memories; in the dorsolateral prefrontal cortex, which plays an important role in self-control and cognitive deliberation; in the amygdala, which controls, among other things, the emotional reaction to danger; and in the ventral striatum, which is activated when processing rewards.   
Quote
  The researchers were surprised by the measurable anatomical differences they discovered in the cerebellum, an area that is not usually included in studies of risk behaviors on the assumption that it is mainly involved in fine motor functions. In recent years, however, significant doubts have been raised about this hypothesis – doubts which are now backed by the current study. 
Quote
  “It appears that the cerebellum does after all play an important role in decision-making processes such as risk-taking behavior,” confirms Aydogan. “In the brains of more risk-tolerant individuals, we found less gray matter in these areas. How this gray matter affects behavior, however, still needs to be studied further.” 
https://neurosciencenews.com/brain-risky-behavior-17633/

Risk-taking is an important factor in decision-making, one we need to understand deeply so that it can be simulated in a virtual universe.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 30/01/2021 15:28:03
Someone has come to a similar conclusion as I posted here. This GME 'riot' should be a wake-up call.
Quote
  Joscha Bach (@Plinz) tweeted at 11:20 AM on Fri, Jan 29, 2021:
In the long run, machine learning and a publicly accessible stock market cannot coexist
(https://twitter.com/Plinz/status/1355007909281718274?s=03) 

Quote
  Joscha Bach (@Plinz) tweeted at 7:54 PM on Fri, Jan 29, 2021:
The financial system is software executed by humans, full of holes and imperfections, and very hard to update and maintain. Using substantial computational resources to discover and exploit its imperfections will eventually nuke it into oblivion
(https://twitter.com/Plinz/status/1355137134789681158?s=03) 
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 30/01/2021 23:37:13
A financial system should be a tool to redistribute resources so as to optimally achieve the common goals of society. It's akin to the circulatory system in multicellular organisms.
While the current financial system enables innovators to thrive by convincing people to contribute to their inventions, and to profit from them, it also enables other financial actors to gamble with someone else's money. They take the profits when they win, but get away or get bailed out when they lose.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 30/01/2021 23:44:00
A free market is supposed to be a self-organizing system. But if some parts of the system aggregate and accumulate enough power to manipulate or bypass its self-regulatory functions, they can accumulate more resources for themselves while depriving and sacrificing others, causing the entire structure to collapse. It's akin to the behavior of cancerous cells.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 31/01/2021 03:28:17
https://www.engadget.com/autox-fully-driverless-robotaxi-china-145126521.html
Quote
  Driverless robotaxis are now available for public rides in China
AutoX is the first in the country to offer rides without safety drivers. 
Quote
  After lots of tests, it’s now possible to hail a truly driverless robotaxi in China. AutoX has become the first in the country to offer public rides in autonomous vehicles without safety drivers. You’ll need to sign up for a pilot program in Shenzhen and use membership credits, but after that you can hop in a modified Chrysler Pacifica to travel across town without seeing another human being. 

Quote
Fully driverless robotaxis are still very rare anywhere in the world, and it’ll take a combination of refined technology and updated regulation before they’re relatively commonplace. This is an important step in that direction, though. They might also get a boost in the current climate: the COVID-19 pandemic has added risk to conventional ride hailing for both drivers and passengers, and removing drivers could make this one of the safest travel options for people without cars of their own.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 31/01/2021 05:56:09
https://bdtechtalks.com/2020/09/21/gpt-3-economy-business-model/

Quote
In the blog post where it announced the GPT-3 API, OpenAI stated three key reasons for not open-sourcing the deep learning model. The first was, obviously, to cover the costs of their ongoing research. Second, but equally important, running GPT-3 requires vast compute resources that many companies don’t have. Third (which I won’t get into in this post) is to prevent misuse and harmful applications.

Based on this information, we know that to make GPT-3 profitable, OpenAI will need to break even on the costs of research and development, and also find a business model that turns in profits on the expenses of running the model. 

Quote
  In general, machine learning algorithms can perform a single, narrowly defined task. This is especially true for natural language processing, which is much more complicated than other fields of artificial intelligence. To repurpose a machine learning model for a new task, you must retrain it from scratch or fine-tune it with new examples, a process known as transfer learning.

But contrary to other machine learning models, GPT-3 is capable of zero-shot learning, which means it can perform many new tasks without the need for new training. For many other tasks, it can perform one-shot learning: Give it one example and it will be able to expand to other similar tasks. Theoretically, this makes it ideal as a general-purpose AI technology that can support many new applications.
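As an illustration of what zero-shot versus one-shot use looks like in practice, here is a sketch with plain prompt strings. No particular API is assumed; the send_to_model function is a hypothetical stub.

Code:
# Illustrative only: the difference between zero-shot and one-shot
# prompts for a text-completion model.
zero_shot = ("Translate English to French:\n"
             "cheese =>")

one_shot = ("Translate English to French:\n"
            "sea otter => loutre de mer\n"    # the single worked example
            "cheese =>")

def send_to_model(prompt: str) -> str:
    raise NotImplementedError("plug in your completion endpoint here")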

A significant portion of their research budget goes to the stellar salaries OpenAI has to pay the highly coveted AI talent it has hired for the task. I wonder how long it will take for an AGI to surpass the capability of its own creators, so that human AI talent is no longer needed. It looks like they are facing a dilemma: if they don't do it, their competitors are ready to surpass them, which would make their past and current efforts meaningless.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 03/02/2021 11:21:29
https://venturebeat.com/2021/01/28/ai-holds-the-key-to-even-better-ai/
Quote
  For all the talk about how artificial intelligence technology is transforming entire industries, the reality is that most businesses struggle to obtain real value from AI. 65% of organizations that have invested in AI in recent years haven’t yet seen any tangible gains from those investments, according to a 2019 survey conducted by MIT Sloan Management Review and the Boston Consulting Group. And a quarter of businesses implementing AI projects see at least 50% of those projects fail, with “lack of skilled staff” and “unrealistic expectations” among the top reasons for failure, per research from IDC. 
Quote
  Encouragingly, AI is already being leveraged to simplify other tech-related tasks, like writing and reviewing code (which itself is built by AI). The next phase of the deep learning revolution will involve similar complementary tools. Over the next five years, expect to see such capabilities slowly become available commercially to the public. 
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 04/02/2021 06:35:31
https://www.linkedin.com/pulse/fake-news-rampant-here-how-artificial-intelligence-ai-bernard-marr/
Quote
One of the latest collaborations between artificial intelligence and humans is further evidence of how machines and humans can create better results when working together. Artificial intelligence (AI) is now on the job to combat the spread of misinformation on the internet and social platforms thanks to the efforts of start-ups such as Logically. While AI is able to analyze the enormous amounts of info generated daily on a scale that's impossible for humans, ultimately, humans need to be part of the process of fact-checking to ensure credibility. As Lyric Jain, founder and CEO of Logically, said, toxic news travels faster than the truth. Our world desperately needs a way to discern truth from fiction in our news and public, political and economic discussions, and artificial intelligence will help us do that.
Quote
The Fake News “Infodemic”

People are inundated with info every single day. Each minute, there are 98,000 tweets, 160 million emails sent, and 600 videos uploaded to YouTube. Politicians. Marketers. News outlets. Plus, there are countless individuals spewing their opinions since self-publishing is so easy. People crave a way to sort through all the information to find valuable nuggets they can use in their own life. They want facts, and companies are starting to respond often by using machine learning and AI tools.
Quote
As the pursuit of fighting fake news becomes more sophisticated, technology leaders will continue to work to find even better ways to sort out fact from fiction, as well as refine the AI tools that can help fight disinformation. Deep learning can help automate some of the steps in fake news detection, according to a team of researchers at DarwinAI and Canada's University of Waterloo. They are segmenting fact-checking into various sub-tasks, including stance detection where the system is given a claim on a news story plus other stories on the same subject to determine if those other stories support or refute the claim in the original piece.
As long as we believe that there's an objective reality, we will need reliable information sources which reflect it accurately, or at least are consistent with each other. This trend seems to keep getting us closer to a virtual universe.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 05/02/2021 06:40:03
This is why we need the ability to distinguish between objective reality and alternative realities.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 09/02/2021 12:33:13
To simulate the universe, it is necessary to simulate consciousness as well, and we need to understand it first.

A new theory of brain organization takes aim at the mystery of consciousness

https://neurosciencenews.com/brain-organization-consciousness-15132/
Quote
Consciousness is one of the brain’s most enigmatic mysteries. A new theory, inspired by thermodynamics, takes a high-level perspective of how neural networks in the brain transiently organize to give rise to memories, thought and consciousness.

The key to awareness is the ebb and flow of energy: when neurons functionally tag together to support information processing, their activity patterns synchronize like ocean waves. This process is inherently guided by thermodynamic principles, which, like an invisible hand, promote neural connections that favor conscious awareness. Disruptions in this process break down communication between neural networks, giving rise to neurological disorders such as epilepsy, autism or schizophrenia.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 11/02/2021 10:59:28
https://www.quantamagazine.org/brains-background-noise-may-hold-clues-to-persistent-mysteries-20210208/

Quote

Brain’s ‘Background Noise’ May Hold Clues to Persistent Mysteries
By Elizabeth Landau, February 8, 2021

By digging out signals hidden within the brain’s electrical chatter, scientists are getting new insights into sleep, aging and more.

[Illustration: a human brain against “pink noise” static. Credit: Olena Shmahalo/Quanta Magazine; noise generated by Thomas Donoghue]
At a sleep research symposium in January 2020, Janna Lendner presented findings that hint at a way to look at people’s brain activity for signs of the boundary between wakefulness and unconsciousness. For patients who are comatose or under anesthesia, it can be all-important that physicians make that distinction correctly. Doing so is trickier than it might sound, however, because when someone is in the dreaming state of rapid-eye movement (REM) sleep, their brain produces the same familiar, smoothly oscillating brain waves as when they are awake.

Lendner argued, though, that the answer isn’t in the regular brain waves, but rather in an aspect of neural activity that scientists might normally ignore: the erratic background noise.

Some researchers seemed incredulous. “They said, ‘So, you’re telling me that there’s, like, information in the noise?’” said Lendner, an anesthesiology resident at the University Medical Center in Tübingen, Germany, who recently completed a postdoc at the University of California, Berkeley. “I said, ‘Yes. Someone’s noise is another one’s signal.’”
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 15/02/2021 12:53:10
Mind Reading For Brain-To-Text Communication!
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 25/02/2021 05:08:57
Artificial Neural Nets Finally Yield Clues to How Brains Learn
https://www.quantamagazine.org/artificial-neural-nets-finally-yield-clues-to-how-brains-learn-20210218/
Quote
The learning algorithm that enables the runaway success of deep neural networks doesn’t work in biological brains, but researchers are finding alternatives that could.

Quote
Today, deep nets rule AI in part because of an algorithm called backpropagation, or backprop. The algorithm enables deep nets to learn from data, endowing them with the ability to classify images, recognize speech, translate languages, make sense of road conditions for self-driving cars, and accomplish a host of other tasks.

But real brains are highly unlikely to be relying on the same algorithm. It’s not just that “brains are able to generalize and learn better and faster than the state-of-the-art AI systems,” said Yoshua Bengio, a computer scientist at the University of Montreal, the scientific director of the Quebec Artificial Intelligence Institute and one of the organizers of the 2007 workshop. For a variety of reasons, backpropagation isn’t compatible with the brain’s anatomy and physiology, particularly in the cortex.
(https://d2r55xnwy6nx47.cloudfront.net/uploads/2021/02/Simulating-a-Neuron.svg)
Quote
However, it was obvious even in the 1960s that solving more complicated problems required one or more “hidden” layers of neurons sandwiched between the input and output layers. No one knew how to effectively train artificial neural networks with hidden layers — until 1986, when Hinton, the late David Rumelhart and Ronald Williams (now of Northeastern University) published the backpropagation algorithm.

The algorithm works in two phases. In the “forward” phase, when the network is given an input, it infers an output, which may be erroneous. The second “backward” phase updates the synaptic weights, bringing the output more in line with a target value.
To understand this process, think of a “loss function” that describes the difference between the inferred and desired outputs as a landscape of hills and valleys. When a network makes an inference with a given set of synaptic weights, it ends up at some location on the loss landscape. To learn, it needs to move down the slope, or gradient, toward some valley, where the loss is minimized to the extent possible. Backpropagation is a method for updating the synaptic weights to descend that gradient.

In essence, the algorithm’s backward phase calculates how much each neuron’s synaptic weights contribute to the error and then updates those weights to improve the network’s performance. This calculation proceeds sequentially backward from the output layer to the input layer, hence the name backpropagation. Do this over and over for sets of inputs and desired outputs, and you’ll eventually arrive at an acceptable set of weights for the entire neural network.
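As a minimal sketch of those two phases, here is a toy two-layer network trained with backpropagation in plain numpy (my own illustration, not code from the article):

Code:
# Toy two-layer network trained with backpropagation.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))                           # inputs
y = (X.sum(axis=1, keepdims=True) > 0).astype(float)    # toy targets

W1 = rng.normal(size=(3, 8))
W2 = rng.normal(size=(8, 1))
lr = 0.1

for step in range(1000):
    # Forward phase: infer an output from the current weights.
    h = np.tanh(X @ W1)
    p = 1 / (1 + np.exp(-(h @ W2)))          # sigmoid output
    loss = np.mean((p - y) ** 2)

    # Backward phase: pass the error back, layer by layer.
    dp = 2 * (p - y) / len(X)
    dz2 = dp * p * (1 - p)                   # back through the sigmoid
    dW2 = h.T @ dz2
    dh = dz2 @ W2.T                          # error reaching the hidden layer
    dW1 = X.T @ (dh * (1 - h ** 2))          # back through the tanh

    W1 -= lr * dW1                           # descend the loss gradient
    W2 -= lr * dW2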
Quote
Impossible for the Brain
The invention of backpropagation immediately elicited an outcry from some neuroscientists, who said it could never work in real brains. The most notable naysayer was Francis Crick, the Nobel Prize-winning co-discoverer of the structure of DNA who later became a neuroscientist. In 1989 Crick wrote, “As far as the learning process is concerned, it is unlikely that the brain actually uses back propagation.”

Backprop is considered biologically implausible for several major reasons. The first is that while computers can easily implement the algorithm in two phases, doing so for biological neural networks is not trivial. The second is what computational neuroscientists call the weight transport problem: The backprop algorithm copies or “transports” information about all the synaptic weights involved in an inference and updates those weights for more accuracy. But in a biological network, neurons see only the outputs of other neurons, not the synaptic weights or internal processes that shape that output. From a neuron’s point of view, “it’s OK to know your own synaptic weights,” said Yamins. “What’s not okay is for you to know some other neuron’s set of synaptic weights.”

(https://d2r55xnwy6nx47.cloudfront.net/uploads/2021/02/Backpropagation.svg)
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 25/02/2021 05:28:05
Artificial Neural Nets Finally Yield Clues to How Brains Learn
https://www.quantamagazine.org/artificial-neural-nets-finally-yield-clues-to-how-brains-learn-20210218/
Quote
Predicting Perceptions
The constraint that neurons can learn only by reacting to their local environment also finds expression in new theories of how the brain perceives. Beren Millidge, a doctoral student at the University of Edinburgh and a visiting fellow at the University of Sussex, and his colleagues have been reconciling this new view of perception — called predictive coding — with the requirements of backpropagation. “Predictive coding, if it’s set up in a certain way, will give you a biologically plausible learning rule,” said Millidge.

Predictive coding posits that the brain is constantly making predictions about the causes of sensory inputs. The process involves hierarchical layers of neural processing. To produce a certain output, each layer has to predict the neural activity of the layer below. If the highest layer expects to see a face, it predicts the activity of the layer below that can justify this perception. The layer below makes similar predictions about what to expect from the one beneath it, and so on. The lowest layer makes predictions about actual sensory input — say, the photons falling on the retina. In this way, predictions flow from the higher layers down to the lower layers.
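A minimal sketch of that top-down prediction loop, assuming a single linear layer and my own illustrative dynamics rather than Millidge's actual model:

Code:
# Two-layer predictive coding, linear case: the top state is adjusted
# until its top-down prediction explains the input, using only the
# local prediction error.
import numpy as np

rng = np.random.default_rng(1)
W = rng.normal(size=(4, 8)) * 0.1   # top layer (4 units) predicts bottom (8)
x = rng.normal(size=8)              # sensory input at the bottom
top = np.zeros(4)                   # higher-layer state

for _ in range(200):
    pred = W.T @ top                # top-down prediction of the input
    err = x - pred                  # local prediction error
    top += 0.1 * (W @ err)          # relax the state to reduce the error
# After relaxation, W.T @ top approximates x (as well as W allows).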
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 25/02/2021 09:05:16
Quote
Pyramidal Neurons
Some scientists have taken on the nitty-gritty task of building backprop-like models based on the known properties of individual neurons. Standard neurons have dendrites that collect information from the axons of other neurons. The dendrites transmit signals to the neuron’s cell body, where the signals are integrated. That may or may not result in a spike, or action potential, going out on the neuron’s axon to the dendrites of post-synaptic neurons.

But not all neurons have exactly this structure. In particular, pyramidal neurons — the most abundant type of neuron in the cortex — are distinctly different. Pyramidal neurons have a treelike structure with two distinct sets of dendrites. The trunk reaches up and branches into what are called apical dendrites. The root reaches down and branches into basal dendrites.
(https://d2r55xnwy6nx47.cloudfront.net/uploads/2021/02/Neurons.svg)
Quote
Models developed independently by Kording in 2001, and more recently by Blake Richards of McGill University and the Quebec Artificial Intelligence Institute and his colleagues, have shown that pyramidal neurons could form the basic units of a deep learning network by doing both forward and backward computations simultaneously. The key is in the separation of the signals entering the neuron for forward-going inference and for backward-flowing errors, which could be handled in the model by the basal and apical dendrites, respectively. Information for both signals can be encoded in the spikes of electrical activity that the neuron sends down its axon as an output.

In the latest work from Richards’ team, “we’ve gotten to the point where we can show that, using fairly realistic simulations of neurons, you can train networks of pyramidal neurons to do various tasks,” said Richards. “And then using slightly more abstract versions of these models, we can get networks of pyramidal neurons to learn the sort of difficult tasks that people do in machine learning.”
There is so much information densely packed into a single article. I found it hard to compress it any further.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 25/02/2021 09:18:48
Quote
The Role of Attention
An implicit requirement for a deep net that uses backprop is the presence of a “teacher”: something that can calculate the error made by a network of neurons. But “there is no teacher in the brain that tells every neuron in the motor cortex, ‘You should be switched on and you should be switched off,’” said Pieter Roelfsema of the Netherlands Institute for Neuroscience in Amsterdam.
Quote
Roelfsema thinks the brain’s solution to the problem is in the process of attention. In the late 1990s, he and his colleagues showed that when monkeys fix their gaze on an object, neurons that represent that object in the cortex become more active. The monkey’s act of focusing its attention produces a feedback signal for the responsible neurons. “It is a highly selective feedback signal,” said Roelfsema. “It’s not an error signal. It is just saying to all those neurons: You’re going to be held responsible [for an action].”

Roelfsema’s insight was that this feedback signal could enable backprop-like learning when combined with processes revealed in certain other neuroscientific findings. For example, Wolfram Schultz of the University of Cambridge and others have shown that when animals perform an action that yields better results than expected, the brain’s dopamine system is activated. “It floods the whole brain with neural modulators,” said Roelfsema. The dopamine levels act like a global reinforcement signal.

In theory, the attentional feedback signal could prime only those neurons responsible for an action to respond to the global reinforcement signal by updating their synaptic weights, said Roelfsema. He and his colleagues have used this idea to build a deep neural network and study its mathematical properties. “It turns out you get error backpropagation. You get basically the same equation,” he said. “But now it became biologically plausible.”
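A rough sketch of that combination (my own illustration of the three-factor idea, not Roelfsema's actual model): a synapse changes only where presynaptic activity, the attentional responsibility tag, and the global dopamine-like reward signal coincide.

Code:
# Three-factor update: local activity x attentional tag x global reward.
# All names and values are illustrative.
import numpy as np

def three_factor_update(w, pre, attention_tag, global_reward, lr=0.01):
    """Only synapses that were active AND attended change, scaled by the
    brain-wide dopamine-like reinforcement signal."""
    return w + lr * global_reward * attention_tag * pre

w = np.zeros(5)
pre = np.array([1.0, 0.5, 0.0, 1.0, 0.2])    # presynaptic activity
tag = np.array([1.0, 0.0, 0.0, 1.0, 0.0])    # attention: 'you are responsible'
w = three_factor_update(w, pre, tag, global_reward=1.0)
print(w)   # only the attended, active synapses moved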
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 02/03/2021 08:05:07
Imagine how much you could gain just from the stock market if you had clear insight into what will happen in the future.
This video is from 2010.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 04/03/2021 11:49:00
Taming Transformers for High-Resolution Image Synthesis

It seems like we are getting better at building information processors comparable to human brains.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 04/03/2021 12:25:39
In the not-so-distant future, most information available online will be generated by AI.

That prediction will force us to build a virtual universe which is intended to accurately represent objective reality. Otherwise, there will be no way to distinguish fact from fiction, especially for things which are not already widely known.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 07/03/2021 07:31:14
Has Google Search changed much since 1998?
This video shows how Google has evolved, getting closer to building a virtual universe.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 12/03/2021 08:58:40
Tesla's Autopilot, Full Self Driving, Neural Networks & Dojo
Quote
In this video I react to a discussion from the Lex Fridman podcast with legendary chip designer Jim Keller (ex-Tesla) sharing their thoughts on computer vision, neural networks, Tesla's autopilot and full self driving software (and hardware), autonomous vehicles, deep learning and Tesla Dojo (Tesla's dojo is a training system).
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 15/03/2021 12:08:20
More reason to replace the lawmakers with AI.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 16/03/2021 02:44:55
The Most Advanced Digital Government in the World
Quote
A small European country is leading the world in establishing an “e-government” for its citizens.

Estonia's fully online, e-government system has been revolutionary for the country's citizens, making tasks like voting, filing taxes, and renewing a driver’s license quick and convenient.

In operation since 2001, “e-Estonia” is now a well-oiled, digital machine. Estonia was the first country to hold a nationwide election online, and ministers dictate decisions via an e-Cabinet.

Estonia was also the first country to declare internet access a human right. 99% of public services are available digitally 24/7, excluding only marriage, divorce, and real-estate transactions.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 18/03/2021 22:43:41
https://www.nextplatform.com/2021/03/11/its-time-to-start-paying-attention-to-vector-databases/amp/

Quote
The concepts underpinning vector databases are decades old, but it is only relatively recently that they have become the underlying “secret weapon” of the largest webscale companies that provide services like search and near real-time recommendations.

Like all good clandestine competitive tools, the vector databases that support these large companies are all purpose-built in-house, optimized for the types of similarity search operations native to their business (content, physical products, etc.).

These custom-tailored vector databases are the “unsung hero of big machine learning,” says Edo Liberty, who built tools like this at Yahoo Research during its scalable machine learning platform journey. He carried some of this over to AWS, where he ran Amazon AI labs and helped cobble together standards like AWS Sagemaker, all the while learning how vector databases could integrate with other platforms and connect with the cloud.

“Vector databases are a core piece of infrastructure that fuels every big machine learning deployment in industry. There was never a way to do this directly, everyone just had to build their own in-house,” he tells The Next Platform. The funny thing is, he was working on high dimensional geometry during his PhD days; the AI/ML renaissance just happened to perfectly intersect with exactly that type of work.

“In ML, suddenly everything was being represented as these high-dimensional vectors, that quickly became a huge source of data, so if you want to search, rank or give recommendations, the object in your actual database wasn’t a document or an image—it was this mathematical representation of the machine learning model.” In short, this quickly became important for a lot of companies.
I think the virtual universe would be built on a vector database foundation at its core. This assessment is based on my experience in some system migration projects, which pushed me to reverse engineer a system database to build a tool that accelerated the process by automating some tasks.
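For a feel of the core operation such databases serve, here is a minimal brute-force sketch; real systems layer approximate indexes (e.g. HNSW, IVF) on top of this idea to scale.

Code:
# Brute-force nearest-neighbor search by cosine similarity.
import numpy as np

def top_k(query, vectors, k=5):
    """Indices of the k stored vectors most similar to the query."""
    q = query / np.linalg.norm(query)
    V = vectors / np.linalg.norm(vectors, axis=1, keepdims=True)
    scores = V @ q                       # cosine similarity to every vector
    return np.argsort(-scores)[:k]

db = np.random.default_rng(0).normal(size=(10_000, 128))   # fake embeddings
print(top_k(db[42], db, k=3))            # item 42 itself comes back first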
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 19/03/2021 09:58:17
Quote
The Senate filibuster is one of the biggest things standing in the way of anti-voter suppression laws, raising the minimum wage and immigration reform. What is this loophole, and how does it affect governing today?
The lawmaking process obviously needs to become more efficient.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 19/03/2021 21:36:28
The ISO standard basically says that you've got to document everything and track it: write what you do, do what you write.
What you write is a virtual version of what you do. In the past it lived on paper; now it lives in computer data storage.
This virtual version of the real world is supposed to be easier to process, aggregate, simulate, and extract from, to produce the information required in the decision-making process. To be useful, it must have adequate accuracy and precision.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 23/03/2021 02:58:52
https://www.wired.co.uk/article/marcus-du-sautoy-maths-proofs
Quote
Maths nerds, get ready: an AI is about to write its own proofs
We'll see the first truly creative proof of a mathematical theorem written by an artificial intelligence – and soon

It might come as a surprise to some people that this prediction hasn’t already come to pass. Given that mathematics is a subject of logic and precision, it would seem to be perfect territory for a computer.

However, in 2021, we will see the first truly creative proof of a mathematical theorem by an artificial intelligence (AI). As a mathematician, this fills me with excitement and anxiety in equal measure. Excitement for the new insights that AI might give the mathematical community; anxiety that we human mathematicians might soon become obsolete. But part of this belief is based on a misconception about what a mathematician does.

More recently, techniques of machine learning have been used to gain an understanding from a database of successful proofs to generate more proofs. But although the proofs are new, they do not pass the test of exciting the mathematical mind. It’s the same for powerful algorithms, which can generate convincing short-form text, but are a long way from writing a novel.

But in 2021 I think we will see – or at least be close to – an algorithm with the ability to write its first mathematical story. Storytelling through the written word is based on millions of years of human evolution, and it takes a human many years to reach the maturity to write a novel. But mathematics is a much younger evolutionary development. A person immersed in the mathematical world can reach maturity quite quickly, which is why one sees mathematical breakthroughs made by young minds.


This is why I think that it won’t take long for an AI to understand the quality of the proofs we love and celebrate, before it too will be writing proofs. Perhaps, given its internal architecture, these may be mathematical theorems about networks – a subject that deserves its place on the shelves of the mathematical libraries we humans have been filling for centuries.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 24/03/2021 09:13:53
Quote
What is love and what defines art? Humans have theorized, debated, and argued over these questions for centuries. As researchers become closer and closer to boiling these concepts down to a science, A.I. projects become closer to becoming alternatives for romantic companions and artists in their own right.

The Age of A.I. is an 8-part documentary series hosted by Robert Downey Jr. covering the ways Artificial Intelligence, Machine Learning and Neural Networks will change the world.

0:00 Introduction
0:50 The Model Companion
11:02 Can A.I. Make Real Art?
23:05 The Autonomous Supercar
36:41 The Hard Problem
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 26/03/2021 06:01:22
5 Crazy Simulations That Were Previously Impossible
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 29/03/2021 03:20:22
https://scitechdaily.com/300-covid-19-machine-learning-models-have-been-developed-none-is-suitable-for-detecting-or-diagnosing/
Quote
Machine learning is a promising and potentially powerful technique for detection and prognosis of disease. Machine learning methods, including where imaging and other data streams are combined with large electronic health databases, could enable a personalized approach to medicine through improved diagnosis and prediction of individual responses to therapies.

“However, any machine learning algorithm is only as good as the data it’s trained on,” said first author Dr. Michael Roberts from Cambridge’s Department of Applied Mathematics and Theoretical Physics. “Especially for a brand-new disease like COVID-19, it’s vital that the training data is as diverse as possible because, as we’ve seen throughout this pandemic, there are many different factors that affect what the disease looks like and how it behaves.”

“The international machine learning community went to enormous efforts to tackle the COVID-19 pandemic using machine learning,” said joint senior author Dr James Rudd, from Cambridge’s Department of Medicine. “These early studies show promise, but they suffer from a high prevalence of deficiencies in methodology and reporting, with none of the literature we reviewed reaching the threshold of robustness and reproducibility essential to support use in clinical practice.”

Many of the studies were hampered by issues with poor quality data, poor application of machine learning methodology, poor reproducibility, and biases in study design. For example, several training datasets used images from children for their ‘non-COVID-19’ data and images from adults for their COVID-19 data. “However, since children are far less likely to get COVID-19 than adults, all the machine learning model could usefully do was to tell the difference between children and adults, since including images from children made the model highly biased,” said Roberts.

Many of the machine learning models were trained on sample datasets that were too small to be effective. “In the early days of the pandemic, there was such a hunger for information, and some publications were no doubt rushed,” said Rudd. “But if you’re basing your model on data from a single hospital, it might not work on data from a hospital in the next town over: the data needs to be diverse and ideally international, or else you’re setting your machine learning model up to fail when it’s tested more widely.”

In many cases, the studies did not specify where their data had come from, or the models were trained and tested on the same data, or they were based on publicly available ‘Frankenstein datasets’ that had evolved and merged over time, making it impossible to reproduce the initial results.
Title: Re: How close are we from building a virtual universe?
Post by: Michael Sally on 29/03/2021 03:37:49
Quote from: hamdani yusuf on 21/09/2019 09:50:36

I read a paper recently that describes the core of a photon as (x0,y0,z0), which I thought was quite impressive in regards to accuracy and precision.

In regards to a virtual universe, I consider that to be the smallest element of possible information: a tuple.

I would then consider that any other elements of informational dimensions would be n-tuples (xn,yn,zn).

My reasoning for this is that any amount of information greater than the (x0,y0,z0) element is expansive information.

(x1,y1,z1......n)

In simple terms, a point of information reads true (absolute answers), whereas expansions of information read false (speculative).

For example, c reads false; c is based on our measurement system. In simultaneity, a duration of 1 s is arguable.
Title: Re: How close are we from building a virtual universe?
Post by: Kryptid on 29/03/2021 05:54:14
In simple terms, a point of information reads true (absolute answers), whereas expansions of information read false (speculative).

According to what source?
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 29/03/2021 06:40:28
Where Did Bitcoin Come From? – The True Story
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 29/03/2021 10:38:02
What a digital government looks like
Quote
What if you never had to fill out paperwork again? In Estonia, this is a reality: citizens conduct nearly all public services online, from starting a business to voting from their laptops, thanks to the nation's ambitious post-Soviet digital transformation known as "e-Estonia." One of the program's experts, Anna Piperal, explains the key design principles that power the country's "e-government" -- and shows why the rest of the world should follow suit to eradicate outdated bureaucracy and regain citizens' trust.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 29/03/2021 10:55:29
MIT 6.S191: Evidential Deep Learning and Uncertainty

MIT Introduction to Deep Learning 6.S191: Lecture 7
Evidential Deep Learning and Uncertainty Estimation
Lecturer: Alexander Amini
January 2021

For all lectures, slides, and lab materials: http://introtodeeplearning.com

Lecture Outline
0:00 - Introduction and motivation
5:00 - Outline for lecture
5:50 - Probabilistic learning
8:33 - Discrete vs continuous target learning
14:12 - Likelihood vs confidence
17:40 - Types of uncertainty
21:15 - Aleatoric vs epistemic uncertainty
22:35 - Bayesian neural networks
28:55 - Beyond sampling for uncertainty
31:40 - Evidential deep learning
33:29 - Evidential learning for regression and classification
42:05 - Evidential model and training
45:06 - Applications of evidential learning
46:25 - Comparison of uncertainty estimation approaches
47:47 - Conclusion
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 30/03/2021 12:04:30
Objective reality contains a lot of objects with complex relationships among them. Hence, to build a virtual universe, we must use a method capable of storing data that represents this complex system. The obvious choice is graphs: mathematical structures used to model pairwise relations between objects. A graph in this context is made up of vertices (also called nodes or points) which are connected by edges (also called links or lines).

Graph Databases Will Change Your Freakin' Life (Best Intro Into Graph Databases)

Quote
## WTF is a graph database
- Euler and Graph Theory
- Math -- it's hard, let's skip it
- It's about data -- lots of it
- But let's zoom in and look at the basics
## Relational model vs graph model
- How do we represent THINGS in DBs
- Relational vs Graph
- Nodes and Relationships
## Why use a graph over a relational DB or other NoSQL?
- Very simple compared to RDBMS, and much more flexible
- The real power is in relationship-focused data (most NoSQL dbs don't treat relationships as first-order)
- As related-ness and amount of data increases, so does advantage of Graph DBs
- Much closer to our whiteboard model

EVENT: Nodevember 2016

SPEAKER: Ed Finkler
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 31/03/2021 13:10:09
https://scitechdaily.com/explainable-artificial-intelligence-for-decoding-regulatory-instructions-in-dna/
Quote
Opening the black box to uncover the rules of the genome’s regulatory code.
Researchers at the Stowers Institute for Medical Research, in collaboration with colleagues at Stanford University and Technical University of Munich, have developed advanced explainable artificial intelligence (AI) in a technical tour de force to decipher regulatory instructions encoded in DNA. In a report published online on February 18, 2021, in Nature Genetics, the team found that a neural network trained on high-resolution maps of protein-DNA interactions can uncover subtle DNA sequence patterns throughout the genome and provide a deeper understanding of how these sequences are organized to regulate genes.

Neural networks are powerful AI models that can learn complex patterns from diverse types of data such as images, speech signals, or text to predict associated properties with impressive high accuracy. However, many see these models as uninterpretable since the learned predictive patterns are hard to extract from the model. This black-box nature has hindered the wide application of neural networks to biology, where interpretation of predictive patterns is paramount.

One of the big unsolved problems in biology is the genome’s second code—its regulatory code. DNA bases (commonly represented by letters A, C, G, and T) encode not only the instructions for how to build proteins, but also when and where to make these proteins in an organism. The regulatory code is read by proteins called transcription factors that bind to short stretches of DNA called motifs. However, how particular combinations and arrangements of motifs specify regulatory activity is an extremely complex problem that has been hard to pin down.

Now, an interdisciplinary team of biologists and computational researchers led by Stowers Investigator Julia Zeitlinger, PhD, and Anshul Kundaje, PhD, from Stanford University, have designed a neural network—named BPNet for Base Pair Network—that can be interpreted to reveal regulatory code by predicting transcription factor binding from DNA sequences with unprecedented accuracy. The key was to perform transcription factor-DNA binding experiments and computational modeling at the highest possible resolution, down to the level of individual DNA bases. This increased resolution allowed them to develop new interpretation tools to extract the key elemental sequence patterns such as transcription factor binding motifs and the combinatorial rules by which motifs function together as a regulatory code.

Quote
“More traditional bioinformatics approaches model data using pre-defined rigid rules that are based on existing knowledge. However, biology is extremely rich and complicated,” says Avsec. “By using neural networks, we can train much more flexible and nuanced models that learn complex patterns from scratch without previous knowledge, thereby allowing novel discoveries.”

BPNet’s network architecture is similar to that of neural networks used for facial recognition in images. For instance, the neural network first detects edges in the pixels, then learns how edges form facial elements like the eye, nose, or mouth, and finally detects how facial elements together form a face. Instead of learning from pixels, BPNet learns from the raw DNA sequence and learns to detect sequence motifs and eventually the higher-order rules by which the elements predict the base-resolution binding data.

Once the model is trained to be highly accurate, the learned patterns are extracted with interpretation tools. The output signal is traced back to the input sequences to reveal sequence motifs. The final step is to use the model as an oracle and systematically query it with specific DNA sequence designs, similar to what one would do to test hypotheses experimentally, to reveal the rules by which sequence motifs function in a combinatorial manner.

“The beauty is that the model can predict way more sequence designs than we could test experimentally,” Zeitlinger says. “Furthermore, by predicting the outcome of experimental perturbations, we can identify the experiments that are most informative to validate the model.” Indeed, with the help of CRISPR gene editing techniques, the researchers confirmed experimentally that the model’s predictions were highly accurate.

Since the approach is flexible and applicable to a variety of different data types and cell types, it promises to lead to a rapidly growing understanding of the regulatory code and how genetic variation impacts gene regulation. Both the Zeitlinger Lab and the Kundaje Lab are already using BPNet to reliably identify binding motifs for other cell types, relate motifs to biophysical parameters, and learn other structural features in the genome such as those associated with DNA packaging. To enable other scientists to use BPNet and adapt it for their own needs, the researchers have made the entire software framework available with documentation and tutorials.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 31/03/2021 13:13:23
In regard to a virtual universe, I consider that to be the smallest element of possible information: a tuple.
AFAIK, the smallest unit of information is a bit, or binary digit, which is supposed to reduce uncertainty by half.
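To make "reduce uncertainty by half" concrete (my own illustration, not from the lecture above): identifying one item among N equally likely possibilities takes log2(N) bits, because each yes/no answer halves the remaining candidates.

Code:
import math

# Each bit (one yes/no answer) halves the set of equally likely candidates,
# so pinpointing one of N possibilities needs log2(N) bits.
for n in (2, 8, 1024):
    print(n, "possibilities ->", math.log2(n), "bits")
# 2 -> 1.0, 8 -> 3.0, 1024 -> 10.0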
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 06/04/2021 13:55:32
Graph databases: The best kept secret for effective AI
Quote
Emil Eifrem, Neo4j Co-Founder and CEO explains why connected data is the key to more accurate, efficient and credible learning systems. Using real world use cases ranging from space engineering to investigative journalism, he will outline how a relationships-first approach adds context to data - the key to explainable, well-informed predictions.
What I had tried to do previously was basically to create a graph database on top of a standard relational database system. If only I had known this earlier, I might have saved a significant amount of time and effort. It makes me feel like I tried to reinvent the wheel.
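For what it's worth, the emulation I attempted looks roughly like this (a reconstructed sketch, not my actual schema): nodes and edges become two tables, and every extra hop in a traversal costs another self-join, which is exactly where relational databases start to hurt.

Code:
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE nodes (id TEXT PRIMARY KEY, kind TEXT);
    CREATE TABLE edges (src TEXT, rel TEXT, dst TEXT);
""")
con.executemany("INSERT INTO nodes VALUES (?, ?)",
                [("alice", "person"), ("bob", "person"), ("acme", "company")])
con.executemany("INSERT INTO edges VALUES (?, ?, ?)",
                [("alice", "knows", "bob"), ("bob", "works_at", "acme")])

# A two-hop traversal already needs a self-join; each extra hop adds one more.
rows = con.execute("""
    SELECT e2.dst FROM edges e1
    JOIN edges e2 ON e1.dst = e2.src
    WHERE e1.src = 'alice'
""").fetchall()
print(rows)  # [('acme',)]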
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 06/04/2021 14:11:48
What Is Edge Computing?
Quote
Another jargon busting video - Here I explain in simple terms what edge computing or sometimes called fog computing is. I provide practical examples of computing at the edge of the network - in phones, cameras, etc.
In the future, human brains will be part of an edge computing network, which itself is part of a universal consciousness running the virtual universe. No single human individual has the capability of running the kernel and core processes of the virtual universe, which would run on cloud computing servers, due to the sheer data size and parallel processing power required. To make significant contributions, we would have to establish direct communication interfaces with computers to increase the data exchange rate, breaking past the natural limits of the biomechanical channels currently used, such as typing, hand gestures, reading, hearing, or voice commands.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 07/04/2021 07:01:35
A Practical Guide to Graph Databases - David Bechberger
Quote
With the emergence of offerings on both AWS (Neptune) and Azure (CosmosDB) within the past year, it is fair to say that graph databases are one of the hottest trends and that they are here to stay. So what are graph databases all about, then? You can read article after article about how great they are and that they will solve all your problems better than your relational database, but it's difficult to really find any practical information about them.
This talk will start with a short primer on graph databases and the ecosystem but will then quickly transition to discussing the practical aspects of how to apply them to solve real world business problems. We will dive into what makes a good use case and what does not. We will then follow this up with some real world examples of some of the common patterns and anti-patterns of using graph databases. If you haven't been scared away by this point we will end by showing you some of the powerful insights that graph databases can provide you.
I wish I had known this back then; it would have saved the time I spent trying to emulate a graph database on a traditional relational database.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 09/04/2021 06:43:26
Quote
Edge computing places workloads closer to where data is created and where actions need to be taken. It addresses the unprecedented scale and complexity of data created by connected devices. As more and more data comes in from remote IoT edge devices and servers, it’s important to act on the data quickly. Acting quickly can help companies seize new business opportunities, increase operational efficiency and improve customer experiences.

In this video, Rob High, IBM Fellow and CTO, provides insights into the basic concepts and key use cases of edge computing.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 12/04/2021 11:12:57
https://singularityhub.com/2021/04/04/openais-gpt-3-algorithm-is-now-producing-billions-of-words-a-day/

Quote
When OpenAI released its huge natural-language algorithm GPT-3 last summer, jaws dropped. Coders and developers with special access to an early API rapidly discovered new (and unexpected) things GPT-3 could do with naught but a prompt. It wrote passable poetry, produced decent code, calculated simple sums, and with some edits, penned news articles.

All this, it turns out, was just the beginning. In a recent blog post update, OpenAI said that tens of thousands of developers are now making apps on the GPT-3 platform.

Over 300 apps (and counting) use GPT-3, and the algorithm is generating 4.5 billion words a day for them.
Quote
The Coming Torrent of Algorithmic Content
Each month, users publish about 70 million posts on WordPress, which is, hands down, the dominant content management system online.

Assuming an average article is 800 words long—which is speculation on my part, but not super long or short—people are churning out some 56 billion words a month or 1.8 billion words a day on WordPress.

If our average word count assumption is in the ballpark, then GPT-3 is producing over twice the daily word count of WordPress posts. Even if you make the average more like 2,000 words per article (which seems high to me) the two are roughly equivalent.

Now, not every word GPT-3 produces is a word worth reading, and it’s not necessarily producing blog posts (more on applications below). But in either case, just nine months in, GPT-3’s output seems to foreshadow a looming torrent of algorithmic content.
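The article's arithmetic checks out, using its own 800-words-per-post assumption:

Code:
posts_per_month = 70_000_000   # WordPress posts per month, per the article
words_per_post = 800           # the article's own assumption
words_per_day = posts_per_month * words_per_post / 30
print(f"{words_per_day / 1e9:.1f} billion words/day")
# ~1.9 (the article rounds to 1.8), versus GPT-3's 4.5 billion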
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 12/04/2021 13:06:06
https://siliconangle.com/2021/04/10/new-era-innovation-moores-law-not-dead-ai-ready-explode/
Quote
Processing goes to the edge – networks and storage become the bottlenecks
We recently reported Microsoft Corp. Chief Executive Satya Nadella’s epic quote that we’ve reached peak centralization. The graphic below paints a picture that is telling. We just shared above that processing power is accelerating at unprecedented rates. And costs are dropping like a rock. Apple’s A14 costs the company $50 per chip. Arm at its v9 announcement said that it will have chips that can go into refrigerators that will optimize energy use and save 10% annually on power consumption. They said that chip will cost $1 — a buck to shave 10% off your electricity bill from the fridge.
(https://d2axcg2cspgbkk.cloudfront.net/wp-content/uploads/Breaking-Analysis_-Moores-Law-is-Accelerating-and-AI-is-Ready-to-Explode-3.jpg)
Quote
Processing is plentiful and cheap. But look at where the expensive bottlenecks are: networks and storage. So what does this mean?

It means that processing is going to get pushed to the edge – wherever the data is born. Storage and networking will become increasingly distributed and decentralized. With custom silicon and processing power placed throughout the system with AI embedded to optimize workloads for latency, performance, bandwidth, security and other dimensions of value.

And remember, most of the data – 99% – will stay at the edge. We like to use Tesla Inc. as an example. The vast majority of data a Tesla car creates will never go back to the cloud. It doesn’t even get persisted. Tesla saves perhaps five minutes of data. But some data will connect occasionally back to the cloud to train AI models – we’ll come back to that.

Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 12/04/2021 13:08:21
Quote
Massive increases in processing power and cheap silicon will power the next wave of AI, machine intelligence, machine learning and deep learning.
Quote
We sometimes use artificial intelligence and machine intelligence interchangeably. This notion comes from our collaborations with author David Moschella. Interestingly, in his book “Seeing Digital,” Moschella says “there’s nothing artificial” about this:

There’s nothing artificial about machine intelligence just like there’s nothing artificial about the strength of a tractor.

It’s a nuance, but precise language can often bring clarity. We hear a lot about machine learning and deep learning and think of them as subsets of AI. Machine learning applies algorithms and code to data to get “smarter” – make better models, for example, that can lead to augmented intelligence and better decisions by humans, or machines. These models improve as they get more data and iterate over time.

Deep learning is a more advanced type of machine learning that uses more complex math.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 13/04/2021 12:20:28

https://pub.towardsai.net/openai-brings-introspection-to-reinforcement-learning-agents-39cbe4cf2af3

Quote
OpenAI Brings Introspection to Reinforcement Learning Agents
The research around Evolved Policy Gradients attempts to recreate introspection in reinforcement learning models.

Introspection is one of those magical cognitive abilities that differentiate humans from other species. Conceptually, introspection can be defined as the ability to examine conscious thoughts and feelings. Introspection also plays a pivotal role in how humans learn. Have you ever tried to self-learn a new skill, such as a new language? Even without any external feedback, you can quickly assess whether you are making progress on aspects such as vocabulary or pronunciation. Wouldn't it be great if we could apply some of the principles of introspection to artificial intelligence (AI) disciplines such as reinforcement learning (RL)?
The magic of introspection comes from the fact that humans have access to very well shaped internal reward functions, derived from prior experience on other tasks, and through the course of biological evolution. That model highly contrasts with RL agents that are fundamentally coded to start from scratch on any learning task relying mainly on external feedback. Not surprisingly, most RL models take substantially more time than humans to learn similar tasks. Recently, researchers from OpenAI published a new paper that proposes a method to address this challenge by creating RL models that know what it means to make progress on a new task, by having experienced making progress on similar tasks in the past.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 15/04/2021 13:53:38
How Graph Technology Is Changing Artificial Intelligence and Machine Learning


Quote
Graph enhancements to Artificial Intelligence and Machine Learning are changing the landscape of intelligent applications. Beyond improving accuracy and modeling speed, graph technologies make building AI solutions more accessible. Join us to hear about 6 areas at the forefront of graph enhanced AI and ML, and find out which techniques are commonly used today and which hold the potential for disrupting industries.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 15/04/2021 14:37:41

Edge computing definitions and concepts. This non-technical video focuses on edge computing and cloud computing, as well as edge computing and the deployment of vision recognition and other AI applications. Also introduced are mesh networks, SBC (single board computer) edge hardware, and fog computing.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 17/04/2021 21:01:23
https://syncedreview.com/2021/04/07/deepmind-microsoft-allen-ai-uw-researchers-convert-pretrained-transformers-into-rnns-lowering-memory-cost-while-retaining-high-accuracy/
Quote
Powerful transformer models have been widely used in autoregressive generation, where they have advanced the state-of-the-art beyond recurrent neural networks (RNNs). However, because the output words for these models are incrementally predicted conditioned on the prefix, the generation requires quadratic time complexity with regard to sequence length.

As the performance of transformer models increasingly relies on large-scale pretrained transformers, this long sequence generation issue has become increasingly problematic. To address this, a research team from the University of Washington, Microsoft, DeepMind and Allen Institute for AI have developed a method to convert a pretrained transformer into an efficient RNN. Their Transformer-to-RNN (T2R) approach speeds up generation and reduces memory cost.
Quote
Overall, the results validated that T2R achieves efficient autoregressive generation while retaining high accuracy, proving that large-scale pretrained models can be compressed into efficient inference models that facilitate downstream applications.
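The article doesn't reproduce T2R's exact construction, but the general trick behind transformer-to-RNN conversion can be sketched: replace softmax attention with a kernel feature map, so the attention context can be accumulated as a fixed-size recurrent state instead of re-attending over the whole prefix at every step. A toy NumPy illustration of that idea (my own code, not the paper's):

Code:
import numpy as np

def feature_map(x):
    # phi(x) = elu(x) + 1, a positive feature map commonly used in linear attention
    return np.where(x > 0, x + 1.0, np.exp(x))

def linear_attention_as_rnn(Q, K, V):
    # Causal linear attention computed recurrently: O(n) time with a
    # fixed-size state, versus O(n^2) for softmax attention over the prefix.
    n, d = Q.shape
    S = np.zeros((d, V.shape[1]))   # running sum of phi(k_i) v_i^T
    z = np.zeros(d)                 # running sum of phi(k_i)
    out = np.zeros((n, V.shape[1]))
    for t in range(n):
        q, k = feature_map(Q[t]), feature_map(K[t])
        S += np.outer(k, V[t])
        z += k
        out[t] = (q @ S) / (q @ z + 1e-6)
    return out

rng = np.random.default_rng(0)
Q, K, V = rng.normal(size=(3, 6, 4))   # sequence length 6, head dimension 4
print(linear_attention_as_rnn(Q, K, V).shape)  # (6, 4)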
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 18/04/2021 13:37:43
https://techxplore.com/news/2021-04-deep-learning-code-humans.html
Toward deep-learning models that can reason about code more like humans
Quote
Whatever business a company may be in, software plays an increasingly vital role, from managing inventory to interfacing with customers. Software developers, as a result, are in greater demand than ever, and that's driving the push to automate some of the easier tasks that take up their time.
Quote
A machine capable of programming itself once seemed like science fiction. But an exponential rise in computing power, advances in natural language processing, and a glut of free code on the internet have made it possible to automate at least some aspects of software design.
Trained on GitHub and other program-sharing websites, code-processing models learn to generate programs just as other language models learn to write news stories or poetry. This allows them to act as a smart assistant, predicting what software developers will do next, and offering an assist. They might suggest programs that fit the task at hand, or generate program summaries to document how the software works. Code-processing models can also be trained to find and fix bugs. But despite their potential to boost productivity and improve software quality, they pose security risks that researchers are just starting to uncover.
Quote
"Our framework for attacking the model, and retraining it on those particular exploits, could potentially help code-processing models get a better grasp of the program's intent," says Liu, co-senior author of the study. "That's an exciting direction waiting to be explored."

In the background, a larger question remains: what exactly are these black-box deep-learning models learning? "Do they reason about code the way humans do, and if not, how can we make them?" says O'Reilly. "That's the grand challenge ahead for us."
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 19/04/2021 12:10:32
Top Use Cases of Graph Databases
Quote
Jonny Cheetham, Sales Director: Graph databases are a rising tide in the world of big data insights, and the enterprises that tap into their power realize significant competitive advantages.
So how might your enterprise leverage graph databases to generate competitive insights and derive significant business value from your connected data? This webinar will show you the top five most impactful and profitable use cases of graph databases.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 20/04/2021 08:03:07
Do Neural Networks Think Like Our Brain? OpenAI Answers!
https://openai.com/blog/multimodal-neurons/
Quote
Multimodal Neurons in Artificial Neural Networks
We’ve discovered neurons in CLIP that respond to the same concept whether presented literally, symbolically, or conceptually. This may explain CLIP’s accuracy in classifying surprising visual renditions of concepts, and is also an important step toward understanding the associations and biases that CLIP and similar models learn.

Quote
Our discovery of multimodal neurons in CLIP gives us a clue as to what may be a common mechanism of both synthetic and natural vision systems—abstraction. We discover that the highest layers of CLIP organize images as a loose semantic collection of ideas, providing a simple explanation for both the model’s versatility and the representation’s compactness.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 21/04/2021 05:28:20
3D deep neural network precisely reconstructs freely-behaving animal's movements
Quote
Animals are constantly moving and behaving in response to instructions from the brain. But while there are advanced techniques for measuring these instructions in terms of neural activity, there is a paucity of techniques for quantifying the behavior itself in freely moving animals. This inability to measure the key output of the brain limits our understanding of the nervous system and how it changes in disease.

A new study by researchers at Duke University and Harvard University introduces an automated tool that can readily capture behavior of freely behaving animals and precisely reconstruct their three dimensional (3D) pose from a single video camera and without markers.

The April 19 study in Nature Methods led by Timothy W. Dunn, Assistant Professor, Duke University, and Jesse D. Marshall, postdoctoral researcher, Harvard University, describes a new 3D deep-neural network, DANNCE (3-Dimensional Aligned Neural Network for Computational Ethology). The study follows the team's 2020 study in Neuron which revealed the groundbreaking behavioral monitoring system, CAPTURE (Continuous Appendicular and Postural Tracking using Retroreflector Embedding), which uses motion capture and deep learning to continuously track the 3D movements of freely behaving animals. CAPTURE yielded an unprecedented detailed description of how animals behave. However, it required using specialized hardware and attaching markers to animals, making it a challenge to use.

"With DANNCE we relieve this requirement," said Dunn. "DANNCE can learn to track body parts even when they can't be seen, and this increases the types of environments in which the technique can be used. We need this invariance and flexibility to measure movements in naturalistic environments more likely to elicit the full and complex behavioral repertoire of these animals."

DANNCE works across a broad range of species and is reproducible across laboratories and environments, ensuring it will have a broad impact on animal—and even human—behavioral studies. It has a specialized neural network tailored to 3D pose tracking from video. A key aspect is that its 3D feature space is in physical units (meters) rather than camera pixels. This allows the tool to more readily generalize across different camera arrangements and laboratories. In contrast, previous approaches to 3D pose tracking used neural networks tailored to pose detection in two-dimensions (2D), which struggled to readily adapt to new 3D viewpoints.

https://techxplore.com/news/2021-04-3d-deep-neural-network-precisely.html

Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 23/04/2021 14:46:49
Do Neural Networks Think Like Our Brain? OpenAI Answers!
https://openai.com/blog/multimodal-neurons/
Some of the new AI models are getting closer to human intelligence. It has been shown that they make similar types of mistakes in visual classification. Previously, AI models made mistakes that no human would ever make, which means their working principles were significantly different. So this is clearly progress, and it seems to make Ray Kurzweil's prediction of human-level AI by 2029 more plausible.
Previously, some AI researchers predicted that conquering Go would take 100 years; AlphaGo proved that false. That prediction was a product of linear thinking, which grossly deviates from real technological advancement, which tends to follow an exponential or even double-exponential curve.
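A toy illustration of why linear extrapolation fails on exponential processes (made-up numbers, purely illustrative):

Code:
# A capability that doubles every 2 years vs. a forecast that adds a fixed
# increment every 2 years. Both start at 1.
exponential, linear = 1.0, 1.0
for year in range(0, 21, 2):
    print(year, exponential, linear)
    exponential *= 2
    linear += 1
# After 20 years the exponential value is 1024x; the linear forecast says 11x.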
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 24/04/2021 09:54:50
https://www.nextplatform.com/2021/04/22/vertically-unchallenged/
Quote
Components make compute and storage servers, and servers with application plane, control plane, and data plane software running atop them or alongside them make systems, and workflows across systems make platforms. The end state goal of any system architect is really creating a platform. If you don’t have an integrated platform, then what you have is an IT nightmare.

That is what four decades of distributed computing has really taught us, if you boil off all the pretty water that obscures with diffraction and bubbling and look very hard at the bottom of the pot into the substrate of bullshit left behind.

Maybe we should have something called a platform architect? And maybe they don’t have those titles at the big hyperscalers and public cloud builders, but that is, in fact, what these companies are doing. And for those of us who have been around for a while, it is with a certain amount of humor that we are seeing the rise of the most vertically integrated, proprietary platforms that the world has seen since the IBM System/360 mainframe and the DEC VAX, IBM AS/400, and HP 3000 – there was no “E” back then – minicomputers in the 1960s and the 1970s.
The vision of an integrated system has been around for decades now, and it will keep improving for decades to come.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 26/04/2021 22:40:32
Quote
We are starting to see more exascale and large supercomputing sites benchmark and project on deep learning capabilities of systems designed for HPC applications but only a few have run system-wide tests to see how their machines might stack up against standard CNN and other metrics.

In China, however, we finally have some results about the potential for leadership-class systems to tackle deep learning. That is interesting in itself, but in the case of AI benchmarks on the Tianhe-3 exascale prototype supercomputer, we also get a sense of how that system’s unique Arm-based architecture performs for math that is quite different than that required for HPC modeling/simulation.
Quote
It is hard to tell what to expect from this novel architecture in terms of AI workloads but for us, the news is that the system is operational and teams are at least exploring what might be possible in scaling deep learning using an Arm-based architecture and unique interconnect. It also shows that there is still work to be done to optimize Arm-based processors for even routine AI benchmarks to keep pace with other companies with CPUs and accelerators.
http://www.nextplatform.com/2021/04/19/chinas-exascale-prototype-supercomputer-tests-ai-workloads/
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 27/04/2021 21:09:12
Advancing AI With a Supercomputer: A Blueprint for an Optoelectronic ‘Brain’
Quote
Building a computer that can support artificial intelligence at the scale and complexity of the human brain will be a colossal engineering effort. Now researchers at the National Institute of Standards and Technology have outlined how they think we’ll get there.

How, when, and whether we’ll ever create machines that can match our cognitive capabilities is a topic of heated debate among both computer scientists and philosophers. One of the most contentious questions is the extent to which the solution needs to mirror our best example of intelligence so far: the human brain.

Rapid advances in AI powered by deep neural networks—which despite their name operate very differently than the brain—have convinced many that we may be able to achieve “artificial general intelligence” without mimicking the brain’s hardware or software.

Others think we’re still missing fundamental aspects of how intelligence works, and that the best way to fill the gaps is to borrow from nature. For many that means building “neuromorphic” hardware that more closely mimics the architecture and operation of biological brains.

The problem is that the existing computer technology we have at our disposal looks very different from biological information processing systems, and operates on completely different principles. For a start, modern computers are digital and neurons are analog. And although both rely on electrical signals, they come in very different flavors, and the brain also uses a host of chemical signals to carry out processing.

Now though, researchers at NIST think they’ve found a way to combine existing technologies in a way that could mimic the core attributes of the brain. Using their approach, they outline a blueprint for a “neuromorphic supercomputer” that could not only match, but surpass the physical limits of biological systems.

The key to their approach, outlined in Applied Physics Letters, is a combination of electronics and optical technologies. The logic is that electronics are great at computing, while optical systems can transmit information at the speed of light, so combining them is probably the best way to mimic the brain’s excellent computing and communication capabilities.

https://singularityhub.com/2021/04/26/the-next-supercomputer-a-blueprint-for-an-optoelectronic-brain/
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 29/04/2021 07:33:10
https://www.nature.com/articles/d41586-021-00530-0
Robo-writers: the rise and risks of language-generating AI
A remarkable AI can write like humans — but with no understanding of what it’s saying.
Quote
In June 2020, a new and powerful artificial intelligence (AI) began dazzling technologists in Silicon Valley. Called GPT-3 and created by the research firm OpenAI in San Francisco, California, it was the latest and most powerful in a series of ‘large language models’: AIs that generate fluent streams of text after imbibing billions of words from books, articles and websites. GPT-3 had been trained on around 200 billion words, at an estimated cost of tens of millions of dollars.

The developers who were invited to try out GPT-3 were astonished. “I have to say I’m blown away,” wrote Arram Sabeti, founder of a technology start-up who is based in Silicon Valley. “It’s far more coherent than any AI language system I’ve ever tried. All you have to do is write a prompt and it’ll add text it thinks would plausibly follow. I’ve gotten it to write songs, stories, press releases, guitar tabs, interviews, essays, technical manuals. It’s hilarious and frightening. I feel like I’ve seen the future.”

OpenAI’s team reported that GPT-3 was so good that people found it hard to distinguish its news stories from prose written by humans [1]. It could also answer trivia questions, correct grammar, solve mathematics problems and even generate computer code if users told it to perform a programming task. Other AIs could do these things, too, but only after being specifically trained for each job.

Large language models are already business propositions. Google uses them to improve its search results and language translation; Facebook, Microsoft and Nvidia are among other tech firms that make them. OpenAI keeps GPT-3’s code secret and offers access to it as a commercial service. (OpenAI is legally a non-profit company, but in 2019 it created a for-profit subentity called OpenAI LP and partnered with Microsoft, which invested a reported US$1 billion in the firm.) Developers are now testing GPT-3’s ability to summarize legal documents, suggest answers to customer-service enquiries, propose computer code, run text-based role-playing games or even identify at-risk individuals in a peer-support community by labelling posts as cries for help.

(https://media.nature.com/lw800/magazine-assets/d41586-021-00530-0/d41586-021-00530-0_18907396.png)
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 30/04/2021 14:23:58
https://www.nature.com/articles/s41586-021-03451-0
Quote
Towards complete and error-free genome assemblies of all vertebrate species

High-quality and complete reference genome assemblies are fundamental for the application of genomics to biology, disease, and biodiversity conservation. However, such assemblies are available for only a few non-microbial species [1-4]. To address this issue, the international Genome 10K (G10K) consortium [5,6] has worked over a five-year period to evaluate and develop cost-effective methods for assembling highly accurate and nearly complete reference genomes. Here we present lessons learned from generating assemblies for 16 species that represent six major vertebrate lineages. We confirm that long-read sequencing technologies are essential for maximizing genome quality, and that unresolved complex repeats and haplotype heterozygosity are major sources of assembly error when not handled correctly. Our assemblies correct substantial errors, add missing sequence in some of the best historical reference genomes, and reveal biological discoveries. These include the identification of many false gene duplications, increases in gene sizes, chromosome rearrangements that are specific to lineages, a repeated independent chromosome breakpoint in bat genomes, and a canonical GC-rich pattern in protein-coding genes and their regulatory regions. Adopting these lessons, we have embarked on the Vertebrate Genomes Project (VGP), an international effort to generate high-quality, complete reference genomes for all of the roughly 70,000 extant vertebrate species and to help to enable a new era of discovery across the life sciences.
Quote
The Vertebrate Genomes Project
Building on this initial set of assembled genomes and the lessons learned, we propose to expand the VGP to deeper taxonomic phases, beginning with phase 1: representatives of approximately 260 vertebrate orders, defined here as lineages separated by 50 million or more years of divergence from each other. Phase 2 will encompass species that represent all approximately 1,000 vertebrate families; phase 3, all roughly 10,000 genera; and phase 4, nearly all 71,657 extant named vertebrate species (Supplementary Note 5, Supplementary Fig. 3). To accomplish such a project within 10 years, we will need to scale up to completing 125 genomes per week, without sacrificing quality. This includes sample permitting, high molecular weight DNA extractions, sequencing, meta-data tracking, and computational infrastructure. We will take advantage of continuing improvements in genome sequencing technology, assembly, and annotation, including advances in PacBio HiFi reads, Oxford Nanopore reads, and replacements for 10XG reads (Supplementary Note 6), while addressing specific scientific questions at increasing levels of phylogenetic refinement. Genomic technology advances quickly, but we believe the principles of our pipeline and the lessons learned will be applicable to future efforts. Areas in which improvement is needed include more accurate and complete haplotype phasing, base-call accuracy, and resolution of long repetitive regions such as telomeres, centromeres, and sex chromosomes. The VGP is working towards these goals and making all data, protocols, and pipelines openly available (Supplementary Notes 5, 7).

Despite remaining imperfections, our reference genomes are the most complete and highest quality to date for each species sequenced, to our knowledge. When we began to generate genomes beyond the Anna’s hummingbird in 2017, only eight vertebrate species in GenBank had genomes that met our target continuity metrics, and none were haplotype phased (Supplementary Table 23). The VGP pipeline introduced here has now been used to complete assemblies of more than 130 species of similar or higher quality (Supplementary Note 5; BioProject PRJNA489243). We encourage the scientific community to use and evaluate the assemblies and associated raw data, and to provide feedback towards improving all processes for complete and error-free assembled genomes of all species.
It seems that in the future we won't need zoos filled with captive animals just to preserve biodiversity. However, genetic information alone is not enough to reproduce fully functional organisms; compatible epigenetic environments are also necessary. A tiger embryo inside a chicken egg is unlikely to grow into a baby tiger.
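As a quick sanity check on the quoted throughput target (assuming sequencing runs year-round):

Code:
genomes_per_week = 125
total = genomes_per_week * 52 * 10   # ten years of sustained sequencing
print(total)  # 65000 -- the right ballpark for ~71,657 extant vertebrate species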
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 30/04/2021 23:44:53
The least a human individual can contribute to society without doing anything is to provide a backup of genetic and epigenetic information, which also adds to biodiversity. This contribution is insignificant while there are billions of people, but it would become important if only a few were left.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 01/05/2021 07:50:42
Elon Musk (@elonmusk) tweeted at 5:45 AM on Fri, Apr 30, 2021:
Quote
A major part of real-world AI has to be solved to make unsupervised, generalized full self-driving work, as the entire road system is designed for biological neural nets with optical imagers
But that may no longer be the case in the future. At least two things could change it:
- When most vehicles are autonomous.
- When VTOL flying cars become abundant, which would make roads irrelevant.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 02/05/2021 05:33:00
I'd like to share a recent newsfeed from my e-mail. It seems similar to how brains have evolved.

MorphNet is a Google Model to Build Faster and Smaller Neural Networks
The model makes inroads in the optimization of the architecture of neural networks.

Quote
Designing deep neural networks these days is more art than science. In the deep learning space, any given problem can be addressed with a fairly large number of neural network architectures. In that sense, designing a deep neural network from the ground up for a given problem can be incredibly expensive in terms of time and computational resources. Additionally, given the lack of guidance in the space, we often end up producing neural network architectures that are suboptimal for the task at hand. About two years ago, artificial intelligence (AI) researchers from Google published a paper proposing a method called MorphNet to optimize the design of deep neural networks.
Quote
Automated neural network design is one of the most active areas of research in the deep learning space. The most traditional approach to neural network architecture design involves sparse regularizers using methods such as L1. While this technique has proven effective at reducing the number of connections in a neural network, it quite often ends up producing suboptimal architectures. Another approach involves using search techniques to find an optimal neural network architecture for a given problem. That method has been able to generate highly optimized neural network architectures, but it requires an exorbitant number of trial-and-error attempts, which often proves computationally prohibitive. As a result, neural network architecture search has only proven effective in very specialized scenarios. Factoring in the limitations of the previous methods, we can arrive at three key characteristics of effective automated neural network design techniques:
a) Scalability: The automated design approach should be scalable to large datasets and models.
b) Multi-Factor Optimization: An automated method should be able to optimize the structure of a deep neural network targeting specific resources.
c) Optimal: An automated neural network design should produce an architecture that improves performance while reducing the usage of the target resource.

Quote
MorphNet
Google’s MorphNet approaches the problem of automated neural network architecture design from a slightly different angle. Instead of trying numerous architectures across a large design space, MorphNet starts with an existing architecture for a similar problem and, in one shot, optimizes it for the task at hand.
MorphNet optimizes a deep neural network by iteratively shrinking and expanding its structure. In the shrinking phase, MorphNet identifies inefficient neurons and prunes them from the network by applying a sparsifying regularizer such that the total loss function of the network includes a cost for each neuron. Just doing this typically results in a neural network that consumes less of the targeted resource, but typically achieves lower performance. However, MorphNet applies a specific shrinking model that not only highlights which layers of a neural network are over-parameterized, but also which layers are bottlenecked. Instead of applying a uniform cost per neuron, MorphNet calculates a neuron cost with respect to the targeted resource. As training progresses, the optimizer is aware of the resource cost when calculating gradients, and thus learns which neurons are resource-efficient and which can be removed.
https://medium.com/@jrodthoughts/morphnet-is-a-google-model-to-build-faster-and-smaller-neural-networks-f890276da456
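A heavily simplified sketch of the shrinking-phase regularizer described above (my own illustration; MorphNet's actual cost model is more elaborate): each neuron's scale factor is penalized in proportion to the resource it consumes, so training itself learns which expensive neurons to zero out and prune.

Code:
import numpy as np

def resource_weighted_l1(gammas, cost_per_neuron, lam=1e-3):
    # gammas: per-neuron scale factors (e.g. batch-norm gammas); a neuron
    # whose gamma is driven to ~0 can be pruned in the shrinking phase.
    # cost_per_neuron: the targeted resource, e.g. FLOPs per neuron.
    return lam * np.sum(cost_per_neuron * np.abs(gammas))

gammas = np.array([0.9, 0.02, 0.5])
flops = np.array([1e6, 1e6, 4e6])    # the third neuron is 4x as expensive
penalty = resource_weighted_l1(gammas, flops)
# total_loss = task_loss + penalty: the optimizer now trades accuracy for FLOPs.
print(penalty)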
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 03/05/2021 09:49:59
DINO: Emerging Properties in Self-Supervised Vision Transformers (Facebook AI Research Explained)
Quote
Self-Supervised Learning is the final frontier in Representation Learning: Getting useful features without any labels. Facebook AI's new system, DINO, combines advances in Self-Supervised Learning for Computer Vision with the new Vision Transformer (ViT) architecture and achieves impressive results without any labels. Attention maps can be directly interpreted as segmentation maps, and the obtained representations can be used for image retrieval and zero-shot k-nearest neighbor classifiers (KNNs).

OUTLINE:
0:00 - Intro & Overview
6:20 - Vision Transformers
9:20 - Self-Supervised Learning for Images
13:30 - Self-Distillation
15:20 - Building the teacher from the student by moving average
16:45 - DINO Pseudocode
23:10 - Why Cross-Entropy Loss?
28:20 - Experimental Results
33:40 - My Hypothesis why this works
38:45 - Conclusion & Comments
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 03/05/2021 13:08:47
https://theconversation.com/engineered-viruses-can-fight-the-rise-of-antibiotic-resistant-bacteria-154337
Quote
As the world fights the SARS-CoV-2 virus causing the COVID-19 pandemic, another group of dangerous pathogens looms in the background. The threat of antibiotic-resistant bacteria has been growing for years and appears to be getting worse. If COVID-19 taught us one thing, it’s that governments should be prepared for more global public health crises, and that includes finding new ways to combat rogue bacteria that are becoming resistant to commonly used drugs.

In contrast to the current pandemic, viruses may be the heroes of the next epidemic rather than the villains. Scientists have shown that viruses could be great weapons against bacteria that are resistant to antibiotics.
Quote
Since the discovery of penicillin in 1928, antibiotics have changed modern medicine. These small molecules fight off bacterial infections by killing or inhibiting the growth of bacteria. The mid-20th century was called the Golden Age for antibiotics, a time when scientists were discovering dozens of new molecules for many diseases.

This high was soon followed by a devastating low. Researchers saw that many bacteria were evolving resistance to antibiotics. Bacteria in our bodies were learning to evade medicine by evolving and mutating to the point that antibiotics no longer worked.

As an alternative to antibiotics, some researchers are turning to a natural enemy of bacteria: bacteriophages. Bacteriophages are viruses that infect bacteria. They outnumber bacteria 10 to 1 and are considered the most abundant organisms on the planet.

Bacteriophages, also known as phages, survive by infecting bacteria, replicating and bursting out from their host, which destroys the bacterium.

Harnessing the power of phages to fight bacteria isn’t a new idea. In fact, the first recorded use of so-called phage therapy was over a century ago. In 1919, French microbiologist Félix d'Hérelle used a cocktail of phages to treat children suffering from severe dysentery.

D'Hérelle’s actions weren’t an accident. In fact, he is credited with co-discovering phages, and he pioneered the idea of using bacteria’s natural enemies in medicine. He would go on to stop cholera outbreaks in India and plague in Egypt.

Phage therapy is not a standard treatment you can find in your local hospital today. But excitement about phages has grown over the past few years. In particular, scientists are using new knowledge about the complex relationship between phages and bacteria to improve phage therapy. By engineering phages to better target and destroy bacteria, scientists hope to overcome antibiotic resistance.
Quote
Now scientists are hoping to use the knowledge about CRISPR systems to engineer phages to destroy dangerous bacteria.

When the engineered phage locates specific bacteria, the phage injects CRISPR proteins inside the bacteria, cutting and destroying the microbes’ DNA. Scientists have found a way to turn defense into offense. The proteins normally involved in protecting against viruses are repurposed to target and destroy the bacteria’s own DNA. The scientists can specifically target the DNA that makes the bacteria resistant to antibiotics, making this type of phage therapy extremely effective.
Quote
Science is only half of the solution when it comes to fighting these microbes. Commercialization and regulation are important to ensure that this technology is in society’s toolkit for fending off a worldwide spread of antibiotic-resistant bacteria.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 08/05/2021 14:58:08
https://neurosciencenews.com/3d-neuroimaging-18378/
New Imaging Technique Captures How Brain Moves in Stunning Detail
Quote
Summary: A new neuroimaging technique captures the brain in motion in real-time, generating a 3D view and with improved detail. The new technology could help clinicians to spot hard-to-detect neurological conditions.

Source: Stevens Institute of Technology
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 10/05/2021 05:53:14
How Close Are We to Harnessing Synthetic Life?

Quote
Scientists are exploring how to edit genomes and even create brand new ones that never existed before, but how close are we to harnessing synthetic life?

Scientists have made major strides when it comes to understanding the base code that underlies all living things—but what if we could program living cells like software?

The principle behind synthetic biology, the emerging study of building living systems, lies in this ability to synthesize life. An ability to create animal products, individualized medical therapies, and even transplantable organs, all starting with synthetic DNA and cells in a lab.

There are two main schools of thought when it comes to synthesizing life: building artificial cells from the bottom-up or engineering microorganisms so significantly that it resynthesizes and redesigns the genome.

With genetic engineering tools becoming more and more accessible, researchers want to use these synthesized genomes to enhance human health with regards to things like detecting infections or environmental pollutants. Bacterial cells can be engineered that will detect toxic chemicals.

And these synthesized bacteria could potentially protect us from, for example, consuming toxins in contaminated water.

The world of synthetic biology goes beyond human health though, it can be used in a variety of industries, including fashion. Researchers hope to come up with lab-made versions of materials like leather or silk.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 11/05/2021 05:58:24
It's Alive, But Is It Life: Synthetic Biology and the Future of Creation
Quote
For decades, biologists have read and edited DNA, the code of life. Revolutionary developments are giving scientists the power to write it. Instead of tinkering with existing life forms, synthetic biologists may be on the verge of writing the DNA of a living organism from scratch. In the next decade, according to some, we may even see the first synthetic human genome. Join a distinguished group of synthetic biologists, geneticists and bioengineers who are edging closer to breathing life into matter.

This program is part of the Big Ideas Series, made possible with support from the John Templeton Foundation.

Original Program Date: June 4, 2016
MODERATOR: Robert Krulwich
PARTICIPANTS: George Church, Drew Endy, Tom Knight, Pamela Silver
Quote
Synthetic Biology and the Future of Creation 00:00

Participant Intros 3:25

Ordering DNA from the internet 8:10

How much does it cost to make a synthetic human? 13:04

Why is yeast the best catalyst 20:10

How George Church printed 90 billion copies of his book 26:05

Creating synthetic rose oil 28:35

Safety engineering and synthetic biology 37:15

Do we want to be invaded by bad bacteria? 45:26

Do you need human genes to create human cells? 55:09

The standard of DNA sequencing in utero 1:02:27

The science community is divided by closed press meetings 1:11:30

The Human Genome Project. What is it? 1:21:45
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 14/05/2021 05:39:05
DeepMind Wants to Reimagine One of the Most Important Algorithms in Machine Learning.

In one of the most important papers this year, DeepMind proposed a multi-agent structure to redefine PCA.
Quote
Principal component analysis (PCA) is one of the key algorithms in any machine learning curriculum. Initially created in the early 1900s, PCA is a fundamental algorithm for understanding data in high-dimensional spaces, which are common in deep learning problems. More than a century after its invention, PCA is such a key part of modern deep learning frameworks that very few question whether there could be a better approach. Just a few days ago, DeepMind published a fascinating paper that looks to redefine PCA as a competitive multi-agent game called EigenGame.

Titled “EigenGame: PCA as a Nash Equilibrium”, the DeepMind work is one of those papers you can't resist reading based on the title alone. Redefining PCA sounds ludicrous, and yet DeepMind's thesis makes perfect sense the minute you dive into it.

In recent years, PCA techniques have hit a bottleneck in large-scale deep learning scenarios. Originally designed for mechanical devices, traditional PCA is formulated as an optimization problem that is hard to scale across large computational clusters. A multi-agent approach to PCA might be able to leverage vast computational resources and produce better optimizations for modern deep learning problems.
https://medium.com/@jrodthoughts/deepmind-wants-to-reimagine-one-of-the-most-important-algorithms-in-machine-learning-381884d42de
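For context, the classic PCA that EigenGame reformulates can be sketched in a few lines: the top principal component is the leading eigenvector of the data covariance, found here by power iteration (my own toy code, not DeepMind's EigenGame update):

Code:
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10))
X -= X.mean(axis=0)          # center the data
C = X.T @ X / len(X)         # covariance matrix

v = rng.normal(size=10)
for _ in range(100):         # power iteration converges to the top eigenvector
    v = C @ v
    v /= np.linalg.norm(v)
print("top principal component:", v)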
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 17/05/2021 12:55:16
Quote
The advent of Transformers in 2017 completely changed the world of neural networks. Ever since, the core concept of Transformers has been remixed, repackaged, and rebundled in several models. The results have surpassed the state of the art in several machine learning benchmarks. In fact, currently all top benchmarks in the field of natural language processing are dominated by Transformer-based models. Some of the Transformer-family models are BERT, ALBERT, and the GPT series of models.

In any machine learning model, the most important components of the training process are:
The code of the model — the components of the model and its configuration
The data to be used for training
The available compute power
With the Transformer family of models, researchers finally arrived at a way to increase the performance of a model infinitely: You just increase the amount of training data and compute power.

This is exactly what OpenAI did, first with GPT-2 and then with GPT-3. Being a well funded ($1 billion+) company, it could afford to train some of the biggest models in the world. A private corpus of 500 billion tokens was used for training the model, and approximately $50 million was spent in compute costs.

While the code for most of the GPT language models is open source, the model is impossible to replicate without the massive amounts of data and compute power. And OpenAI has chosen to withhold public access to its trained models, making them available via API to only a select few companies and individuals. Further, its access policy is undocumented, arbitrary, and opaque.

https://venturebeat.com/2021/05/15/gpt-3s-free-alternative-gpt-neo-is-something-to-be-excited-about/
Quote
The bottom line here is: GPT-Neo is a great open source alternative to GPT-3, especially given OpenAI’s closed access policy.
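For the record, trying GPT-Neo locally is straightforward; a minimal sketch using the Hugging Face Transformers pipeline (assuming the transformers library is installed and the published EleutherAI weights can be downloaded):

Code:
from transformers import pipeline

generator = pipeline("text-generation", model="EleutherAI/gpt-neo-1.3B")
result = generator("How close are we to building a virtual universe?",
                   max_length=60, do_sample=True)
print(result[0]["generated_text"])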
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 19/05/2021 12:03:57
https://bdtechtalks.com/2021/05/17/ibms-codenet-machine-learning-programming/
IBM’s Project CodeNet will test how far you can push AI to write software
Quote
IBM’s AI research division has released a 14-million-sample dataset to develop machine learning models that can help in programming tasks. Called Project CodeNet, the dataset takes its name from ImageNet, the famous repository of labeled photos that triggered a revolution in computer vision and deep learning.

While there’s a scant chance that machine learning models built on the CodeNet dataset will make human programmers redundant, there’s reason to be hopeful that they will make developers more productive.
Quote
With Project CodeNet, the researchers at IBM have tried to create a multi-purpose dataset that can be used to train machine learning models for various tasks. CodeNet’s creators describe it as a “very large scale, diverse, and high-quality dataset to accelerate the algorithmic advances in AI for Code.”

The dataset contains 14 million code samples with 500 million lines of code written in 55 different programming languages. The code samples have been obtained from submissions to nearly 4,000 challenges posted on online coding platforms AIZU and AtCoder. The code samples include both correct and incorrect answers to the challenges.

One of the key features of CodeNet is the amount of annotation that has been added to the examples. Every one of the coding challenges included in the dataset has a textual description along with CPU time and memory limits. Every code submission has a dozen pieces of information, including the language, the date of submission, size, execution time, acceptance, and error types.

The researchers at IBM have also gone through great effort to make sure the dataset is balanced along different dimensions, including programming language, acceptance, and error types.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 21/05/2021 07:27:49
https://bdtechtalks.com/2021/05/13/machine-learning-dimensionality-reduction/
Machine learning: What is dimensionality reduction?
Quote
Machine learning algorithms have gained fame for being able to ferret out relevant information from datasets with many features, such as tables with dozens of columns and images with millions of pixels. Thanks to advances in cloud computing, you can often run very large machine learning models without noticing how much computational power works behind the scenes.

But every new feature that you add to your problem adds to its complexity, making it harder to solve it with machine learning algorithms. Data scientists use dimensionality reduction, a set of techniques that remove excessive and irrelevant features from their machine learning models.

Dimensionality reduction slashes the costs of machine learning and sometimes makes it possible to solve complicated problems with simpler models.

Measures of general intelligence and general consciousness are examples of dimensionality reduction, used to reduce multiple parameters into a single number.
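Dimensionality reduction is easy to demonstrate. Here is a minimal sketch of principal component analysis, projecting toy five-feature data onto its single most informative direction, i.e. reducing multiple parameters to one number per sample; the data and shapes are just assumptions for illustration.
Code:
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))                  # 100 samples, 5 features (toy data)
X -= X.mean(axis=0)                            # center each feature
U, S, Vt = np.linalg.svd(X, full_matrices=False)
scores = X @ Vt[0]                             # first principal component
print(scores.shape)                            # (100,): one number per sample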
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 28/05/2021 04:37:15
DeepMind’s Demis Hassabis on its breakthrough scientific discoveries | WIRED Live
Quote
Deepmind, Co-founder and CEO, Demis Hassabis discusses how we can avoid bias being built into AI systems and what's next for DeepMind, including the future of protein folding, at WIRED Live 2020.

"If we build it right, AI systems could be less biased than we are."
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 28/05/2021 04:42:53
https://www.newscientist.com/article/2268496-people-can-answer-questions-about-their-dreams-without-waking-up
Quote
Talking to people while they are asleep can influence their dreams – and in some cases, the dreamer can respond without waking up.

Ken Paller at Northwestern University in Evanston, Illinois, and his colleagues found that people could answer questions and even solve maths problems while lucid dreaming – a state that typically occurs during rapid eye-movement (REM) sleep when the dreamer is aware of being in a dream, and is sometimes able to control it.

“We asked questions where we knew the answer because what we wanted to do is determine whether we were having good communication. We had to know if they were answering correctly,” says Paller.

The team asked dreamers yes-no questions relating to their backgrounds and experiences, along with simple maths problems involving addition and subtraction. The dreamers weren’t aware of what questions they would be asked before they went to sleep.

The dreamers, who had a range of experience with lucid dreaming, answered the questions correctly 29 times, incorrectly five times, and ambiguously 28 times by twitching their face muscles or moving their eyes. They didn’t respond on 96 occasions.

Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 28/05/2021 04:45:15
Inside Google’s DeepMind Project: How AI Is Learning On Its Own
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 28/05/2021 04:53:56
The AI Hardware Problem
Quote
The millennia-old idea of expressing signals and data as a series of discrete states ignited a revolution in the semiconductor industry during the second half of the 20th century. This new information age thrived on the robust and rapidly evolving field of digital electronics. The abundance of automation and tooling made it relatively manageable to scale designs in complexity and performance as demand grew. However, the power consumed by AI and machine learning applications cannot feasibly keep growing as it has on existing processing architectures.

THE MAC
In a digital neural network implementation, the weights and input data are stored in system memory and must be fetched and stored continuously through the sea of multiply-accumulate operations within the network. This approach results in most of the power being dissipated in fetching and storing model parameters and input data to the arithmetic logic unit of the CPU, where the actual multiply-accumulate operation takes place. For a typical multiply-accumulate operation within a general-purpose CPU, this data movement consumes more than two orders of magnitude more energy than the computation itself.

GPUs
Their ability to process 3D graphics requires a large number of arithmetic logic units coupled to high-speed memory interfaces. This characteristic inherently made them far more efficient and faster for machine learning, by allowing hundreds of multiply-accumulate operations to proceed simultaneously. GPUs tend to utilize floating-point arithmetic, using 32 bits to represent a number by its mantissa, exponent, and sign. Because of this, GPU-targeted machine learning applications have been forced to use floating-point numbers.

ASICS
These dedicated AI chips offer dramatically larger amounts of data movement per joule when compared to GPUs and general-purpose CPUs. This came as a result of the discovery that, for certain types of neural networks, a dramatic reduction in computational precision only reduces network accuracy by a small amount. However, it will soon become infeasible to increase the number of multiply-accumulate units integrated onto a chip, or to reduce bit-precision further.

LOW POWER AI

Outside the realm of the digital world, it's known definitively that extraordinarily dense neural networks can operate efficiently with small amounts of power.

Much of the industry believes that the digital aspect of current systems will need to be augmented with a more analog approach in order to take machine learning efficiency further. With analog, computation does not occur in clocked stages of moving data, but rather exploits the inherent properties of a signal and how it interacts with a circuit, combining memory, logic, and computation into a single entity that can operate efficiently in a massively parallel manner. Some companies are beginning to examine returning to the long-outdated technology of analog computing to tackle the challenge. Analog computing attempts to manipulate small electrical currents via common analog circuit building blocks to do math.

These signals can be mixed and compared, replicating the behavior of their digital counterparts. However, while large-scale analog computing has been explored for decades for various potential applications, it has never been successfully executed as a commercial solution. Currently, the most promising approach to the problem is to integrate programmable analog computing elements into large arrays that are similar in principle to digital memory. By configuring the cells in an array, an analog signal, synthesized by a digital-to-analog converter, is fed through the network.

As this signal flows through a network of pre-programmed resistors, the currents are added to produce a resultant analog signal, which can be converted back to a digital value via an analog-to-digital converter. Using an analog system for machine learning does, however, introduce several issues. Analog systems are inherently limited in precision by the noise floor. Though, much like using lower bit-width digital systems, this becomes less of an issue for certain types of networks.

If analog circuitry is used for inferencing, the result may not be deterministic and is more likely to be affected by heat, noise, or other external factors than a digital system. Another problem with analog machine learning is explainability. Unlike digital systems, analog systems offer no easy method to probe or debug the flow of information within them. Some in the industry propose that a solution may lie in the use of low-precision, high-speed analog processors for most situations, while funneling results that require higher confidence to lower-speed, high-precision, easily interrogated digital systems.
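To make the quoted passage concrete, here is a minimal sketch of the multiply-accumulate (MAC) operation it describes: the inner loop that neural networks repeat billions of times. The toy values are assumptions; the point is that the arithmetic itself is trivial, so the energy cost is dominated by fetching the weights and inputs from memory.
Code:
weights = [0.2, -0.5, 0.1, 0.8]    # assumed toy values
inputs = [1.0, 2.0, 3.0, 4.0]
acc = 0.0
for w, x in zip(weights, inputs):
    acc += w * x                   # one multiply-accumulate per pair
print(acc)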
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 30/05/2021 07:23:41
Microsoft's ZeRO-Infinity Library Trains 32 Trillion Parameter AI Model
https://www.infoq.com/news/2021/05/microsoft-zero-infinity/
Quote
Microsoft recently announced ZeRO-Infinity, an addition to their open-source DeepSpeed AI training library that optimizes memory use for training very large deep-learning models. Using ZeRO-Infinity, Microsoft trained a model with 32 trillion parameters on a cluster of 32 GPUs, and demonstrated fine-tuning of a 1 trillion parameter model on a single GPU.

The DeepSpeed team described the new features in a recent blog post. ZeRO-Infinity is the latest iteration of the Zero Redundancy Optimizer (ZeRO) family of memory optimization techniques. ZeRO-Infinity introduces several new strategies for addressing memory and bandwidth constraints when training large deep-learning models, including: a new offload engine for exploiting CPU and Non-Volatile Memory express (NVMe) memory, memory-centric tiling to handle large operators without model-parallelism, bandwidth-centric partitioning for reducing bandwidth costs, and an overlap-centric design for scheduling data communication. According to the DeepSpeed team:
Quote
The improved ZeRO-Infinity offers the system capability to go beyond the GPU memory wall and train models with tens of trillions of parameters, an order of magnitude bigger than state-of-the-art systems can support. It also offers a promising path toward training 100-trillion-parameter models.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 30/05/2021 11:58:59
https://www.cnbc.com/2021/05/27/europeans-want-to-replace-lawmakers-with-ai.html
More than half of Europeans want to replace lawmakers with AI, study says

Quote
Researchers at IE University’s Center for the Governance of Change asked 2,769 people from 11 countries worldwide how they would feel about reducing the number of national parliamentarians in their country and giving those seats to an AI that would have access to their data.

The results, published Thursday, showed that despite AI’s clear and obvious limitations, 51% of Europeans said they were in favor of such a move.

Outside Europe, some 75% of people surveyed in China supported the idea of replacing parliamentarians with AI, while 60% of American respondents opposed it.

LONDON — A study has found that most Europeans would like to see some of their members of parliament replaced by algorithms.

IMO, politicians are more likely to sacrifice the best interests of their constituents to pursue their own, while an AI's decisions would depend on the terminal goal assigned to it and the data fed into it. That makes alignment with the universal terminal goal a critical step in building an AI with such huge power and responsibility.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 30/05/2021 22:26:52
https://syncedreview.com/2021/05/14/deepmind-podracer-tpu-based-rl-frameworks-deliver-exceptional-performance-at-low-cost-19

Google Replaces BERT Self-Attention with Fourier Transform: 92% Accuracy, 7 Times Faster on GPUs

Quote
Transformer architectures have come to dominate the natural language processing (NLP) field since their 2017 introduction. One of the only limitations to transformer application is the huge computational overhead of its key component — a self-attention mechanism that scales with quadratic complexity with regard to sequence length.

New research from a Google team proposes replacing the self-attention sublayers with simple linear transformations that “mix” input tokens to significantly speed up the transformer encoder with limited accuracy cost. Even more surprisingly, the team discovers that replacing the self-attention sublayer with a standard, unparameterized Fourier Transform achieves 92 percent of the accuracy of BERT on the GLUE benchmark, with training times that are seven times faster on GPUs and twice as fast on TPUs.

Transformers’ self-attention mechanism enables inputs to be represented with higher-order units to flexibly capture diverse syntactic and semantic relationships in natural language. Researchers have long regarded the associated high complexity and memory footprint as an unavoidable trade-off on transformers’ impressive performance. But in the paper FNet: Mixing Tokens with Fourier Transforms, the Google team challenges this thinking with FNet, a novel model that strikes an excellent balance between speed, memory footprint and accuracy.
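The core trick of FNet is simple enough to sketch in a few lines: replace the self-attention sublayer with the real part of a 2D Fourier transform over the sequence and hidden dimensions, with no learned parameters. The shapes below are assumptions for illustration.
Code:
import numpy as np

x = np.random.randn(16, 64)        # (sequence length, hidden dimension), assumed
mixed = np.fft.fft2(x).real        # FNet-style token mixing, no parameters
print(mixed.shape)                 # (16, 64): same shape as the input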
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 01/06/2021 12:21:21

https://bdtechtalks.com/2021/05/27/artificial-intelligence-neurons-assemblies/

A simple model of the brain provides new directions for AI research
Quote
Last week, Google Research held an online workshop on the conceptual understanding of deep learning. The workshop, which featured presentations by award-winning computer scientists and neuroscientists, discussed how new findings in deep learning and neuroscience can help create better artificial intelligence systems.
Quote
The cognitive and neuroscience communities are trying to make sense of how neural activity in the brain translates to language, mathematics, logic, reasoning, planning, and other functions. If scientists succeed at formulating the workings of the brain in terms of mathematical models, then they will open a new door to creating artificial intelligence systems that can emulate the human mind.

A lot of studies focus on activities at the level of single neurons. Until a few decades ago, scientists thought that single neurons corresponded to single thoughts. The most popular example is the “grandmother cell” theory, which claims there’s a single neuron in the brain that spikes every time you see your grandmother. More recent discoveries have refuted this claim and have proven that large groups of neurons are associated with each concept, and there might be overlaps between neurons that link to different concepts.

These groups of brain cells are called “assemblies,” which Papadimitriou describes as “a highly connected, stable set of neurons which represent something: a word, an idea, an object, etc.”

Award-winning neuroscientist György Buzsáki describes assemblies as “the alphabet of the brain.”
(https://i2.wp.com/bdtechtalks.com/wp-content/uploads/2021/05/brain-assemblies.jpg)
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 02/06/2021 23:26:03
I bring the discussion here from my main thread to explore the details further.
Trial and error would be much cheaper, hence more efficient, if we could do it in a virtual environment, like a computer simulation, provided we can get it to be adequately accurate and precise in representing objective reality.

An adequately accurate and precise virtual representation of objective reality is what we commonly call knowledge. It's a form of data compression.
At the most fundamental level, knowledge consists of two types of data: nodes and edges. They are the data points and the relationships among them, respectively.

In information theory, one bit of information reduces the uncertainty by half. To eliminate uncertainty entirely, we need infinite bits of information.
In practice, we may think that we can make a statement precisely, without leaving any uncertainty, using finite bits of information. For example, x-x=0, and x/x=1, with seemingly zero uncertainty.
On the other hand, writing the ratio between the circumference and diameter of a circle accurately as a decimal number requires infinite digits. What makes the difference here?
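Here is a minimal sketch of "one bit halves the uncertainty": locating π by repeated bisection of an interval. Each yes/no comparison yields one bit of information and halves the remaining interval, and no finite number of bits ever pins the value down exactly.
Code:
import math

low, high = 3.0, 4.0
for _ in range(20):
    mid = (low + high) / 2
    if math.pi < mid:              # one yes/no answer = one bit
        high = mid
    else:
        low = mid
print(low, high)                   # after 20 bits the interval has width 2**-20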
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 03/06/2021 02:49:39
https://en.wikipedia.org/wiki/Single_source_of_truth
Quote
In information systems design and theory, single source of truth (SSOT) is the practice of structuring information models and associated data schema such that every data element is mastered (or edited) in only one place. Any possible linkages to this data element (possibly in other areas of the relational schema or even in distant federated databases) are by reference only. Because all other locations of the data just refer back to the primary "source of truth" location, updates to the data element in the primary location propagate to the entire system without the possibility of a duplicate value somewhere being forgotten.

Deployment of an SSOT architecture is becoming increasingly important in enterprise settings where incorrectly linked duplicate or de-normalized data elements (a direct consequence of intentional or unintentional denormalization of any explicit data model) pose a risk for retrieval of outdated, and therefore incorrect, information. A common example would be the electronic health record, where it is imperative to accurately validate patient identity against a single referential repository, which serves as the SSOT. Duplicate representations of data within the enterprise would be implemented by the use of pointers rather than duplicate database tables, rows, or cells. This ensures that data updates to elements in the authoritative location are comprehensively distributed to all federated database constituencies in the larger overall enterprise architecture.
SSOT systems provide data that are authentic, relevant, and referable.

https://www.talend.com/resources/single-source-truth/
Quote
What is a single source of truth (SSOT)?
Single source of truth (SSOT) is a concept used to ensure that everyone in an organization bases business decisions on the same data. Creating a single source of truth is straightforward. To put an SSOT in place, an organization must provide relevant personnel with one source that stores the data points they need.

Data-driven decision making has placed never-before-seen levels of importance on collecting and analyzing data. While acting on data-derived business intelligence is essential for competitive brands today, companies often spend far too much time debating which numbers, invariably from different sources, are the right numbers to use. Metrics from social platforms may paint one picture of a company’s target demographics while vendor feedback or online questionnaires may say something entirely different. How are corporate leaders to decide whose data points to use in such a scenario?

Establishing a single source of truth eliminates this issue. Instead of debating which of many competing data sources should be used for making company decisions, everyone can use the same, unified source for all their data needs. It provides data that can be used by anyone, in any way, across the entire organization.
Currently, efforts to establish a single source of truth are becoming common in business organizations, as well as governments. But they are still limited to internal usage, and seemingly independent from each other, although they share the same objective reality. When there are discrepancies, we would feel as if there were alternative truths.
A common example I often see is road closures by the government which are not accurately represented in Google Maps.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 03/06/2021 11:58:31
In practice, we may think that we can make a statement precisely, without leaving any uncertainty, using finite bits of information. For example, x-x=0, and x/x=1, with seemingly zero uncertainty.
On the other hand, writing the ratio between the circumference and diameter of a circle accurately as a decimal number requires infinite digits. What makes the difference here?
Here is another example. We can say that the smallest prime number is 2, without leaving any uncertainty.
The square root of -1 is i.
The speed of light through vacuum is 299792458 metres per second.
We can also say that the ratio between the circumference and diameter of a circle is π, with no uncertainty.
If someone says that a value equals e, we need more information as context: whether it refers to Euler's number, the charge of the electron, or something else.
At this point it should be clear that any new information must be related to preexisting common knowledge for it to be meaningful.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 03/06/2021 16:34:24
Calculating pi efficiently.
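One classical way to do it is Machin's formula, π = 16·arctan(1/5) − 4·arctan(1/239), which converges quickly and can be evaluated with Python's arbitrary-precision integers. A minimal fixed-point sketch:
Code:
def arctan_inv(x, one):
    # arctan(1/x) = 1/x - 1/(3x^3) + 1/(5x^5) - ..., scaled by `one`
    power = one // x
    total = power
    x2 = x * x
    n, sign = 1, -1
    while power:
        power //= x2
        total += sign * (power // (2 * n + 1))
        n += 1
        sign = -sign
    return total

def pi_digits(digits):
    one = 10 ** (digits + 10)                        # ten guard digits
    pi = 4 * (4 * arctan_inv(5, one) - arctan_inv(239, one))
    return pi // 10 ** 10                            # drop the guard digits

print(pi_digits(50))    # 314159265358979323846... (pi scaled by 10**50)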
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 04/06/2021 06:29:45
Quote
Sporting 1.75 trillion parameters, Wu Dao 2.0 is roughly ten times the size of Open AI's GPT-3.
https://www.engadget.com/chinas-gigantic-multi-modal-ai-is-no-one-trick-pony-211414388.html

Quote
When Open AI's GPT-3 model made its debut in May of 2020, its performance was widely considered to be the literal state of the art. Capable of generating text indiscernible from human-crafted prose, GPT-3 set a new standard in deep learning. But oh what a difference a year makes. Researchers from the Beijing Academy of Artificial Intelligence announced on Tuesday the release of their own generative deep learning model, Wu Dao, a mammoth AI seemingly capable of doing everything GPT-3 can do, and more.

First off, Wu Dao is flat out enormous. It's been trained on 1.75 trillion parameters (essentially, the model's self-selected coefficients) which is a full ten times larger than the 175 billion GPT-3 was trained on and 150 billion parameters larger than Google's Switch Transformers.

With all that computing power comes a whole bunch of capabilities. Unlike most deep learning models which perform a single task — write copy, generate deep fakes, recognize faces, win at Go — Wu Dao is multi-modal, similar in theory to Facebook's anti-hatespeech AI or Google's recently released MUM. BAAI researchers demonstrated Wu Dao's abilities to perform natural language processing, text generation, image recognition, and image generation tasks during the lab's annual conference on Tuesday. The model can not only write essays, poems and couplets in traditional Chinese, it can both generate alt text based off of a static image and generate nearly photorealistic images based on natural language descriptions. Wu Dao also showed off its ability to power virtual idols (with a little help from Microsoft-spinoff XiaoIce) and predict the 3D structures of proteins like AlphaFold.

“The way to artificial general intelligence is big models and big computer,” Dr. Zhang Hongjiang, chairman of BAAI, said during the conference Tuesday. “What we are building is a power plant for the future of AI, with mega data, mega computing power, and mega models, we can transform data to fuel the AI applications of the future.”
The article shows how close we are to building AGI.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 04/06/2021 10:12:53
At this point it should be clear that any new information must be related to preexisting common knowledge for it to be meaningful.
Here is an example from our daily life. If I tell someone face to face that I found his ID card and I'm keeping it in the pocket of the shirt I'm wearing, he can quickly find it. But if I say the same over the phone, it won't be clear to him until he knows my location. The location can be stated as the name of the building, the address, or geographic coordinates as latitude and longitude. If I'm inside a tall building, the vertical position, such as floor number or altitude, is also necessary.
If I spoke to an alien in another solar system, I would also need to give the position of planet Earth and the Sun. If the alien were from another galaxy, then I would need to give the position of the Milky Way too.
If I tell you that X=2Y, you get no new information until you can relate it to your preexisting knowledge.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 06/06/2021 00:05:12
A common example I often see is road closures by the government which are not accurately represented in Google Maps.
Yesterday I went to a wedding party. The invitation contained a QR code showing the location, which could be traced in Google Maps. Due to a traffic jam, it recommended taking an alternative route. I didn't expect that it would take us across a flooded road.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 06/06/2021 00:17:37
Here is another picture from the front.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 06/06/2021 07:13:29
Interpretable deep learning uncovers cellular properties in label-free live cell images that are predictive of highly metastatic melanoma.
Quote
Summary
Deep learning has emerged as the technique of choice for identifying hidden patterns in cell imaging data but is often criticized as “black box.” Here, we employ a generative neural network in combination with supervised machine learning to classify patient-derived melanoma xenografts as “efficient” or “inefficient” metastatic, validate predictions regarding melanoma cell lines with unknown metastatic efficiency in mouse xenografts, and use the network to generate in silico cell images that amplify the critical predictive cell properties. These exaggerated images unveiled pseudopodial extensions and increased light scattering as hallmark properties of metastatic cells. We validated this interpretation using live cells spontaneously transitioning between states indicative of low and high metastatic efficiency. This study illustrates how the application of artificial intelligence can support the identification of cellular properties that are predictive of complex phenotypes and integrated cell functions but are too subtle to be identified in the raw imagery by a human expert. A record of this paper’s transparent peer review process is included in the supplemental information.
https://www.sciencedirect.com/science/article/pii/S2405471221001587
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 06/06/2021 07:16:02
https://scitechdaily.com/whos-to-die-and-whos-to-live-mechanical-cue-is-at-the-origin-of-cell-death-decision/

Quote
In past studies, researchers have found that C. elegans gonads generate more germ cells than needed and that only half of them grow to become oocytes, while the rest shrinks and die by physiological apoptosis, a programmed cell death that occurs in multicellular organisms. Now, scientists from the Biotechnology Center of the TU Dresden (BIOTEC), the Max Planck Institute of Molecular Cell Biology and Genetics (MPI-CBG), the Cluster of Excellence Physics of Life (PoL) at the TU Dresden, the Max Planck Institute for the Physics of Complex Systems (MPI-PKS), the Flatiron Institute, NY, and the University of California, Berkeley, found evidence to answer the question of what triggers this cell fate decision between life and death in the germline.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 08/06/2021 12:24:47
At this point it should be clear that any new information  must be related to preexisting common knowledge for it to be meaningful.
Here is another example. Someone gives us a message, 11001010.
There are many ways to interpret this. It could be a decimal number, or a number in another base such as hexadecimal or binary. Even in binary, we can treat it as signed or unsigned. Some of the bits could be a start bit, stop bit, or parity bit.
It could be treated as binary coded decimal.
It could also be a Morse code.
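The ambiguity is easy to demonstrate: the same eight characters give completely different values under different interpretations.
Code:
message = "11001010"

print(int(message))                  # as a decimal numeral: 11001010
print(int(message, 2))               # as unsigned binary: 202
print(int(message, 16))              # as hexadecimal digits: 285216784
u = int(message, 2)
print(u - 256 if u >= 128 else u)    # as 8-bit two's complement: -54
# as binary-coded decimal it is invalid: the nibbles 1100 (12) and
# 1010 (10) are both outside the BCD digit range 0-9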
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 08/06/2021 14:21:42
Due to a traffic jam, it recommended taking an alternative route.
A common way to reduce traffic jams is by applying an odd-even rule. On odd dates, only vehicles with odd plate numbers are allowed to pass, and vice versa. Assuming that plate numbers are generally assigned consecutively, the least significant bit suddenly becomes the most important bit.
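A minimal sketch of that rule: the parity of the last digit, which in binary is exactly the least significant bit, decides whether a vehicle may pass on a given date. The function name and values are just assumptions for illustration.
Code:
def allowed(plate_number: int, day_of_month: int) -> bool:
    # parity of the plate's last digit must match the date's parity
    return plate_number % 2 == day_of_month % 2

print(allowed(1234, 17))   # False: even plate on an odd date
print(allowed(1235, 17))   # True: odd plate on an odd date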
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 09/06/2021 17:10:28
In information theory, one bit of information reduces the uncertainty by half. To eliminate uncertainty entirely, we need infinite bits of information.
The number of bits specifies the quantity of information. Its conformity with objective reality as the ground truth specifies the quality of the information. These concepts are similar to precision and accuracy, respectively.
Previously, I created a thread specifically discussing accuracy and precision from a practical perspective. I tried to quantify data quality and quantity to be used in a database system that virtualizes plant operations to make them more manageable. I wanted to use the most general forms possible so they can be used flexibly for a wide range of applications. Perhaps my approach was considered so unconventional that it had to be put in the New Theories section.
In measurement problems, our results are compared to a unit of measurement and expressed as a number. The value may be accompanied by a tolerance or quantification of uncertainty, due to the measurement methods or some unpredictable external factors. We may be so familiar with the concept of numbers, especially decimal, from an early age that we often take them for granted.
Title: Re: How close are we from building a virtual universe?
Post by: Zer0 on 09/06/2021 21:17:38

Hello Yusuf!
🙏

I am quite interested & enthusiastic about this particular Subject.
👍
But surely Not as much as You are.

Just wanted to say, this OP is quite a Good Read for anyone who's interested on the Topic.
👌


P.S. - Rather than googling for similar articles, I'd just visit in here n read it back to back.
👍
You ' Quote ' information, also provide Official Links for further details & post Images too.
😇
Very Nice & Good Work!
✌️
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 10/06/2021 00:19:11
Hi Zer0. Thank you for your kind words. I really appreciate it. It gives me positive feedback that I am going in the right direction.
I also appreciate negative feedback to let me know if I've made mistakes or misunderstood some concepts. It could help me avoid further mistakes.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 10/06/2021 11:09:03
Autonomous flying vehicles in 'smart cities' - NASA working on infrastructure


Quote
Data and Reasoning Fabric (DRF) could one day "assemble and provide useful information to autonomous vehicles in real time." The information system is being developed by NASA.

Credit: NASA
Here is the latest development of a shared virtual universe among autonomous vehicles. It's a step closer toward the unified virtual universe that is the idea behind this thread, although its usage is still limited to autonomous vehicles. The next step would be integration between this system and other established virtualization systems, such as those of governments and corporations.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 10/06/2021 22:40:39
https://venturebeat.com/2021/06/09/deepmind-says-reinforcement-learning-is-enough-to-reach-general-ai/

Quote
In their decades-long chase to create artificial intelligence, computer scientists have designed and developed all kinds of complicated mechanisms and technologies to replicate vision, language, reasoning, motor skills, and other abilities associated with intelligent life. While these efforts have resulted in AI systems that can efficiently solve specific problems in limited environments, they fall short of developing the kind of general intelligence seen in humans and animals.

In a new paper submitted to the peer-reviewed Artificial Intelligence journal, scientists at U.K.-based AI lab DeepMind argue that intelligence and its associated abilities will emerge not from formulating and solving complicated problems but by sticking to a simple but powerful principle: reward maximization.

Titled “Reward is Enough,” the paper, which is still in pre-proof as of this writing, draws inspiration from studying the evolution of natural intelligence as well as drawing lessons from recent achievements in artificial intelligence. The authors suggest that reward maximization and trial-and-error experience are enough to develop behavior that exhibits the kind of abilities associated with intelligence. And from this, they conclude that reinforcement learning, a branch of AI that is based on reward maximization, can lead to the development of artificial general intelligence.

...

In the race to develop AI, besides hardware capacity, the results depend on the choice of reward function. It's like choosing instrumental goals that are aligned with the terminal goals. The natural long-term reward is survival. Nature also provides short-term reward functions through pleasure and pain.
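The principle the paper names, learning behavior purely from reward, is easy to sketch. Below is a minimal tabular Q-learning example on an assumed toy five-state corridor where only the last state gives reward; useful behavior (always move right) emerges from reward maximization alone. The hyperparameters are assumptions for illustration.
Code:
import random

n_states, n_actions = 5, 2                  # actions: 0 = left, 1 = right
Q = [[0.0, 0.0] for _ in range(n_states)]
alpha, gamma, epsilon = 0.1, 0.9, 0.3       # assumed hyperparameters

for episode in range(500):
    s = 0
    while s != n_states - 1:                # reward only at the last state
        if random.random() < epsilon:
            a = random.randrange(n_actions)
        else:
            a = max(range(n_actions), key=lambda j: Q[s][j])
        s2 = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
        r = 1.0 if s2 == n_states - 1 else 0.0
        Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
        s = s2

print(Q)    # the learned values come to favor action 1 (right) in every state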
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 12/06/2021 08:08:45
"Why Is Quantum Computing So Hard to Explain | Quanta Magazine" https://www.quantamagazine.org/why-is-quantum-computing-so-hard-to-explain-20210608/
Quote
Quantum computers, you might have heard, are magical uber-machines that will soon cure cancer and global warming by trying all possible answers in different parallel universes. For 15 years, on my blog and elsewhere, I’ve railed against this cartoonish vision, trying to explain what I see as the subtler but ironically even more fascinating truth. I approach this as a public service and almost my moral duty as a quantum computing researcher. Alas, the work feels Sisyphean: The cringeworthy hype about quantum computers has only increased over the years, as corporations and governments have invested billions, and as the technology has progressed to programmable 50-qubit devices that (on certain contrived benchmarks) really can give the world’s biggest supercomputers a run for their money. And just as in cryptocurrency, machine learning and other trendy fields, with money have come hucksters.

In reflective moments, though, I get it. The reality is that even if you removed all the bad incentives and the greed, quantum computing would still be hard to explain briefly and honestly without math. As the quantum computing pioneer Richard Feynman once said about the quantum electrodynamics work that won him the Nobel Prize, if it were possible to describe it in a few sentences, it wouldn’t have been worth a Nobel Prize.

Not that that’s stopped people from trying. Ever since Peter Shor discovered in 1994 that a quantum computer could break most of the encryption that protects transactions on the internet, excitement about the technology has been driven by more than just intellectual curiosity. Indeed, developments in the field typically get covered as business or technology stories rather than as science ones.
Quote
Once someone understands these concepts, I’d say they’re ready to start reading — or possibly even writing — an article on the latest claimed advance in quantum computing. They’ll know which questions to ask in the constant struggle to distinguish reality from hype. Understanding this stuff really is possible — after all, it isn’t rocket science; it’s just quantum computing!
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 14/06/2021 07:17:19
There's Plenty Moore Room: IBM's New 2nm CPU
Quote
People talk about the death of semiconductor scaling. IBM is laughing in your face - there's plenty of room, and plenty of density, and they've developed a proof of concept to showcase where the technology can go. Here's a look at IBM's new 2nm silicon.

Intro

0:00 The Future in 2024
0:26 What Nanometers Really Mean
3:05 Transistor Density
4:02 IBM on 2nm
5:38 Comparing against current nodes
7:00 What's on the chip
7:40 Gate-All-Around Nanosheets
8:45 Albany, NY
9:16 Performance of 2nm
9:42 Coming to Market and Pathfinding
11:06 EUV and Future of EUV (Jim Keller)
14:12 Minimum Specification: Bite a Wafer
14:39 Cat Tax
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 14/06/2021 16:18:10
We may be so familiar with the concept of numbers, especially decimal, from an early age that we often take them for granted.



The smallest base for a positional number system is 2. That's why most computers are binary systems. For human-machine interfaces such as programming languages, some extensions of binary code are often useful, such as octal, hexadecimal, or BCD.
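The same value written in several bases, using Python's built-in format():
Code:
n = 202
print(format(n, 'b'))   # '11001010' binary
print(format(n, 'o'))   # '312'      octal
print(format(n, 'x'))   # 'ca'       hexadecimal
print(format(n, 'd'))   # '202'      decimal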
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 14/06/2021 22:06:06
Quote
If you use such social media websites as Facebook and Twitter, you may have come across posts flagged with warnings about misinformation. So far, most misinformation – flagged and unflagged – has been aimed at the general public. Imagine the possibility of misinformation – information that is false or misleading – in scientific and technical fields like cybersecurity, public safety and medicine.

"Cybersecurity experts face a new challenge: AI capable of tricking them" https://www.inputmag.com/culture/cybersecurity-experts-face-a-new-challenge-ai-capable-of-tricking-them/amp

Quote
General misinformation often aims to tarnish the reputation of companies or public figures. Misinformation within communities of expertise has the potential for scary outcomes such as delivering incorrect medical advice to doctors and patients. This could put lives at risk.

To test this threat, we studied the impacts of spreading misinformation in the cybersecurity and medical communities. We used artificial intelligence models dubbed transformers to generate false cybersecurity news and COVID-19 medical studies and presented the cybersecurity misinformation to cybersecurity experts for testing. We found that transformer-generated misinformation was able to fool cybersecurity experts.
This result emphasizes the urgency of reliable sources of information that accurately and precisely represent objective reality as the ground truth.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 14/06/2021 23:26:49
This result emphasizes the urgency of reliable sources of information that accurately and precisely represent objective reality as the ground truth.
This brings us back to the question about accuracy and precision of our information sources. Here are definitions of precision by the dictionary.
Quote
the quality, condition, or fact of being exact and accurate.
"the deal was planned and executed with military precision"

TECHNICAL
refinement in a measurement, calculation, or specification, especially as represented by the number of digits given.
"a precision of six decimal figures"
And here are the definitions of accuracy.
Quote
the quality or state of being correct or precise.
"we have confidence in the accuracy of the statistics"

TECHNICAL
the degree to which the result of a measurement, calculation, or specification conforms to the correct value or a standard.
"the accuracy of radiocarbon dating"

We can see here that in the general definition, the meanings of precision and accuracy are mixed, while the technical definition is restricted to numeric notation, especially decimal numbers. We can quickly realize that those definitions can't cover all usages of the words.

In the technical sense, non-numeric information can't be described. For example: Alice is going to Japan. It would be more precise to say that she's going to Tokyo, and even more precise if the district or even the complete address were given. But if it turns out that she's going to Kyoto instead of Tokyo, then the information about the destination city is not accurate, although still more precise than just the destination country.

Expressing the same numeric value in different number bases would give us different precision.

In general usage, it should be possible to express the precision of information independently from its accuracy. There is accurate but imprecise information; on the other hand, there is also precise but inaccurate information.

This video tries to distinguish them.

Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 15/06/2021 08:59:01
Voluntarist Epistemology

This video also contains an example of balancing between accuracy and precision, especially from 26:40 to 31:26

Quote
According to Bas van Fraassen's voluntarist epistemology, the only constraint on rational belief is consistency. Beyond this, our beliefs must be guided not by rules of reason, but by the passions: emotions, values, and intuitions. This video examines the grounds for voluntarism in the failure of traditional epistemology, and in the need for an epistemology that can properly accommodate conceptual revolutions. Then I turn to the objections to voluntarism.

Outline of voluntarism:
0:00 - Introduction
4:02 - Why consistency?
8:13 - Failure of traditional epistemology
18:37 - Voluntarism against skepticism
31:26 - Conceptual revolution and objectifying epistemology
Objections to voluntarism:
48:38 - Arbitrariness
53:00 - Too permissive?
1:01:34 - Too conservative?
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 16/06/2021 12:46:36
Expressing the same numeric value in different number bases would give us different precision.
Since binary is the smallest base, it would be the preferred way to express precision. So, the precision of a piece of information depends on how many bits it contains.
In some programming languages, we can define a floating-point variable using a single- or double-precision data type. So my assertion that the precision of a piece of information represents its data quantity is not an entirely new concept, although many forum members here didn't seem to agree.

https://en.wikipedia.org/wiki/Single-precision_floating-point_format
(https://upload.wikimedia.org/wikipedia/commons/thumb/d/d2/Float_example.svg/590px-Float_example.svg.png)
(https://wikimedia.org/api/rest_v1/media/math/render/svg/5858d28deea4237a7c1320f7e649fb104aecb0e5)
(https://wikimedia.org/api/rest_v1/media/math/render/svg/908c155d6002beadf2df5a7c05e954ec2373ca16)

https://en.wikipedia.org/wiki/Double-precision_floating-point_format
(https://upload.wikimedia.org/wikipedia/commons/thumb/a/a9/IEEE_754_Double_Floating_Point_Format.svg/618px-IEEE_754_Double_Floating_Point_Format.svg.png)
(https://wikimedia.org/api/rest_v1/media/math/render/svg/61345d47f069d645947b9c0ab676c75551f1b188)
(https://wikimedia.org/api/rest_v1/media/math/render/svg/5f677b27f52fcd521355049a560d53b5c01800e1)
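Round-tripping π through a single-precision encoding shows how much precision the 23-bit fraction keeps compared with a double; a minimal sketch:
Code:
import struct, math

pi32 = struct.unpack('f', struct.pack('f', math.pi))[0]
print(pi32)       # 3.1415927410125732 (about 7 correct decimal digits)
print(math.pi)    # 3.141592653589793  (float64: 15-16 correct digits)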
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 17/06/2021 05:17:42
The Longest DNA in the Animal Kingdom Found - Not What I Expected

DNA is the largest information storage method provided by nature. Studying how it works is highly important.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 17/06/2021 09:43:31
Expressing the same numeric value in different number bases would give us different precision.
Since binary is the smallest base, it would be the preferred way to express precision. So, the precision of a piece of information depends on how many bits it contains.
In some programming languages, we can define a floating-point variable using a single- or double-precision data type. So my assertion that the precision of a piece of information represents its data quantity is not an entirely new concept, although many forum members here didn't seem to agree.

https://en.wikipedia.org/wiki/Single-precision_floating-point_format
(https://upload.wikimedia.org/wikipedia/commons/thumb/d/d2/Float_example.svg/590px-Float_example.svg.png)
(https://wikimedia.org/api/rest_v1/media/math/render/svg/5858d28deea4237a7c1320f7e649fb104aecb0e5)
(https://wikimedia.org/api/rest_v1/media/math/render/svg/908c155d6002beadf2df5a7c05e954ec2373ca16)

https://en.wikipedia.org/wiki/Double-precision_floating-point_format
(https://upload.wikimedia.org/wikipedia/commons/thumb/a/a9/IEEE_754_Double_Floating_Point_Format.svg/618px-IEEE_754_Double_Floating_Point_Format.svg.png)
(https://wikimedia.org/api/rest_v1/media/math/render/svg/61345d47f069d645947b9c0ab676c75551f1b188)
(https://wikimedia.org/api/rest_v1/media/math/render/svg/5f677b27f52fcd521355049a560d53b5c01800e1)
It's clear that bits in different positions in the floating-point representation have different significance in determining the numeric value of the data. The significance of a bit can be defined as the difference in the data value caused by flipping it between 0 and 1. In general, bits are sorted from highest to lowest significance (left to right in writing), except for the sign bit, whose significance depends on the value determined by the other bits: if that value is small, the sign bit has low significance; if it is big, the sign bit has high significance.
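This can be demonstrated directly by flipping individual bits of the IEEE 754 double encoding of π; a minimal sketch:
Code:
import struct

def flip_bit(x, i):
    # flip bit i (0 = least significant) of the double encoding of x
    (bits,) = struct.unpack('<Q', struct.pack('<d', x))
    return struct.unpack('<d', struct.pack('<Q', bits ^ (1 << i)))[0]

pi = 3.141592653589793
print(flip_bit(pi, 0) - pi)    # lowest fraction bit: about 4.4e-16
print(flip_bit(pi, 51) - pi)   # highest fraction bit: -1.0
print(flip_bit(pi, 52))        # lowest exponent bit: the value doubles
print(flip_bit(pi, 63))        # sign bit: -3.141592653589793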
 
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 17/06/2021 21:55:45
In real-life experience, we often get and use numerical information with even lower precision than what's expressed by single-precision floating point. In many applications, it's enough to write π as 3.14.
In floating-point representation, 3 decimal digits can be written using 10 bits of the fraction part. The rest of the bits are rounded to 0; we don't care about their actual value.
By defining precision as quantity of information, we can use it for numeric as well as non-numeric data.
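The 10-bit figure follows from log2(10) ≈ 3.32 bits per decimal digit:
Code:
import math

print(3 * math.log2(10))    # ≈ 9.97, so 10 fraction bits cover "3.14"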
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 17/06/2021 23:25:53
As I mentioned earlier, the actual/practical precision of a piece of information also depends on the assumptions assigned to it. For example, if I say that your car key is in Waldo's pocket, you would be able to find it quickly, as long as you can find Waldo first. In this case, my explicit statement only contains a few bits of information, but it can become highly precise when combined with correct assumptions not expressed in the statement, like which Waldo I'm talking about.
Another example: if I say that the value of x equals 2π, modern people would recognize it with very high precision, because the symbols carry almost unambiguous meaning in the modern world. It would have been different in ancient times.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 18/06/2021 05:37:56
The next problem is the accuracy of the information. Let's start with a non-numeric case, such as finding Waldo in a picture.
(https://fiverr-res.cloudinary.com/images/q_auto,f_auto/gigs/140639081/original/7e7a04151cd0f368c6d56e4fd7abf5d02897b4e4/find-wally-or-waldo-for-you.jpg)

Saying that Waldo is in the picture is accurate, but not precise.
Saying that Waldo is at the bottom right corner of the picture is more precise, but not accurate.
Saying that Waldo is around the center of the picture, not far from the red tent, is both accurate and precise.

The first and third statements are accurate because they include the true value of Waldo's position.
The second statement becomes inaccurate because it excludes the true value of Waldo's position.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 18/06/2021 05:43:49
The Trillion-Transistor Chip That Just Left a Supercomputer in the Dust
Quote
The Cerebras Wafer-Scale Engine is 8.5 inches wide and contains 1.2 trillion transistors. The next biggest chip, the NVIDIA A100 GPU, measures about an inch across and has only 54 billion transistors. The WSE has made its way into a handful of supercomputing labs, including the National Energy Technology Laboratory. Researchers pitted the chip against a supercomputer in a fluid dynamics simulation and found it to be faster than the supercomputer. The team said that the chip completed a combustion simulation in a power plant approximately 200 times faster.

Joule is the 81st fastest supercomputer in the world, with a price tag of $1.2 billion. The WSE is bigger than the average supercomputer chip, and it's all about design. A traditional cluster is like an old-fashioned company doing all its business on paper, using couriers to send and collect documents from other branches and archives across the city; with the WSE, the whole process takes place within a single silicon wafer. The CS-1 is the system built around the world's largest chip.

Cerebras has developed a chip that can handle problems small enough to fit on a wafer. The megachip is far more efficient than a traditional supercomputer that needs a ton of traditional chips to be networked. The next-generation chip will have 2.6 trillion transistors, 850,000 cores, and more than double the memory. It still remains to be seen whether wafer-scale computing really does take off, but Cerebras is the first to seriously pursue it.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 18/06/2021 06:16:37
The next problem is the accuracy of the information. Let's start with a non-numeric case, such as finding Waldo in a picture.
Unlike precision, which can be determined without knowing the true value of the information, accuracy cannot.
Saying that π is more than 0 is accurate, because it doesn't contain false information. But saying that it's less than 3.141 is not accurate, because it contains false information.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 19/06/2021 10:11:48
Precision of a piece of information should be considered as the amount of uncertainty that it can remove. The number of bits alone is not adequate.
Here is an example.
  • 2.99999999... ≤ π ≤ 3.9999999...
  • 3 ≤ π ≤ 4
The many bits in the first statement don't remove any more uncertainty than the fewer bits in the second statement. So we can't say that the first statement has higher precision than the second, although it contains many more bits.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 21/06/2021 11:02:48
The World’s Most Powerful Supercomputer Is Almost Here

Quote
The next generation of computing is on the horizon, and several new machines may just smash all the records...with two nations neck and neck in a race to get there first.

The ENIAC was capable of about 400 FLOPS. FLOPS stands for floating-point operations per second, which basically tells us how many calculations the computer can do per second. This makes measuring FLOPS a way of calculating computing power.

So, the ENIAC was sitting at 400 FLOPS in 1945, and in the ten years it was operational, it may have performed more calculations than all of humanity had up until that point in time—that was the kind of leap digital computing gave us. From that 400 FLOPS we upgraded to 10,000 FLOPS, and then a million, a billion, a trillion, a quadrillion FLOPS. That’s petascale computing, and that’s the level of today’s most powerful supercomputers.

But what’s coming next is exascale computing. That’s a 1 followed by 18 zeroes: 1 quintillion operations per second. Exascale computers will perform a thousand times better than the petascale machines we have now. Or, to put it another way, if you wanted to do the same number of calculations that an exascale computer can do in ONE second... you’d be doing math for over 31 billion years.
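The article's arithmetic checks out:
Code:
seconds_per_year = 60 * 60 * 24 * 365.25
print(1e18 / seconds_per_year / 1e9)    # ≈ 31.7 billion years at one calculation per second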
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 22/06/2021 03:16:37
The virtual universe is useless unless it can be translated into actions in objective reality. 3D printing improves the interface between those two universes.


Quote
Three-dimensional printing promises new opportunities for more sustainable and local production. But does 3D printing make everything better? This film shows how innovation can change the world of goods.

Is the way we make things about to become the next revolution? Traditional manufacturing techniques like milling, casting and gluing could soon be replaced by 3D printing -saving enormous amounts of material and energy. Aircraft maker Airbus is already benefiting from the new manufacturing method. Beginning this year, the A350 airliner will fly with printed door locking shafts. Where previously ten parts had to be installed, today that’s down to just one. It saves a lot of manufacturing steps. And 3D printing can imitate nature's efficient construction processes, something barely possible in conventional manufacturing. Another benefit of the new technology is that components can become significantly lighter and more robust, and material can be saved during production. But the Airbus development team is not yet satisfied. The printed cabin partition in the A350 has become 45 percent lighter thanks to the new structure, but it is complex and expensive to manufacture. It takes 900 hours to print just one partition, a problem that print manufacturers have not yet been able to solve. The technology is already being used in Adidas shoes: The sportswear company says it is currently the world’s largest manufacturer of 3D-printed components. The next step is sustainable materials, such as biological synthetic resins that do not use petroleum and can be liquefied again without loss of quality and are therefore completely recyclable. This documentary sheds light on the diverse uses of 3D printing.
Title: Re: How close are we from building a virtual universe?
Post by: Just thinking on 22/06/2021 06:02:32
About as far away as when we first started. As one diode said to the other: we have been together for so long, and I still don't know you.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 24/06/2021 10:50:05
Precision of a piece of information should be considered as the amount of uncertainty that it can remove. The number of bits alone is not adequate.
Here is an example.
  • 2.99999999... ≤ π ≤ 3.9999999...
  • 3 ≤ π ≤ 4
The many bits in the first statement don't remove any more uncertainty than the fewer bits in the second statement. So we can't say that the first statement has higher precision than the second, although it contains many more bits.
It looks like the equals sign implicitly puts two limits at once: the low and high limits of the value. When we say that two values are identical, or exactly the same by definition, we can use the ≡ symbol. But if they are approximately equal, we use the ≈ symbol, which means we acknowledge that there are cases where the difference can't be neglected.

The usage of the = symbol then leaves some ambiguity: the values involved in an equation are not necessarily identical, but the difference between them must be negligible in almost all cases.
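Treating a statement as the pair of limits it implies makes both properties computable: accuracy is whether the true value lies inside the interval, and precision is how narrow the interval is. A minimal sketch (the class and names are just assumptions for illustration):
Code:
import math

class Bounds:
    def __init__(self, low, high):
        self.low, self.high = low, high
    def accurate(self, truth):
        return self.low <= truth <= self.high   # true value inside the limits
    def width(self):
        return self.high - self.low             # smaller width = more precise

coarse = Bounds(3.0, 4.0)       # "3 <= pi <= 4"
wrong = Bounds(3.0, 3.141)      # "pi is between 3 and 3.141"
print(coarse.accurate(math.pi), coarse.width())   # True 1.0: accurate, imprecise
print(wrong.accurate(math.pi), wrong.width())     # False 0.141: precise, inaccurate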
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 26/06/2021 10:14:10
https://towardsdatascience.com/cant-access-gpt-3-here-s-gpt-j-its-open-source-cousin-8af86a638b11
Similar to GPT-3, and everyone can use it.

Quote

ARTIFICIAL INTELLIGENCE
Can’t Access GPT-3? Here’s GPT-J — Its Open-Source Cousin
Similar to GPT-3, and everyone can use it.

The AI world was thrilled when OpenAI released the beta API for GPT-3. It gave developers the chance to play with the amazing system and look for new exciting use cases. Yet, OpenAI decided not to open (pun intended) the API to everyone, but only to a selected group of people through a waitlist. If they were worried about the misuse and harmful outcomes, they’d have done the same as with GPT-2: not releasing it to the public at all.
It’s surprising that a company that claims its mission is “to ensure that artificial general intelligence benefits all of humanity” wouldn’t allow people to thoroughly investigate the system. That’s why we should appreciate the work of people like the team behind EleutherAI, a “collective of researchers working to open source AI research.” Because GPT-3 is so popular, they’ve been trying to replicate the versions of the model for everyone to use, aiming at building a system comparable to GPT-3-175B, the AI king. In this article, I’ll talk about EleutherAI and GPT-J, the open-source cousin of GPT-3. Enjoy!
Quote
GPT-J is 30 times smaller than GPT-3-175B. Despite the large difference, GPT-J produces better code, just because it was slightly more optimized to do the task. This implies that optimization towards improving specific abilities could give rise to systems that are way better than GPT-3. And this isn’t limited to coding: we could create for every task, a system that would top GPT-3 with ease. GPT-3 would become a jack of all trades, whereas the specialized systems would be the true masters.
This hypothesis goes in line with the results OpenAI researchers Irene Solaiman and Christy Dennison got from PALMS. They fine-tuned GPT-3 with a small curated dataset to prevent the system from producing biased outputs and got amazing results. In a way, it was an optimization; they specialized GPT-3 to be unbiased — as understood by ethical institutions in the U.S. It seems that GPT-3 isn’t only very powerful, but that a notable amount of power is still latent within, waiting to be exploited by specialization.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 29/06/2021 13:11:13
GPT-J is 30 times smaller than GPT-3-175B. Despite the large difference, GPT-J produces better code, just because it was slightly more optimized to do the task. This implies that optimization towards improving specific abilities could give rise to systems that are way better than GPT-3.
It looks like the way to general intelligence is to combine several neural networks trained separately for specific tasks. A dedicated network would be needed to determine which specialist is suitable for the problem at hand.
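Here is a minimal sketch of that routing idea, in the spirit of a mixture-of-experts gate (not any particular published system). The specialists, the router weights, and the input features are all stand-ins:

Code:
import numpy as np

# Hypothetical specialists: plain functions standing in for separately
# trained networks (a code model, a chat model, a math model, ...).
specialists = {
    "code": lambda x: f"code specialist handles {x!r}",
    "chat": lambda x: f"chat specialist handles {x!r}",
    "math": lambda x: f"math specialist handles {x!r}",
}

rng = np.random.default_rng(0)
W = rng.normal(size=(len(specialists), 3))  # stand-in for trained router weights

def route(features: np.ndarray) -> str:
    """Gating network: linear scores -> softmax -> pick one specialist."""
    scores = W @ features
    probs = np.exp(scores - scores.max())
    probs /= probs.sum()
    return list(specialists)[int(np.argmax(probs))]

x = np.array([1.0, 0.0, 0.5])          # toy input features
print(specialists[route(x)]("the problem at hand"))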
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 30/06/2021 21:44:21
What he's trying to build is basically similar to a virtual universe. Note that this video was uploaded 7 years ago.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 01/07/2021 11:22:31
https://www.theverge.com/2021/6/29/22555777/github-openai-ai-tool-autocomplete-code

Quote
GitHub and OpenAI have launched a technical preview of a new AI tool called Copilot, which lives inside the Visual Studio Code editor and autocompletes code snippets.

Copilot does more than just parrot back code it’s seen before, according to GitHub. It instead analyzes the code you’ve already written and generates new matching code, including specific functions that were previously called. Examples on the project’s website include automatically writing the code to import tweets, draw a scatterplot, or grab a Goodreads rating.

Quote
GitHub sees this as an evolution of pair programming, where two coders will work on the same project to catch each others’ mistakes and speed up the development process. With Copilot, one of those coders is virtual.

Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 02/07/2021 03:45:23
https://jrodthoughts.medium.com/objects-that-sound-deepminds-research-show-how-to-combine-vision-and-audio-in-a-single-model-c4051ea21495
Quote

Since we are babies, we intuitively develop the ability to correlate the input from different cognitive sensors such as vision, audio and text. While listening to a symphony we immediately visualize an orchestra, or when admiring a landscape painting, our brain associates the visual with specific sounds. The relationships between images, sounds and texts are dictated by connections between different sections of the brain responsible for analyzing specific cognitive input. In that sense, you can say that we are hardwired to learn simultaneously from multiple cognitive signals. Despite the advancements in different deep learning areas such as image, language and sound analysis, most neural networks remain specialized on a single input data type. A few years ago, researchers from Alphabet’s subsidiary DeepMind published a research paper proposing a method that can simultaneously analyze audio and visual inputs and learn the relationships between objects and sounds in a common environment.
(https://miro.medium.com/max/2100/1*hFzT9BNIL6FopN9tkch29w.png)
Title: Re: How close are we from building a virtual universe?
Post by: Just thinking on 04/07/2021 10:25:20
Does PlayStation count?
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 04/07/2021 13:00:11
Does PlayStation count?
It does help improve the technology and accumulate financial resources for that. Although their main purpose may not be directly correlated.
Title: Re: How close are we from building a virtual universe?
Post by: Just thinking on 04/07/2021 13:03:48
Although their main purpose may not be directly correlated.
What about Xbox?
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 04/07/2021 13:12:26
Although their main purpose may not be directly correlated.
What about Xbox?
Same story.
Title: Re: How close are we from building a virtual universe?
Post by: Just thinking on 04/07/2021 14:09:06
Although their main purpose may not be directly correlated.
I just thought of something. What if we are already in a virtual universe? Then we will have to try and build a real universe.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 04/07/2021 15:12:21
Although their main purpose may not be directly correlated.
I just thought of something. What if we are already in a virtual universe? Then we will have to try and build a real universe.
As long as we have no reliable way to prove otherwise, it's better for us to assume that we're living in reality. Descartes' Cogito tells us that our own consciousness is the only self-evident proof of our existence.
Title: Re: How close are we from building a virtual universe?
Post by: Just thinking on 04/07/2021 15:42:30
As long as we have no reliable way to prove otherwise, it's better for us to assume that we're living in reality. Descartes' Cogito tells us that our own consciousness is the only self-evident proof of our existence.
I think it is safe to assume that our consciousness is merely a circuit board plugged into the motherboard, programmed to make some decisions inside the virtual reality life that we only virtually think we have. I could be wrong, but if I am, then that would be a fault in the electronics of the virtual reality machine, e.g. when I get a headache, this can be due to computer overload.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 05/07/2021 05:40:06
I think it is safe to assume that our consciousness is merely a circuit board plugged into the motherboard, programmed to make some decisions inside the virtual reality life that we only virtually think we have. I could be wrong, but if I am, then that would be a fault in the electronics of the virtual reality machine, e.g. when I get a headache, this can be due to computer overload.
I don't think that you are safe thinking that way. Imagine you are a bit drunk in your bed, staring out of your window. You see an asteroid flying right in your direction. You're not sure if it's real, or you're just dreaming, or you're just living in a simulation. There's apparently not enough time to determine which one is true.

The best bet is to assume that it's real, and you should get out as fast as you can. Even if you're wrong, the result would be less detrimental than assuming otherwise.
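This "best bet" argument is really just an expected-cost comparison. Here is a toy version; every number in it is invented purely to show the asymmetry of the outcomes:

Code:
# Toy expected-cost version of the asteroid argument. The numbers are
# invented; only their asymmetry matters.
p_real = 0.5                      # you genuinely can't tell, so 50:50

cost = {
    ("flee", "real"): 1,          # some effort, but you survive
    ("flee", "simulated"): 1,     # wasted effort
    ("stay", "real"): 1_000_000,  # catastrophic
    ("stay", "simulated"): 0,     # nothing happens
}

def expected_cost(action: str) -> float:
    return (p_real * cost[(action, "real")]
            + (1 - p_real) * cost[(action, "simulated")])

for action in ("flee", "stay"):
    print(action, expected_cost(action))
# Fleeing wins for any p_real above 1e-6 here, so "assume it's real"
# is the safer bet under almost any belief about the simulation.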
Title: Re: How close are we from building a virtual universe?
Post by: Just thinking on 05/07/2021 06:08:08
I don't think that you are safe thinking that way.
I think that if an asteroid were to collide with the Earth, that would be proof of a very evil computer programmer in our virtual universe. This would be like the devil in a real universe.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 05/07/2021 07:46:29
I don't think that you are safe thinking that way.
I think that if an asteroid were to collide with the Earth, that would be proof of a very evil computer programmer in our virtual universe. This would be like the devil in a real universe.
In my previous example I was thinking about a small asteroid capable of destroying a house.
A virtual universe, or even a nested virtual universe, must eventually be built upon a real universe. It's impossible for a virtual universe to exist when no real universe is there.
Whatever is done in a virtual universe can't be said to be evil or good until it has some effect in the real universe.
Title: Re: How close are we from building a virtual universe?
Post by: Just thinking on 05/07/2021 08:08:21
Whatever is done in a virtual universe can't be said to be evil or good until it has some effect in the real universe.
So what you're saying is that an incoming asteroid can leave the virtual universe and collide with the real universe, or at least with a house in the real universe. This is the 'some effect' that you say may happen. This would be a very dangerous computer simulator; we had better warn the pilots that are using flight simulators, as it could turn out to be a real crash as they train in their simulators.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 05/07/2021 11:15:44
Whatever is done in a virtual universe can't be said to be evil or good until it has some effect in the real universe.
So what you're saying is that an incoming asteroid can leave the virtual universe and collide with the real universe, or at least with a house in the real universe. This is the 'some effect' that you say may happen. This would be a very dangerous computer simulator; we had better warn the pilots that are using flight simulators, as it could turn out to be a real crash as they train in their simulators.
If your flight simulator contains bugs that make trainee pilots react differently from what they should do in real life, then those bugs in the virtual universe are indeed dangerous.
Title: Re: How close are we from building a virtual universe?
Post by: Just thinking on 05/07/2021 11:32:26
If your flight simulator contains bugs that make trainee pilots react differently from what they should do in real life, then those bugs in the virtual universe are indeed dangerous.
I see what you're saying, but the flight simulator could be dangerous, as the captain could spill hot coffee on his lap or even worse. He will learn not to do that in the real universe.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 05/07/2021 12:22:48
You can kill thousands of people in GTA or Total War without being evil in real life.
Title: Re: How close are we from building a virtual universe?
Post by: Just thinking on 05/07/2021 12:37:34
You can kill thousands of people in GTA or Total War without being evil in real life.
I don't like violent games; they incite violence in the real universe. But I do get your point, thank you.
Title: Re: How close are we from building a virtual universe?
Post by: Just thinking on 05/07/2021 12:48:24
The level of detail can vary, depending on the significance of the object. In Google Earth, big cities might be zoomed to less than 1 meter per pixel, while deserts or oceans have much coarser detail.
We need better detail in the virtual world, let's say 20 megapixels for each and every atom.
Title: Re: How close are we from building a virtual universe?
Post by: Just thinking on 05/07/2021 13:30:05
Is it possible to build a virtual universe?
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 05/07/2021 14:56:36
The level of detail can vary, depending on the significance of the object. In Google Earth, big cities might be zoomed to less than 1 meter per pixel, while deserts or oceans have much coarser detail.
We need better detail in the virtual world, let's say 20 megapixels for each and every atom.
Any scalable virtual universe must be built from vectors or tensors instead of pixels, especially when it's multidimensional.
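A minimal sketch of why a vector (parametric) representation scales where a pixel raster doesn't: the same stored object can be re-rendered at any level of detail. The circle and the sample counts are just illustrative:

Code:
import numpy as np

# One circle stored in vector (parametric) form: a center and a radius.
center, radius = np.array([0.0, 0.0]), 1.0

def sample_circle(n_points: int) -> np.ndarray:
    """Re-render the same stored circle at any desired level of detail."""
    t = np.linspace(0.0, 2.0 * np.pi, n_points, endpoint=False)
    return np.stack([center[0] + radius * np.cos(t),
                     center[1] + radius * np.sin(t)], axis=1)

coarse = sample_circle(8)        # cheap preview
fine = sample_circle(100_000)    # deep zoom, same stored data, no loss
print(coarse.shape, fine.shape)  # (8, 2) (100000, 2)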
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 05/07/2021 15:05:06
Is it possible to build a virtual universe?
We know there are some efforts already in progress towards that direction. But they are all still partial and mostly independent from one another.
Title: Re: How close are we from building a virtual universe?
Post by: Just thinking on 05/07/2021 15:29:03
We know there are some efforts already in progress towards that direction. But they are all still partial and mostly independent from one another.
I hope it's not too expensive to jump in once they get it up and running. They used to charge 20 cents for a go on Space Invaders at the arcade centre.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 06/07/2021 19:47:02
We know there are some efforts already in progress towards that direction. But they are all still partial and mostly independent from one another.
I hope it's not too expensive to jump in once they get it up and running. They used to charge 20 cents for a go on Space Invaders at the arcade centre.
What I meant was not a whole-world simulation like in the movie The Matrix. These efforts are more mundane and narrowly purposed, such as Google Earth, climate simulation, AlphaFold, Tesla's Dojo and vertical integration, Microsoft Flight Simulator, SAP ERP, the Chinese government's surveillance system, Estonia's digital governance, financial/banking systems, cryptocurrency, virtual machines to manage workstations, etc. They try to represent some aspects of objective reality for easier access, to extract, aggregate, and manage information, and to help with the decision-making process.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 06/07/2021 19:49:47
"Exclusive Q&A: Neuralink’s Quest to Beat the Speed of Type - IEEE Spectrum" https://spectrum.ieee.org/tech-talk/biomedical/bionics/exclusive-neuralinks-goal-of-bestinworld-bmi
Quote
Elon Musk’s brain tech company, Neuralink, is subject to rampant speculation and misunderstanding. Just start a Google search with the phrase “can Neuralink...” and you’ll see the questions that are commonly asked, which include “can Neuralink cure depression?” and “can Neuralink control you?” Musk hasn’t helped ground the company’s reputation in reality with his public statements, including his claim that the Neuralink device will one day enable “AI symbiosis” in which human brains will merge with artificial intelligence.

It’s all somewhat absurd, because the Neuralink brain implant is still an experimental device that hasn’t yet gotten approval for even the most basic clinical safety trial.

But behind the showmanship and hyperbole, the fact remains that Neuralink is staffed by serious scientists and engineers doing interesting research. The fully implantable brain-machine interface (BMI) they’ve been developing is advancing the field with its super-thin neural “threads” that can snake through brain tissue to pick up signals and its custom chips and electronics that can process data from more than 1000 electrodes.
Quote
IEEE Spectrum: Elon Musk often talks about the far-future possibilities of Neuralink; a future in which everyday people could get voluntary brain surgery and have Links implanted to augment their capabilities. But whom is the product for in the near term?

Joseph O’Doherty: We’re working on a communication prosthesis that would give back keyboard and mouse control to individuals with paralysis. We’re pushing towards an able-bodied typing rate, which is obviously a tall order. But that’s the goal.

We have a very capable device and we’re aware of the various algorithmic techniques that have been used by others. So we can apply best practices engineering to tighten up all the aspects. What it takes to make the BMI is a good recording device, but also real attention to detail in the decoder, because it’s a closed-loop system. You need to have attention to that closed-loop aspect of it for it to be really high performance.

We have an internal goal of trying to beat the world record in terms of information rate from the BMI. We’re extremely close to exceeding what, as far as we know, is the best performance. And then there’s an open question: How much further beyond that can we go?

My team and I are trying to meet that goal and beat the world record. We’ll either nail down what we can, or, if we can’t, figure out why not, and how to make the device better.
Title: Re: How close are we from building a virtual universe?
Post by: Just thinking on 07/07/2021 00:10:09
Thank you, my friend, that is very interesting information. I think medical science and IT are making great progress; we will have to see what the future holds.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 07/07/2021 05:20:27
https://venturebeat.com/2021/07/05/the-future-of-deep-learning-according-to-its-pioneers/
Quote
In their paper, Yoshua Bengio, Geoffrey Hinton, and Yann LeCun, recipients of the 2018 Turing Award, explain the current challenges of deep learning and how it differs from learning in humans and animals. They also explore recent advances in the field that might provide blueprints for the future directions for research in deep learning.

Titled “Deep Learning for AI,” the paper envisions a future in which deep learning models can learn with little or no help from humans, are flexible to changes in their environment, and can solve a wide range of reflexive and cognitive problems.

Quote
In their paper, Bengio, Hinton, and LeCun acknowledge these shortcomings. “Supervised learning, while successful in a wide variety of tasks, typically requires a large amount of human-labeled data. Similarly, when reinforcement learning is based only on rewards, it requires a very large number of interactions,” they write.
Title: Re: How close are we from building a virtual universe?
Post by: Just thinking on 07/07/2021 14:21:48
If a virtual universe is ever up and running, how will people be able to interact with this technology? Will it be through an electrically operated head-worn attachment and eyewear that allow us to navigate and communicate throughout the virtual universe?
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 08/07/2021 10:51:06

Moore's Law is dead, right? Not if we can get working photonic computers.

Lightmatter is building a photonic computer for the biggest growth area in computing right now, and according to CEO Nick Harris, it can be ordered now and will ship at the end of this year. It's already much faster than traditional electronic computers at neural nets, machine learning for language processing, and AI for self-driving cars.

It's the world's first general-purpose photonic AI accelerator, and with light multiplexing (using up to 64 different colors of light simultaneously) there's a long path of speed improvements ahead.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 08/07/2021 10:59:41
If a virtual universe is ever up and running, how will people be able to interact with this technology? Will it be through an electrically operated head-worn attachment and eyewear that allow us to navigate and communicate throughout the virtual universe?
At first the interface would likely be similar to currently existing human-machine interfaces, such as monitor, camera, keyboard, mouse, touchscreen, speaker, microphone, VR and AR. But eventually, as direct brain interfaces get better and more reliable, those devices will slowly be replaced, because their speed limitation will become a communication bottleneck.
Title: Re: How close are we from building a virtual universe?
Post by: Just thinking on 08/07/2021 11:11:13
At first the interface would likely be similar to currently existing human-machine interfaces,
Thank you for the info, hamdani. Looks like good things are on the way. We will be like kids again.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 09/07/2021 05:56:58
At first the interface would likely be similar to currently existing human-machine interfaces,
Thank you for the info, hamdani. Looks like good things are on the way. We will be like kids again.
Some parts of the virtual universe would be intended to represent objective reality as it is, as accurately and precisely as possible. The other parts would try to simulate, as much as possible, the consequences of our decisions, to try to achieve the best-case and avoid the worst-case scenario. It's similar to the mind of chess players, who memorize the current position while figuring out their possible next moves and their opponents' replies.
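The chess analogy is essentially game-tree search. Here is a minimal minimax sketch over a toy game tree; the tree shape and the leaf scores are invented for illustration:

Code:
# Minimal minimax over a toy game tree. The "memorized position" is the
# current node; the simulated futures are its subtrees.
def minimax(node, maximizing: bool):
    if isinstance(node, (int, float)):   # leaf: an evaluated outcome
        return node
    results = [minimax(child, not maximizing) for child in node]
    return max(results) if maximizing else min(results)

# Each list is a choice point; numbers are outcome evaluations.
tree = [[3, 5], [2, [9, 1]]]
print(minimax(tree, maximizing=True))    # 3: best move given best replies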
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 09/07/2021 09:44:01
Quote
Scientists have made great progress to decode thoughts with artificial intelligence. In this video I summarize the most exciting recent developments.

The first paper about inferring the meaning of nouns is:

Mitchell et al.
"Predicting Human Brain Activity Associated with the Meanings of Nouns"
Science 320, 1191-1195 (2008)
https://science.sciencemag.org/content/320/5880/1191

The paper about extracting speech from brain readings is:

Anumanchipalli, Chartier, & Chang
"Speech synthesis from neural decoding of spoken sentences"
Nature 568, 493–498 (2019)
https://www.nature.com/articles/s41586-019-1119-1

There are more examples of the reconstructed sentences here:

https://www.ucsf.edu/news/2019/04/414296/synthetic-speech-generated-brain-recordings

The paper about extracting images from brain readings is:
Shen et al.
PLoS Comput Biol. 15(1): e1006633 (2019)
https://journals.plos.org/ploscompbiol/article?id=10.1371%2Fjournal.pcbi.1006633

And the brain to text paper using handwriting is:

Willett et al.
High-performance brain-to-text communication via handwriting
Nature 593, 249–254 (2021)
https://www.nature.com/articles/s41586-021-03506-2

0:00 Intro
0:33 How to measure brain activity
2:44 Brain to Word
5:42 Brain to Image
6:30 Brain to Speech
7:25 Brain to Text
8:29 Better ways to measure brain activity
10:20 Sponsor Message
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 12/07/2021 06:47:42
And this video shows how our model of reality can affect our decisions, with consequences that we will face in the future.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 13/07/2021 06:51:03

Is artificial intelligence replacing lawyers and judges? Throwback to Ronny Chieng’s report on how robots are taking over the legal system.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 14/07/2021 10:03:12
Quote
The government is introducing what it terms the 'social credit score scheme' in Hangzhou, China. The system will monitor everything from traffic offenses to how people handle their parents. It is currently being piloted in the eastern provincial capital of Hangzhou but has not yet been implemented. The government uses blacklists to limit people's actions or to refuse them such programs. The structure could create all sorts of rifts between neighbors, employers, and even mates.

Social feedback results would come in part from 'residential committees' responsible for tracking and documenting people's behavior. Social credit ratings were already rolled out in 2020 and now due to events of the recent year have only accelerated its widespread adoption. It remains to be seen if the fear of a low score would be enough to alter people's actions outside of limiting travel, regardless of government databases.
For this scenario to be successful and sustainable, the government as well as the people need to understand the universal terminal goal.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 15/07/2021 07:27:43
https://tech.fb.com/bci-milestone-new-research-from-ucsf-with-support-from-facebook-shows-the-potential-of-brain-computer-interfaces-for-restoring-speech-communication/

Quote
TL;DR: Today, we’re excited to celebrate milestone new results published by our UCSF research collaborators in The New England Journal of Medicine, demonstrating the first time someone with severe speech loss has been able to type out what they wanted to say almost instantly, simply by attempting to speak. In other words, UCSF has restored a person’s ability to communicate by decoding brain signals sent from the motor cortex to the vocal tract. This study marks an important milestone for the field of neuroscience, and it concludes Facebook’s years-long collaboration with UCSF’s Chang Lab.

These groundbreaking results show what’s possible — both in clinical settings like Chang Lab, and potentially for non-invasive consumer applications such as the optical BCI we’ve been exploring over the past four years.

To continue fostering optical BCI explorations across the field, we want to take this opportunity to open source our BCI software and share our head-mounted hardware prototypes to key researchers and other peers to help advance this important work. In the meantime, Facebook Reality Labs will focus on applying BCI concepts to our electromyography (EMG) research to dramatically accelerate wrist-based neural interfaces for intuitive AR/VR input.

The room was full of UCSF scientists and equipment — monitors and cables everywhere. But his eyes were fixed on a single screen displaying two simple words: “Good morning!”

Though unable to speak, he attempted to respond, and the word “Hello” appeared.

The screen went black, replaced by another conversational prompt: “How are you today?”

This time, he attempted to say, “I am very good,” and once again, the words appeared on the screen.

A simple conversation, yet it amounted to a significant milestone in the field of neuroscience. More importantly, it was the first time in over 16 years that he’d been able to communicate without having to use a cumbersome head-mounted apparatus to type out what he wanted to say, after experiencing near full paralysis of his limbs and vocal tract following a series of strokes. Now he simply had to attempt speaking, and a computer could share those words in real time — no typing required.
Title: Re: How close are we from building a virtual universe?
Post by: Just thinking on 15/07/2021 12:29:22
This time, he attempted to say, “I am very good,” and once again, the words appeared on the screen.
That is amazing: outright mind reading in action.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 16/07/2021 01:21:35
That is amazing: outright mind reading in action.
When the technology is refined, it could revolutionize our communication. All the conversation in this thread would be finished in just a few seconds.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 18/07/2021 01:28:39
https://scitechdaily.com/the-virus-trap-hollow-nano-objects-made-of-dna-could-trap-viruses-and-render-them-harmless/
Quote
To date, there are no effective antidotes against most virus infections. An interdisciplinary research team at the Technical University of Munich (TUM) has now developed a new approach: they engulf and neutralize viruses with nano-capsules tailored from genetic material using the DNA origami method. The strategy has already been tested against hepatitis and adeno-associated viruses in cell cultures. It may also prove successful against coronaviruses.

There are antibiotics against dangerous bacteria, but few antidotes to treat acute viral infections. Some infections can be prevented by vaccination but developing new vaccines is a long and laborious process.

Now an interdisciplinary research team from the Technical University of Munich, the Helmholtz Zentrum München, and the Brandeis University (USA) is proposing a novel strategy for the treatment of acute viral infections: The team has developed nanostructures made of DNA, the substance that makes up our genetic material, that can trap viruses and render them harmless.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 19/07/2021 07:59:16
Why “probability of 0” does not mean “impossible” | Probabilities of probabilities, part 2

Quote
Curious about measure theory?  This does require some background in real analysis, but if you want to dig in, here is a textbook by the always great Terence Tao.
https://terrytao.files.wordpress.com/...

Also, for the real analysis buffs among you, there was one statement I made in this video that is a rather nice puzzle.  Namely, if the probabilities for each value in a given range (of the real number line) are all non-zero, no matter how small, their sum will be infinite.  This isn't immediately obvious, given that you can have convergent sums of countable infinitely many values, but if you're up for it see if you can prove that the sum of any uncountable infinite collection of positive values must blow up to infinity.
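For what it's worth, the puzzle has a standard pigeonhole solution. Here is a sketch in LaTeX notation (my own reconstruction, not taken from the video):

Code:
% Claim: any uncountable collection S of strictly positive reals has an
% infinite sum (every bound is exceeded by some finite subsum).
% Sketch: write S as a countable union of the sets A_n below.
% If every A_n were finite, S would be a countable union of finite sets,
% hence countable; contradiction. So some A_N is infinite, and any k of
% its elements already sum to more than k/N, which is unbounded in k.
\[
  S=\bigcup_{n=1}^{\infty}A_n,\qquad
  A_n=\Bigl\{x\in S:\ x>\tfrac1n\Bigr\},\qquad
  \sum_{x\in F}x>\frac{|F|}{N}\quad\text{for any finite }F\subseteq A_N .
\]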
Title: Re: How close are we from building a virtual universe?
Post by: Just thinking on 20/07/2021 18:54:58
Also, for the real analysis buffs among you, there was one statement I made in this video that is a rather nice puzzle.  Namely, if the probabilities for each value in a given range (of the real number line) are all non-zero, no matter how small, their sum will be infinite.  This isn't immediately obvious, given that you can have convergent sums of countable infinitely many values, but if you're up for it see if you can prove that the sum of any uncountable infinite collection of positive values must blow up to infinity.
I found the video very difficult to understand, as my brain is not wired for this logic. I can understand simple statistics and likelihoods. As with the coin flip, my way of seeing this is that the likelihood of the coin landing on the same side 10x is 1 in 1,024, or the likelihood of the coin landing 5 up and 5 down is 50:50. The likelihood is a simple statistical chance and is by no means a constant.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 21/07/2021 08:35:59
Also, for the real analysis buffs among you, there was one statement I made in this video that is a rather nice puzzle.  Namely, if the probabilities for each value in a given range (of the real number line) are all non-zero, no matter how small, their sum will be infinite.  This isn't immediately obvious, given that you can have convergent sums of countable infinitely many values, but if you're up for it see if you can prove that the sum of any uncountable infinite collection of positive values must blow up to infinity.
I found the video very difficult to understand, as my brain is not wired for this logic. I can understand simple statistics and likelihoods. As with the coin flip, my way of seeing this is that the likelihood of the coin landing on the same side 10x is 1 in 1,024, or the likelihood of the coin landing 5 up and 5 down is 50:50. The likelihood is a simple statistical chance and is by no means a constant.
Try this.
https://www.omnicalculator.com/statistics/coin-flip-probability
(https://www.thenakedscientists.com/forum/index.php?action=dlattach;topic=77747.0;attach=32208)
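What the linked calculator computes is just the binomial formula. Here is a minimal equivalent (note that exactly 5 up and 5 down comes out near 24.6%, not 50:50):

Code:
from math import comb

def prob_exactly(k: int, n: int, p: float = 0.5) -> float:
    """Probability of exactly k heads in n flips of a coin with P(heads)=p."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

print(prob_exactly(10, 10))  # 0.0009765625 = 1/1024: one specific side, ten times
print(prob_exactly(5, 10))   # 0.24609...: exactly 5 up and 5 down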
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 23/07/2021 05:56:12
https://scitechdaily.com/deepmind-releases-accurate-picture-of-the-human-proteome-the-most-significant-contribution-ai-has-made-to-advancing-scientific-knowledge-to-date/
Quote
DeepMind and EMBL release the most complete database of predicted 3D structures of human proteins.

Partners use AlphaFold, the AI system recognized last year as a solution to the protein structure prediction problem, to release more than 350,000 protein structure predictions including the entire human proteome to the scientific community.

DeepMind today announced its partnership with the European Molecular Biology Laboratory (EMBL), Europe’s flagship laboratory for the life sciences, to make the most complete and accurate database yet of predicted protein structure models for the human proteome. This will cover all ~20,000 proteins expressed by the human genome, and the data will be freely and openly available to the scientific community. The database and artificial intelligence system provide structural biologists with powerful new tools for examining a protein’s three-dimensional structure, and offer a treasure trove of data that could unlock future advances and herald a new era for AI-enabled biology.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 24/07/2021 06:16:59
https://www.bbc.co.uk/news/technology-57942909
Quote
Mark Zuckerberg has laid out his vision to transform Facebook from a social media network into a “metaverse company” in the next five years.

A metaverse is an online world where people can game, work and communicate in a virtual environment, often using VR headsets.

The Facebook CEO described it as “an embodied internet where instead of just viewing content - you are in it”.
It looks like it's closer than many of us think.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 26/07/2021 12:59:23
https://neurosciencenews.com/aging-junk-dna-18975/
Potential Role of ‘Junk DNA’ Sequence in Aging and Cancer Identified
Quote
Summary: VNTR2-1, a recently identified region of DNA, appears to drive the activity of the telomerase gene. The telomerase gene has previously been found to prevent aging in specific cells.

Source: Washington State University
Quote
The telomerase gene controls the activity of the telomerase enzyme, which helps produce telomeres, the caps at the end of each strand of DNA that protect the chromosomes within our cells. In normal cells, the length of telomeres gets a little bit shorter every time cells duplicate their DNA before they divide. When telomeres get too short, cells can no longer reproduce, causing them to age and die.

However, in certain cell types–including reproductive cells and cancer cells–the activity of the telomerase gene ensures that telomeres are reset to the same length when DNA is copied. This is essentially what restarts the aging clock in new offspring but is also the reason why cancer cells can continue to multiply and form tumors.

Knowing how the telomerase gene is regulated and activated and why it is only active in certain types of cells could someday be the key to understanding how humans age, as well as how to stop the spread of cancer. That is why Zhu has focused the past 20 years of his career as a scientist solely on the study of this gene.

Zhu said that his team’s latest finding that VNTR2-1 helps to drive the activity of the telomerase gene is especially notable because of the type of DNA sequence it represents.

“Almost 50% of our genome consists of repetitive DNA that does not code for protein,” Zhu said. “These DNA sequences tend to be considered as ‘junk DNA’ or dark matters in our genome, and they are difficult to study. Our study describes that one of those units actually has a function in that it enhances the activity of the telomerase gene.”

Their finding is based on a series of experiments that found that deleting the DNA sequence from cancer cells–both in a human cell line and in mice–caused telomeres to shorten, cells to age, and tumors to stop growing.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 28/07/2021 13:50:37
An article by an AI about AIs writing articles
(https://pbs.twimg.com/media/E7U6tdcX0AMAFb9?format=jpg&name=large)
https://twitter.com/Sentdex/status/1420105928775503882?s=20
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 03/08/2021 18:09:43
When I Googled articles about a virtual universe, I got these.

https://www.nature.com/articles/509170a
Quote
A numerical simulation of cosmic structure formation reproduces both large- and smaller-scale features of a representative volume of the Universe from early in its history to the present day. See Article p.177

Perhaps the greatest triumph of modern cosmology is that a model with only six parameters can explain the vast majority of observational data from the first minutes of the Universe to the present day [1]. This standard model posits that 95% of the Universe today is composed of enigmatic 'dark matter' and 'dark energy'. Paradoxically, modelling the dynamics of the remaining 5% — normal, 'baryonic' matter — has proved to be the more challenging task. On page 177 of this issue, Vogelsberger et al. [2] describe a numerical simulation of the formation of cosmic structure that captures both the large-scale distribution of baryonic material and its properties in individual galactic systems through cosmic time.

https://en.wikipedia.org/wiki/Virtual_world
Quote
A virtual world (also called a virtual space) is a computer-simulated environment[1] which may be populated by many users who can create a personal avatar, and simultaneously and independently explore the virtual world, participate in its activities and communicate with others.[2] These avatars can be textual,[3] graphical representations, or live video avatars with auditory and touch sensations.[4][5]

The user accesses a computer-simulated world which presents perceptual stimuli to the user, who in turn can manipulate elements of the modeled world and thus experience a degree of presence.[6] Such modeled worlds and their rules may draw from reality or fantasy worlds. Example rules are gravity, topography, locomotion, real-time actions, and communication. Communication between users can range from text, graphical icons, visual gesture, sound, and rarely, forms using touch, voice command, and balance senses.

http://spaceengine.org/
Quote
SpaceEngine is a realistic virtual Universe you can explore on your computer. You can travel from star to star, from galaxy to galaxy, landing on any planet, moon, or asteroid with the ability to explore its alien landscape. You can alter the speed of time and observe any celestial phenomena you please. All transitions are completely seamless, and this virtual universe has a size of billions of light-years across and contains trillions upon trillions of planetary systems. The procedural generation is based on real scientific knowledge, so SpaceEngine depicts the universe the way it is thought to be by modern science. Real celestial objects are also present if you want to visit them, including the planets and moons of our Solar system, thousands of nearby stars with newly discovered exoplanets, and thousands of galaxies that are currently known.

They seem to focus on simulating objective reality, with precision and accuracy as their highest priorities, respectively. But if we treat the virtual universe as a tool to help us achieve the universal terminal goal, as this thread intends, something important is still missing. That thing is relevance.

Let's say that someday, somehow, we can create a detailed and accurate simulation of a distant planet that we can't reach and that won't affect us in the foreseeable future. Then the resources used to create that simulation would be better spent on simulating other parts of the universe which are more relevant to achieving the universal terminal goal.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 04/08/2021 11:57:40
A virtual universe doesn't have to cover the whole universe. A small part of it is enough. The bare minimum is that something is used to represent a characteristic or property of something else.

The virtual universe itself can be characterized by three criteria: precision, accuracy, and relevance.
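As a toy illustration only (none of these conventions are an established metric), the three criteria could be scored like this: precision as bits of interval narrowing relative to a prior, accuracy as whether the claimed interval contains the true value, and relevance as a weight supplied by the goal:

Code:
import math

def evaluate(low: float, high: float, true_value: float,
             prior_width: float, relevance: float) -> dict:
    """Toy scores for one claim held in a virtual universe:
       precision - bits of uncertainty removed relative to a prior interval
       accurate  - does the claimed interval actually contain the truth?
       relevance - externally supplied weight derived from the goal."""
    return {
        "precision_bits": math.log2(prior_width / (high - low)),
        "accurate": low <= true_value <= high,
        "relevance": relevance,
    }

# The claim "3 <= pi <= 4", against an assumed prior interval of width 8:
print(evaluate(3.0, 4.0, math.pi, prior_width=8.0, relevance=0.9))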

 
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 04/08/2021 12:04:22
In its general form, a virtual universe can differentiate conscious from non-conscious entities. For example, smart cars vs. dumb cars.
As I described earlier, consciousness here means the ability of a system to determine its own future. That's the definition most relevant to the universal terminal goal, the main subject of my thread.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 04/08/2021 12:58:57
Our mental map of our surroundings is a form of virtual universe. Simpler organisms also have simpler versions of a virtual universe. Among unicellular organisms, the CRISPR defense system can be seen as an outstanding example of a virtual universe. They memorize the genetic code of invading viruses in the form of DNA too, which is perhaps the only long-term data storage that they have. At a glance, it may look costly. But it turns out that the benefits outweigh the costs.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 04/08/2021 23:07:40
They memorize the genetic code of invading viruses in the form of DNA too, which is perhaps the only long-term data storage that they have.
It reminds us that a virtual universe doesn't have to be in electronic form. It can be made of the same materials as the system being represented. For example,
Quote
Nearly 8,000 miles from Osama bin Laden's lair, Navy Seal Team Six trained in a mock-up of the compound at a North Carolina Defense Department facility.
https://www.cnet.com/tech/services-and-software/bing-map-shows-cias-secret-bin-laden-compound-mock-up/

The advantages of the model are its accessibility and its safety for experiments to optimize planning, which means cost reduction in trial and error.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 08/08/2021 16:31:57
Among unicellular organisms, the CRISPR defense system can be seen as an outstanding example of a virtual universe. They memorize the genetic code of invading viruses in the form of DNA too, which is perhaps the only long-term data storage that they have. At a glance, it may look costly. But it turns out that the benefits outweigh the costs.
This case emphasizes the relevance aspect of a virtual universe. If something is important, you do it anyway, even if it's hard, costly, or dangerous.
The bacterium doesn't seem to care what its environment looks like beyond a few microns from its current position. But it does care about a chunk of virus DNA that can infect it, because that is relevant to its survival and existence in the future.
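The CRISPR analogy boils down to storing fragments of past invaders and screening new input against them. A toy sketch; the sequences and the spacer length are made up:

Code:
# Toy sketch of CRISPR-style memory: store fragments ("spacers") of past
# invaders, then screen incoming DNA against them.
spacers = {"ACGTTGCA", "TTGACCGT"}        # remembered virus fragments

def infected_before(incoming_dna: str, k: int = 8) -> bool:
    """Check every length-k window of incoming DNA against stored spacers."""
    return any(incoming_dna[i:i + k] in spacers
               for i in range(len(incoming_dna) - k + 1))

print(infected_before("GGGACGTTGCAGGG"))  # True: matches a stored spacer
print(infected_before("GGGGGGGGGGGGGG"))  # False: unknown sequence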
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 10/08/2021 08:09:06
Let's say you received a message from your friend saying that fajlusd is a phidgymb. This message is meaningless until you can relate those things to other things you already know.

We accumulate knowledge by relating new information to existing information we are already familiar with. Analogy is an example.

We build our virtual universe by adding new information and relating it to the existing information. The first knowledge that we have is of our own existence, as asserted by Descartes' cogito ergo sum. It becomes the nucleation site of the virtual universe.

Written language has created a form of virtual universe outside the minds of living organisms. Supercomputer servers in the cloud are just an advanced version of it.

Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 12/08/2021 04:38:08
OpenAI can translate English into code with its new machine learning software Codex
‘We see this as a tool to multiply programmers’
https://www.theverge.com/2021/8/10/22618128/openai-codex-natural-language-into-code-api-beta-access
Quote
AI research company OpenAI is releasing a new machine learning tool that translates the English language into code. The software is called Codex and is designed to speed up the work of professional programmers, as well as help amateurs get started coding.

In demos of Codex, OpenAI shows how the software can be used to build simple websites and rudimentary games using natural language, as well as translate between different programming languages and tackle data science queries. Users type English commands into the software, like “create a webpage with a menu on the side and title at the top,” and Codex translates this into code. The software is far from infallible and takes some patience to operate, but could prove invaluable in making coding faster and more accessible.

“We see this as a tool to multiply programmers,” OpenAI’s CTO and co-founder Greg Brockman told The Verge. “Programming has two parts to it: you have ‘think hard about a problem and try to understand it,’ and ‘map those small pieces to existing code, whether it’s a library, a function, or an API.’” The second part is tedious, he says, but it’s what Codex is best at. “It takes people who are already programmers and removes the drudge work.”

OpenAI used an earlier version of Codex to build a tool called Copilot for GitHub, a code repository owned by Microsoft, which is itself a close partner of OpenAI. Copilot is similar to the autocomplete tools found in Gmail, offering suggestions on how to finish lines of code as users type them out. OpenAI’s new version of Codex, though, is much more advanced and flexible, not just completing code, but creating it.

Codex is built on the top of GPT-3, OpenAI’s language generation model, which was trained on a sizable chunk of the internet, and as a result can generate and parse the written word in impressive ways. One application users found for GPT-3 was generating code, but Codex improves upon its predecessors’ abilities and is trained specifically on open-source code repositories scraped from the web.

Quote
“Sometimes it doesn’t quite know exactly what you’re asking,” laughs Brockman. He has a few more tries, then comes up with a command that works without this unwanted change. “So you had to think a little about what’s going on but not super deeply,” he says.

This is fine in our little demo, but it says a lot about the limitations of this sort of program. It’s not a magic genie that can read your brain, turning every command into flawless code — nor does OpenAI claim it is. Instead, it requires thought and a little trial and error to use. Codex won’t turn non-coders into expert programmers overnight, but it’s certainly much more accessible than any other programming language out there.

OpenAI is bullish about the potential of Codex to change programming and computing more generally. Brockman says it could help solve the programmer shortage in the US, while Zaremba sees it as the next step in the historical evolution of coding.

“What is happening with Codex has happened before a few times,” he says. In the early days of computing, programming was done by creating physical punch cards that had to be fed into machines, then people invented the first programming languages and began to refine these. “These programming languages, they started to resemble English, using vocabulary like ‘print’ or ‘exit’ and so more people became able to program.” The next part of this trajectory is doing away with specialized coding languages altogether and replacing it with English language commands.

“Each of these stages represents programming languages becoming more high level,” says Zaremba. “And we think Codex is bringing computers closer to humans, letting them speak English rather than machine code.” Codex itself can speak more than a dozen coding languages, including JavaScript, Go, Perl, PHP, Ruby, Swift, and TypeScript. It’s most proficient, though, in Python.

Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 12/08/2021 05:13:09
Technological Singularity: An Impending "Intelligence Explosion"
We know it’s coming, but is it likely to happen soon?
https://interestingengineering.com/technological-singularity-an-impending-intelligence-explosion
Quote
In this century, humanity is predicted to undergo a transformative experience, the likes of which have not been seen since we first began to speak, fashion tools, and plant crops. This experience goes by various names - "Intelligence Explosion," "Accelerando," "Technological Singularity" - but they all have one thing in common.

They all come down to the hypothesis that accelerating change, technological progress, and knowledge will radically change humanity. In its various forms, this theory cites concepts like the iterative nature of technology, advances in computing, and historical instances where major innovations led to explosive growth in human societies.

Many proponents believe that this "explosion" or "acceleration" will take place sometime during the 21st century. While the specifics are subject to debate, there is general consensus among proponents that it will come down to developments in the fields of computing and artificial intelligence (AI), robotics, nanotechnology, and biotechnology.

In addition, there are differences in opinion as to how it will take place, whether it will be the result of ever-accelerating change, a runaway acceleration triggered by self-replicating and self-upgrading machines, an "intelligence explosion" caused by the birth of an advanced and independent AI, or the result of biotechnological augmentation and enhancement.

Opinions also differ on whether or not this will be felt as a sudden switch-like event or a gradual process spread out over time which might not have a definable beginning or inflection point. But either way, it is agreed that once the Singularity does occur, life will never be the same again. In this respect, the term "singularity" - which is usually used in the context of black holes - is quite apt because it too has an event horizon, a point in time where our capacity to understand its implications breaks down.
(https://inteng-storage.s3.amazonaws.com/img/iea/lV6DbMvDOx/sizes/paradigmshiftsfrr15eventssvg_resize_md.png)
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 12/08/2021 05:25:57
Quote
The use of the term "singularity" in this context first appeared in an article written by Stanislav Ulam about the life and accomplishments of John von Neumann. In the course of recounting opinions his friend held, Ulam described how the two talked at one point about accelerating change:

"One conversation centered on the ever-accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which  human affairs, as we know them, could not continue."

However, the idea that humanity may one day achieve an "intelligence explosion" has some precedent that predates Ulam's description. Mahendra Prasad of UC Berkeley, for example, credits 18th-century mathematician Nicolas de Condorcet with making the first recorded prediction, as well as creating the first model for it.

In his essay, Sketch for a Historical Picture of the Progress of the Human Mind: Tenth Epoch (1794), de Condorcet expressed how knowledge acquisition, technological development, and human moral progress were subject to acceleration:

"How much greater would be the certainty, how much more vast the scheme of our hopes if... these natural [human] faculties themselves and this [human body] organization could also be improved?... The improvement of medical practice... will become more efficacious with the progress of reason...

"[W]e are bound to believe that the average length of human life will forever increase... May we not extend [our] hopes [of perfectibility] to the intellectual and moral faculties?... Is it not probable that education, in perfecting these qualities, will at the same time influence, modify, and perfect the [physical] organization?"
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 12/08/2021 07:31:18
Data compression is at the heart of a virtual universe.

Huffman Codes: An Information Theory Perspective

Quote
Huffman Codes are one of the most important discoveries in the field of data compression. When you first see them, they almost feel obvious in hindsight, mainly due to how simple and elegant the algorithm ends up being. But there's an underlying story of how they were discovered by Huffman and how he built the idea from early ideas in information theory that is often missed. This video is all about how information theory inspired the first algorithms in data compression, which later provided the groundwork for Huffman's landmark discovery.

0:00 Intro
2:02 Modeling Data Compression Problems
6:20 Measuring Information
8:14 Self-Information and Entropy
11:03 The Connection between Entropy and Compression
16:47 Shannon-Fano Coding
19:52 Huffman's Improvement
24:10 Huffman Coding Examples
26:10 Huffman Coding Implementation
27:08 Recap
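Since the whole video is about Huffman codes, here is a compact implementation of the standard algorithm (a sketch of the textbook method, not code from the video):

Code:
import heapq
from collections import Counter

def huffman_codes(text: str) -> dict:
    """Standard Huffman construction: repeatedly merge the two least
    frequent subtrees; the merged pair's codes get a leading 0 and 1."""
    heap = [[freq, i, {sym: ""}]
            for i, (sym, freq) in enumerate(Counter(text).items())]
    heapq.heapify(heap)
    tiebreak = len(heap)
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)
        f2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + code for s, code in c1.items()}
        merged.update({s: "1" + code for s, code in c2.items()})
        heapq.heappush(heap, [f1 + f2, tiebreak, merged])
        tiebreak += 1
    return heap[0][2]

codes = huffman_codes("abracadabra")
print(codes)                                     # frequent symbols get shorter codes
print("".join(codes[c] for c in "abracadabra"))  # the compressed bitstring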
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 18/08/2021 09:32:39
Is a Knowledge Graph capable of capturing human knowledge?
https://alessandro-negro.medium.com/is-a-knowledge-graph-capable-of-capturing-human-knowledge-8521162f06b2
Quote
In recent years Knowledge Graphs have been used to solve one of the biggest problems not only in machine learning but also in computer science in general: how to represent knowledge.
“Knowledge representation and reasoning is the area of Artificial Intelligence (AI) concerned with how knowledge can be represented symbolically and manipulated in an automated way by reasoning programs. More informally, it is the part of AI that is concerned with thinking, and how thinking contributes to intelligent behavior.” [Brachman and Levesque, 2004]
This aspect is critical since any “agent” — human, animal, electronic, mechanical, to behave intelligently, requires knowledge. Think about us as humans, for a very wide range of activities, we make decisions based on what we effortlessly and unconsciously know (or believe) about the world. Our [intelligent] behaviour is clearly conditioned, if not dominated, by knowledge.
Knowledge representation and reasoning focuses on the knowledge, not the knower. In this context, a graph based representation is becoming one of the most prominent approaches, thanks to its flexibility of representing concepts and relationships amongst them in a simple and generic data structure.
Quote
What is a Knowledge Graph?
For this question there is no gold standard, universally accepted definition, but my favorite is the one given by Gomez-Perez et al. [Gomez-Perez et al., 2020]:
“A knowledge graph consists of a set of interconnected typed entities and their attributes.”
According to this definition, the basic unit of a Knowledge Graph is the representation of an entity, such as a person, organization, or location, or perhaps a sporting event or a book or movie. Each entity might have various attributes. For a person, those attributes would include the name, address, birth date, and so on. Entities are connected to each other by relations: for example, a person works for a company, and a user likes a page or follows another user. Relations can also be used to bridge two separate Knowledge Graphs [Negro, 2021].

Quote
Conclusion
This blog post formally demonstrates how Knowledge Graphs are concretely capable of representing the knowledge available in multiple domains not only in a way that facilitates, at first glance, its exploration and navigation for analysts. The inherent structures and the forces that drive the connection among the entities in the graph coming from the related domain (in our example the biological rules) can be captured and analyzed also by artificial and autonomous agents. The classification represented here is just an example of how machine learning algorithms can be properly fed by graph in such a manner that would be impossible or very hard otherwise. In order to obtain the same accuracy we would have had to collect many common features related to each of the entities we wanted to classify.
It is worth noting here that this effort doesn’t go in the direction of replacing the human capability to analyze this knowledge but it is an empowerment. The capability of processing enormous amounts of data goes beyond human possibilities. Nevertheless, this is why machine learning has been introduced. In any case at the end of these processes, it is always human responsibility to evaluate the insights and, more in general, the results of this analysis and to make informed and wiser decisions based on them.
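At its most minimal, a knowledge graph of "interconnected typed entities" can be sketched as a set of (subject, relation, object) triples. The entities and facts below are invented for illustration:

Code:
# Minimal knowledge-graph sketch: entities connected by relations, stored
# as (subject, relation, object) triples.
triples = {
    ("alice", "works_for", "acme"),
    ("alice", "likes", "page_42"),
    ("acme", "located_in", "jakarta"),
}

def query(subject=None, relation=None, obj=None):
    """Return every triple matching the pattern; None acts as a wildcard."""
    return [(s, r, o) for (s, r, o) in triples
            if subject in (None, s) and relation in (None, r) and obj in (None, o)]

print(query(subject="alice"))        # everything known about alice
print(query(relation="located_in"))  # all location facts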
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 20/08/2021 05:41:04
Self-awareness is one criterion for consciousness. We can learn about it from something else that is similar to us, but simpler, just like what's shown in the article below.

https://scitechdaily.com/human-brain-organoids-grown-in-lab-with-eyes-that-respond-to-light/
Quote
Human induced pluripotent stem cells (iPSCs) can be used to generate brain organoids containing an eye structure called the optic cup, according to a study published on August 17, 2021, in the journal Cell Stem Cell. The organoids spontaneously developed bilaterally symmetric optic cups from the front of the brain-like region, demonstrating the intrinsic self-patterning ability of iPSCs in a highly complex biological process.

“Our work highlights the remarkable ability of brain organoids to generate primitive sensory structures that are light sensitive and harbor cell types similar to those found in the body,” says senior study author Jay Gopalakrishnan of University Hospital Düsseldorf. “These organoids can help to study brain-eye interactions during embryo development, model congenital retinal disorders, and generate patient-specific retinal cell types for personalized drug testing and transplantation therapies.”
(https://scitechdaily.com/images/Brain-Organoid-With-Optic-Cups-768x681.jpg)

(https://scitechdaily.com/images/Development-of-Brain-Organoid-With-Optic-Cups-768x645.jpg)

Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 21/08/2021 06:05:56

https://jrodthoughts.medium.com/deepminds-idea-to-build-neural-networks-that-can-replay-past-experiences-just-like-humans-do-f9d7721473ac
Quote
DeepMind’s Idea to Build Neural Networks that can Replay Past Experiences Just Like Humans Do
DeepMind researchers created a model that can replay past experiences in a way that simulates the mechanisms of the hippocampus.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 22/08/2021 04:44:25
I think this is a major milestone in our efforts of building a virtual universe.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 27/08/2021 09:04:18
https://www.digitaltrends.com/computing/ai-leading-chip-design-revolution/
A.I. is leading a chip design revolution, and it’s only just getting started
Quote
For decades, constant innovation in the world of semiconductor chip design has made processors faster, more efficient, and easier to produce. Artificial intelligence (A.I.) is leading the next wave of innovation, trimming the chip design process from years to months by making it fully autonomous.

Google, Nvidia, and others have showcased specialized chips designed by A.I., and electronic design automation (EDA) companies have already leveraged A.I. to speed up chip design. Software company Synopsys has a broader vision: Chips designed by A.I. from start to finish.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 27/08/2021 09:13:50
Visa Enters Metaverse With First NFT Purchase
https://www.forbes.com/sites/ninabambysheva/2021/08/23/visa-enters-metaverse-with-first-nft-purchase/?sh=623ac6d668b3
Quote
On August 18, digital payments giant Visa spent $150,000 to buy a unique work of art, and in so doing quietly took its first step into the metaverse, a nascent online world that promises to transform the internet into a virtual reality.

Instead of canvas or marble, the pixelated artwork, named CryptoPunk 7610, is what’s known as a non-fungible token (NFT), a unique digital asset which, similarly to bitcoin, certifies the authenticity, ownership and provenance of any digital object written to a blockchain. One of the 10,000 24x24 pixel images of the CryptoPunk collection, generated algorithmically, Visa’s first NFT is an avatar of a female character, distinguishable by a mohawk, large green eyes and bright red lipstick.

However, the company didn’t actually custody the 49.5 ETH, paid for the token, or the asset itself. Instead, newly licensed bank, Anchorage, has helped facilitate the deal, and importantly became the first known U.S. bank to custody one of these novel assets.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 28/08/2021 16:05:39
We should be careful which metaverse we choose to live in

Quote

   The great thing about the future is you can make it up. If present day reality is messier than you had hoped, then you can construct an alternative one, where everything is much cleaner. So it is with the latest West Coast infatuation with the metaverse. Now that the Federal Trade Commission is hammering on Big Tech’s door and even the Taliban is using audio app Clubhouse, maybe it is time to add a shiny new dimension to the future. 

The term metaverse comes from Snow Crash, a 1992 science fiction novel by Neal Stephenson, in which human avatars and software daemons inhabit a parallel 3D universe. The term now has a life of its own and has cropped up recently in chief executive presentations from Microsoft’s Satya Nadella and Facebook’s Mark Zuckerberg. 

https://www.ft.com/content/bcac6b61-7b11-4469-99b7-c125311fa34d
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 28/08/2021 16:17:33
The good thing about virtualization is that it allows us to perform trial-and-error experiments with fewer resources than doing them in real life. But to be useful, it must be related to objective reality, at least at some points.

Unicellular organisms perform those trial-and-error experiments all the time with their multiple duplicate copies. Some of them may survive in each generation, but most of them will die, which makes the experiment inefficient.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 29/08/2021 03:03:52
Quote
According to Boston Dynamics, Atlas uses “perception” to navigate the world. The company’s website states that Atlas uses “depth sensors to generate point clouds of the environment and detect its surroundings.” This is similar to the technology used in self-driving cars to detect roads, objects, and people in their surroundings.

This is another shortcut that the AI community has been taking. Human vision doesn’t rely on depth sensors. We use stereo vision, parallax motion, intuitive physics, and feedback from all our sensory systems to create a mental map of the environment. Our perception of the world is not perfect and can be duped, but it’s good enough to make us excellent navigators of the physical world most of the time.

https://venturebeat.com/2021/08/27/inside-boston-dynamics-project-to-create-humanoid-robots/
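For what it's worth, the point-cloud step the quote mentions is conceptually simple even though the engineering is not. A minimal sketch of back-projecting a depth image into 3D points, assuming an idealized pinhole camera with made-up intrinsics:
Code: [Select]
# Hedged sketch: back-projecting a depth image into a 3D point cloud
# using a pinhole camera model. Intrinsics (fx, fy, cx, cy) are made up.
import numpy as np

def depth_to_point_cloud(depth, fx=525.0, fy=525.0, cx=319.5, cy=239.5):
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx   # standard pinhole back-projection
    y = (v - cy) * z / fy
    # Stack into an (N, 3) array of points, dropping zero-depth pixels.
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]

depth = np.random.uniform(0.5, 4.0, size=(480, 640))  # fake sensor frame
print(depth_to_point_cloud(depth).shape)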
Title: Re: How close are we from building a virtual universe?
Post by: Eternal Student on 30/08/2021 03:41:06
Hi.
 
I'm a CS/P
The Chartered Society of Physiotherapists is what Google puts at the top of the list for the acronym. So I've got to ask: what is a CS or a CSP?
Best Wishes.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 30/08/2021 05:16:28
Well, depends what U mean, by build, and what size.
Just use standard definitions, unless stated otherwise.
Quote
build: construct (something) by putting parts or material together.
Quote
size: the relative extent of something; a thing's overall dimensions or magnitude; how big something is.

Actually, I have absolutely no doubt/s that our cosmos is a simulation, and that we are VR.
And I'm not the only one, who thinks so, of course.
(The universe is my real/physical/HW based/classical model and
  the cosmos   is my SW based/virtual/quantum model).

Quote
cos·mos: the universe seen as a well-ordered whole.
Quote
universe: all existing matter and space considered as a whole; the cosmos.
What makes you think that the cosmos is virtual/quantum?
What makes you think that the real/physical universe is classical?
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 30/08/2021 05:21:40
The article shows how a virtual universe can be useful and practical.
https://scitechdaily.com/putting-a-super-cork-on-the-coronavirus-new-hope-in-the-battle-against-covid-19/
Quote
Therapeutic approach developed by Weizmann Institute scientists could spell new hope in the battle against COVID-19.

Even though vaccines may be steering the world toward a post-pandemic normal, a constantly mutating SARS-CoV-2 necessitates the development of effective drugs. In a new study published in Nature Microbiology, Weizmann Institute of Science researchers, together with collaborators from the Pasteur Institute, France, and the National Institutes of Health (NIH), USA, offer a novel therapeutic approach to combating the notorious virus. Rather than targeting the viral protein responsible for the virus entering the cell, the team of researchers addressed the protein on our cells’ membrane that enables this entry. Using an advanced artificial evolution method that they developed, the researchers generated a molecular “super cork” that physically jams this “entry port,” thus preventing the virus from attaching itself to the cell and entering it.

https://scitechdaily.com/new-achilles-heel-of-coronavirus-aptamer-molecule-attacks-coronavirus-in-a-novel-way
Quote
Active ingredient inhibits infection with so-called pseudoviruses in the test tube, as shown by study at the University of Bonn.

Scientists at the University of Bonn and the caesar research center have isolated a molecule that might open new avenues in the fight against SARS coronavirus 2. The active ingredient binds to the spike protein that the virus uses to dock to the cells it infects. This prevents them from entering the respective cell, at least in the case of model viruses. It appears to do this by using a different mechanism than previously known inhibitors. The researchers therefore suspect that it may also help against viral mutations. The study will be published in the journal Angewandte Chemie and is already available online.

The novel active ingredient is a so-called aptamer. These are short chains of DNA, the chemical compound that also makes up chromosomes. DNA chains like to attach themselves to other molecules; one might call them sticky. In chromosomes, DNA is therefore present as two parallel strands whose sticky sides face each other and that coil around each other like two twisted threads.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 30/08/2021 11:08:03
What makes me think our cosmos is a simulation? All the quantum paradoxes.
I have absolutely no doubt/s that something like that is possible only inside a computer, by a computer.
Have you considered the possibility that we have misunderstood something in those paradoxes? Some false assumptions, maybe?
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 31/08/2021 06:57:54
I/have wasted my life/time trying to explain all those paradoxes away, classically,
only to realise in my old age/the end that it can't be done.
Prior to Newton, the movement of the planets was impossible to explain naturally. Even Newton thought that electromagnetic phenomena were too mysterious.

Whenever we get an unexpected result, there must be at least one false assumption that we've made, either explicitly or implicitly. We just need to identify all the assumptions that we've employed to form our expectations, and then identify which of them are not necessarily true.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 31/08/2021 07:00:32
"A quantum experiment suggests there’s no such thing as objective reality | MIT Technology Review" https://www.technologyreview.com/2019/03/12/136684/a-quantum-experiment-suggests-theres-no-such-thing-as-objective-reality
Quote
Physicists have long suspected that quantum mechanics allows two observers to experience different, conflicting realities. Now they’ve performed the first experiment that proves it.

Quote
The idea that observers can ultimately reconcile their measurements of some kind of fundamental reality is based on several assumptions. The first is that universal facts actually exist and that observers can agree on them.

But there are other assumptions too. One is that observers have the freedom to make whatever observations they want. And another is that the choices one observer makes do not influence the choices other observers make—an assumption that physicists call locality.

If there is an objective reality that everyone can agree on, then these assumptions all hold.

But Proietti and co’s result suggests that objective reality does not exist. In other words, the experiment suggests that one or more of the assumptions—the idea that there is a reality we can agree on, the idea that we have freedom of choice, or the idea of locality—must be wrong.

Of course, there is another way out for those hanging on to the conventional view of reality. This is that there is some other loophole that the experimenters have overlooked. Indeed, physicists have tried to close loopholes in similar experiments for years, although they concede that it may never be possible to close them all.
Nevertheless, the work has important implications for the work of scientists. “The scientific method relies on facts, established through repeated measurements and agreed upon universally, independently of who observed them,” say Proietti and co. And yet in the same paper, they undermine this idea, perhaps fatally.


Claiming that there's no objective reality is extraordinary, hence it requires extraordinary evidence. But we keep seeing this kind of research from time to time. Perhaps that's what it takes to get more attention.
Before they put the blame on the existence of objective reality, perhaps they should scrutinize their experimental setups and the theoretical model they used to explain the situation.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 02/09/2021 05:27:19
U may care to see (The Incredible) Halc's outstanding Best Answer to my Mach-Zehnder interferometer question,
and our no holds barred wrestling contest/match afterward/s. TIH vs TCC! I think I gave just as good as I got.


I've had a plan to make a Mach-Zehnder interferometer using microwaves for a while now, but it keeps getting pushed aside by other things. I'm curious about what would happen if the type of beam splitter is changed, e.g. replaced by polarizers.
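In the meantime, the ideal device can be worked out on paper: each 50/50 beam splitter is a 2x2 unitary matrix, the path-length difference between the arms is a phase factor, and the detector intensities fall out of a matrix product. A minimal numpy sketch, assuming lossless symmetric beam splitters (trying a different splitter type just means swapping the BS matrix):
Code: [Select]
# Idealized Mach-Zehnder interferometer: two 50/50 beam splitters
# with a relative phase shift phi between the arms.
import numpy as np

BS = np.array([[1, 1j], [1j, 1]]) / np.sqrt(2)  # symmetric beam splitter

def mzi_outputs(phi):
    phase = np.array([[np.exp(1j * phi), 0], [0, 1]])  # delay in one arm
    amp_in = np.array([1, 0])        # all input power in port 1
    amp_out = BS @ phase @ BS @ amp_in
    return np.abs(amp_out) ** 2      # detector intensities

for phi in (0, np.pi / 2, np.pi):
    print(phi, mzi_outputs(phi))     # sin^2(phi/2) and cos^2(phi/2)
At phi = 0 all the light exits one port; at phi = pi it has swapped to the other. That swap is the interference pattern the real microwave experiment should reproduce.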

I've already recorded some other experiments using radio, microwave, and laser. I just haven't had time to edit and upload all of the videos.

I'm afraid I'll just get even busier ahead, since I've got freelance side jobs designing the instrumentation and automation control systems for production process plants. That's the kind of problem that prompted me to create this thread.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 02/09/2021 06:55:40
Tesla's AI day reveals many things which show how close we are from building a virtual universe.

Tesla Transformers! Why is Vector Space so critical to FSD?


Quote
During Tesla's AI day, Andrej Karpathy, director of AI and autopilot vision at Tesla, went into a great deal of detail about how and why Tesla engineers have expended massive effort to transform video images from Tesla cameras into abstracted vector spaces. The way they achieved this, and the results, are astounding. From Hydranets to Transformers, to conversion to vector space, Karpathy explained how Tesla vision full self driving takes images from the cameras and converts them to a depth sorted 2D top down map of the surroundings--all in real time!
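The core geometric step, mapping camera pixels onto a top-down grid, can be illustrated far more crudely than Tesla's learned transformer does it. A hedged sketch of classical inverse perspective mapping, assuming a flat ground plane and made-up camera parameters (Tesla's approach replaces exactly this kind of brittle geometry with a learned transformation):
Code: [Select]
# Crude inverse perspective mapping: project a pixel ray from a forward-
# facing camera onto a flat ground plane to get top-down coordinates.
import numpy as np

FX, FY, CX, CY = 1000.0, 1000.0, 640.0, 360.0  # assumed intrinsics
CAM_HEIGHT = 1.4                                # metres above the road

def pixel_to_ground(u, v):
    # Ray direction in camera coordinates (x right, y down, z forward).
    ray = np.array([(u - CX) / FX, (v - CY) / FY, 1.0])
    if ray[1] <= 0:
        return None          # ray points at or above the horizon
    t = CAM_HEIGHT / ray[1]  # scale so the ray hits the ground plane
    x, _, z = t * ray
    return x, z              # lateral offset and forward distance

print(pixel_to_ground(640, 500))  # pixel below the horizon -> (0.0, 10.0)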
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 02/09/2021 10:27:52
Your Tesla can plan ahead! Does that mean it's conscious?


Quote
During Tesla AI day on August 19th, Ashok Elluswamy, Tesla’s director of autopilot software, demonstrated that Teslas driving the FSD (Full Self Driving) beta 9 have an almost eerie ability to plan ahead for issues that might arise while driving. Some of this comes down to basic physics--knowing how heavy and how big your "ego" car is--but a lot of your Tesla's ability to plan comes down to the car route planning... for all the other agents in the scene (other cars, pedestrians, bikes, etc). This is crazy--and it got me thinking about a book by Christopher McDougall, Born to Run, which posits that human consciousness arose on the plains of Africa as early humanoids had to place an agent model (a version of their own brains) into that of their hunting companions and the target prey.
But wait, you say, this is just what a Tesla is doing when it route plans. Might your Tesla actually be conscious?!

We are going to see machines with self-awareness and the capability to understand the behavior of other conscious agents. They can also choose appropriate instrumental goals to help achieve their terminal goals.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 02/09/2021 13:10:38
On the presentation (user interface) front, we've got this.


Now Games Can Look Like Pixar Movies - Unreal Engine 5
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 12/09/2021 02:01:12
A virtual universe on the large scale.
Quote
Forget about online games that promise you a "whole world" to explore. An international team of researchers has generated an entire virtual universe, and made it freely available on the cloud to everyone.

Uchuu (meaning "outer space" in Japanese) is the largest and most realistic simulation of the universe to date. The Uchuu simulation consists of 2.1 trillion particles in a computational cube an unprecedented 9.63 billion light-years to a side. For comparison, that's about three-quarters the distance between Earth and the most distant observed galaxies. Uchuu reveals the evolution of the universe on a level of both size and detail inconceivable until now.


https://phys.org/news/2021-09-largest-virtual-universe-free-explore.html
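Uchuu needed a supercomputer, but the underlying idea, gravitating particles stepped forward in time, can be sketched at toy scale. A direct-summation sketch in numpy; real cosmological codes use tree or particle-mesh methods rather than this O(N^2) loop, and the constants here are arbitrary:
Code: [Select]
# Toy direct-summation N-body step: the O(N^2) essence of what
# cosmological simulations do with far cleverer algorithms.
import numpy as np

rng = np.random.default_rng(0)
N, G, DT, SOFT = 100, 1.0, 0.01, 0.1   # particle count, units, timestep
pos = rng.uniform(-1, 1, (N, 3))
vel = np.zeros((N, 3))
mass = np.ones(N)

def accelerations(pos):
    d = pos[None, :, :] - pos[:, None, :]          # pairwise displacements
    r2 = (d ** 2).sum(-1) + SOFT ** 2              # softened distances
    inv_r3 = r2 ** -1.5
    np.fill_diagonal(inv_r3, 0.0)                  # no self-force
    return G * (d * inv_r3[..., None] * mass[None, :, None]).sum(axis=1)

for _ in range(100):                   # kick-drift (symplectic Euler) steps
    vel += accelerations(pos) * DT
    pos += vel * DT
print(pos.std(axis=0))                 # spread shrinks as clustering begins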
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 12/09/2021 06:16:49
And here's a virtual universe closer to our everyday lives. The author is good at explaining technical concepts to laypersons.
How does Tesla manage to label ALL THAT DATA? And why does it even matter?? AI Day Part 6
Quote
On August 19th, during Tesla AI day, Andrej Karpathy, director of artificial intelligence and autopilot vision, dove into a topic that is distinctly not sexy, but absolutely necessary for modern machine learning: collecting and especially labeling data for training.
After covering how Tesla Vision converts 2D images into 3D vector space, and discussing how the cars can plan ahead not just for themselves but for all other agents in the scene (you can watch my previous videos, linked above, for much more on this), Dr. Karpathy broached the topic of how Tesla deals with the mountains of data its 2-million-car-strong fleet produces now.
And while I thought I’d be bored by this section of the talk, I was, frankly, blown away by how brilliant Tesla’s data labeling strategy is, and also how much time, person power, and money Tesla has and is putting into labelling the best, most targeted data possible. Along with the incredible neural network architecture, this data labeling is what is enabling Tesla to achieve what seemed impossible just a short time ago: full autonomous driving using only cameras!
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 14/09/2021 05:07:22
OpenAI Codex: Just Say What You Want!

The paper "Evaluating Large Language Models Trained on Code" is available here:
https://openai.com/blog/openai-codex/

Once we get the technicalities out of the way, we can focus more on determining and achieving our terminal goals.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 14/09/2021 05:29:25
https://towardsdatascience.com/gpt-4-will-have-100-trillion-parameters-500x-the-size-of-gpt-3-582b98d82253
Are there any limits to large neural networks?
Quote
OpenAI was born to tackle the challenge of achieving artificial general intelligence (AGI) — an AI capable of doing anything a human can do.
Such a technology would change the world as we know it. It could benefit us all if used adequately but could become the most devastating weapon in the wrong hands. That’s why OpenAI took over this quest. To ensure it’d benefit everyone evenly: “Our goal is to advance digital intelligence in the way that is most likely to benefit humanity as a whole.”
Quote
The holy trinity — Algorithms, data, and computers
OpenAI believes in the scaling hypothesis. Given a scalable algorithm, the transformer in this case — the basic architecture behind the GPT family —, there could be a straightforward path to AGI that consists of training increasingly larger models based on this algorithm.
But large models are just one piece of the AGI puzzle. Training them requires large datasets and large amounts of computing power.
Data stopped being a bottleneck when the machine learning community started to unveil the potential of unsupervised learning. That, together with generative language models, and few-shot task transfer, solved the “large datasets” problem for OpenAI.
They only needed huge computational resources to train and deploy their models and they’d be good to go. That’s why they partnered with Microsoft in 2019. They licensed the big tech company so they could use some of OpenAI’s models commercially in exchange for access to its cloud computing infrastructure and the powerful GPUs they needed.
Quote
What can we expect from GPT-4?
100 trillion parameters is a lot. To understand just how big that number is, let’s compare it with our brain. The brain has around 80–100 billion neurons (GPT-3’s order of magnitude) and around 100 trillion synapses.
GPT-4 will have as many parameters as the brain has synapses.
Quote
OpenAI has been working nonstop in exploiting GPT-3’s hidden abilities. DALL·E was a special case of GPT-3, very much like Codex. But they aren’t absolute improvements, more like particular cases. GPT-4 promises more. It promises the depth of specialist systems like DALL·E (text-images) and Codex (coding) combined with the width of generalist systems like GPT-3 (general language).
And what about other human-like features, like reasoning or common sense? In that regard, Sam Altman says they’re not sure but he remains “optimistic.”
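The raw storage for such a model is easy to estimate as a sanity check (my back-of-envelope arithmetic, not the article's): at 2 bytes per parameter in fp16, just holding 100 trillion weights takes about 200 TB, before any optimizer state or activations.
Code: [Select]
# Back-of-envelope: memory just to hold the weights of a
# 100-trillion-parameter model at 2 bytes per parameter (fp16).
params = 100e12
bytes_per_param = 2          # fp16; training optimizer state adds more
terabytes = params * bytes_per_param / 1e12
print(f"{terabytes:.0f} TB just for the weights")   # -> 200 TB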
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 15/09/2021 13:27:51
G'duy, neighbour. I'm from Oz/tralia. The fabled land of Oz.
Thank U for Ur MZI replies. U look to me like an electrical engineer, type.
Both Ur names are Islamic. I assume U're a good Muslim/believer.
Didn't U ever wonder how something like that can be implemented? Only in SW.
Good day, my neighbor.
My current work is more about plant control, automation and instrumentation, although I also have experience leading the in-house utility plant at my site, as well as an electrical maintenance team.
As you can see in my signature, unexpected results come from false assumptions. Perhaps you can check my other threads about philosophy and morality.
Something like that can also happen in real life.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 15/09/2021 13:30:31
Here is another take on Tesla's AI day. It shows how close we are from building a virtual universe.

Watch Tesla’s Self-Driving Car Learn In a Simulation!
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 15/09/2021 13:33:15
"There is no classical explanation, so the universe is a simulation".
A classical explanation is not a single thing or version; that much I have learned from the history of scientific progress. There might be a version which can give satisfactory answers.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 15/09/2021 17:43:48
NO! There is/are no classical explanation/s, for quantum paradoxes/phenomena.
What's your definition of classical physics?
What makes quantum physics different from its classical counterpart?
Do you know that physics theories evolve over time, for both classical and quantum theories?
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 15/09/2021 17:45:27
But/t there is an explanation and it's a SW based universe/cosmos.
The software must run on the hardware. How does the hardware work?
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 15/09/2021 18:38:41
People often say that Newtonian mechanics is classical physics. So is Maxwellian electromagnetic theory. But they are incompatible with each other.
Newtonian optics and Huygens' optics are both classical theories, but they are also incompatible with each other.
Based on its name, quantum physics differs from classical physics in the quantization of energy transfers; classical physics doesn't recognize such quantization. Initially, though, Planck introduced his constant merely as a proportionality factor, which says that an oscillator in a black body needs more energy to produce radiation of higher frequency. Interpreting it as quantization of energy transfer came later, proposed by Einstein. Modern quantum theory is significantly different from the earlier versions.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 16/09/2021 03:02:24
I have no problem accepting new theories, as long as they can explain observations better than the existing theory, i.e. explain more observations with fewer assumptions.

But if a theory forces us to abandon causality, I think it's time to look for better alternatives. It's more likely that some errors have been made in deriving the theory, or in interpreting the observation.

Consciousness relies on the existence of causality. We make plans because we believe that our actions influence the results, not the other way around. And our own consciousness is the only unquestionable evidence of our own existence.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 16/09/2021 07:21:47
100 trillion parameters is a lot. To understand just how big that number is, let’s compare it with our brain. The brain has around 80–100 billion neurons (GPT-3’s order of magnitude) and around 100 trillion synapses.
GPT-4 will have as many parameters as the brain has synapses.

How Computationally Complex Is a Single Neuron?
Quote
Our mushy brains seem a far cry from the solid silicon chips in computer processors, but scientists have a long history of comparing the two. As Alan Turing put it in 1952: “We are not interested in the fact that the brain has the consistency of cold porridge.” In other words, the medium doesn’t matter, only the computational ability.

Today, the most powerful artificial intelligence systems employ a type of machine learning called deep learning. Their algorithms learn by processing massive amounts of data through hidden layers of interconnected nodes, referred to as deep neural networks. As their name suggests, deep neural networks were inspired by the real neural networks in the brain, with the nodes modeled after real neurons — or, at least, after what neuroscientists knew about neurons back in the 1950s, when an influential neuron model called the perceptron was born. Since then, our understanding of the computational complexity of single neurons has dramatically expanded, so biological neurons are known to be more complex than artificial ones. But by how much?

To find out, David Beniaguev, Idan Segev and Michael London, all at the Hebrew University of Jerusalem, trained an artificial deep neural network to mimic the computations of a simulated biological neuron. They showed that a deep neural network requires between five and eight layers of interconnected “neurons” to represent the complexity of one single biological neuron.
https://www.quantamagazine.org/how-computationally-complex-is-a-single-neuron-20210902
Quote
“We tried many, many architectures with many depths and many things, and mostly failed,” said London. The authors have shared their code to encourage other researchers to find a clever solution with fewer layers. But, given how difficult it was to find a deep neural network that could imitate the neuron with 99% accuracy, the authors are confident that their result does provide a meaningful comparison for further research. Lillicrap suggested it might offer a new way to relate image classification networks, which often require upward of 50 layers, to the brain. If each biological neuron is like a five-layer artificial neural network, then perhaps an image classification network with 50 layers is equivalent to 10 real neurons in a biological network.
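To restate the finding in code: the stand-in for one biological neuron is itself a small deep network. A hedged PyTorch sketch of the shape of such a surrogate; the real study used temporal convolutional networks over the neuron's synaptic input channels, and the input width and layer sizes below are made up for illustration:
Code: [Select]
# Illustrative only: a deep stack standing in for ONE biological neuron,
# mapping many synaptic input channels to a predicted output. The study
# (Beniaguev et al.) used temporal convolutions; widths here are made up.
import torch
import torch.nn as nn

one_neuron_surrogate = nn.Sequential(
    nn.Linear(1000, 128),   # assumed ~1000 synaptic input channels
    nn.ReLU(),
    nn.Linear(128, 128), nn.ReLU(),
    nn.Linear(128, 128), nn.ReLU(),
    nn.Linear(128, 128), nn.ReLU(),
    nn.Linear(128, 128), nn.ReLU(),
    nn.Linear(128, 1),      # predicted somatic response
)

x = torch.randn(32, 1000)   # a batch of fake synaptic activity snapshots
print(one_neuron_surrogate(x).shape)   # torch.Size([32, 1])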
Title: Re: How close are we from building a virtual universe?
Post by: Halc on 17/09/2021 02:23:59
People often say that Newtonian mechanics is classical physics. So is Maxwellian electromagnetic theory. But they are incompatible with each other.
Classic means non-quantum, and not all non-quantum theories are compatible with each other. Under classical physics, objects exist even unmeasured. They have a defined state at all times even if it isn't known. The moon is there even when nobody is looking at it, so to speak. Cause comes before effect and information cannot travel faster than light (the latter not being true under Newtonian physics).
None of this is necessarily the case with quantum mechanics. The rules differ from one interpretation to the next, but the empirical measurements do not. If one is to implement a simulation, one must choose an interpretation to simulate. Without that, you'd be implementing a thing without any design.

I've not read most of this thread. It's quite long, but typical of such assertions, there is never an eye given to looking for problems with the proposal. Only positive evidence is presented. This is known as the selection bias fallacy.
Address the problems. Actively seek them, else the idea will be shot down effortlessly when others do.

Have U heard about the Quantum Eraser?
Either the photons (can) travel back in time or the universe is implemented in SW
This is incorrectly stated. No interpretation of QM suggests either. The choice is: either there is reverse causality (effect before cause) or there is no state in the absence of measurement. The quantum eraser experiments are actually really hard evidence against a simulation.
Most simulations work by remembering the state of everything and then computing some future state at some small increment of time. This means choosing a quantum interpretation that has actual state, but such interpretations only work with reverse causality, meaning that you might have simulated the last billion years of physics, but some decision made just now has changed what happened a billion years ago, invalidating everything that has happened since (and yes, they've done experiments that apparently reach at least that far back). The simulation could never make forward progress.

Alternatively one could simulate a local interpretation of quantum mechanics, none of which require reverse causality like that. But the problem is you sacrifice state. If there's no current state, how can the next one be computed?
I cannot think of an algorithm that would simulate either kind of interpretation, and it has been proven that there cannot be one that has both real state and locality. That means that no classic algorithm can implement quantum mechanics at all, and thus any simulation would have to be at a classic level, which sounds intuitively plausible until one recognizes how much quantum effects affect just about everything we see every day. Without them, rainbows, electronics and nerve cells cannot work. The simulation would need to glean the purpose of every effect and change the physics accordingly.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 17/09/2021 09:40:19
Most simulations work by remembering the state of everything and then computing some future state at some small increment of time. This means choosing a quantum interpretation that has actual state, but such interpretations only work with reverse causality, meaning that you might have simulated the last billion years of physics, but some decision made just now has changed what happened a billion years ago, invalidating everything that has happened since (and yes, they've done experiments that apparently reach at least that far back).
Simulations can usually also work backward: based on current states, previous states can be calculated, just like future states. That's the basis for Laplace's demon.
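To make that concrete, here is a minimal sketch (my illustration, not a claim about any particular simulator): a deterministic, time-reversible integrator stepped forward, then run in reverse by flipping the velocity, recovering the initial state to numerical precision.
Code: [Select]
# Deterministic simulations can run backward: integrate a harmonic
# oscillator forward with velocity Verlet, flip the velocity, integrate
# the same number of steps, and recover the initial state.
DT, STEPS = 0.01, 1000

def verlet(x, v, steps, dt):
    for _ in range(steps):
        a = -x                       # F = -kx with k = m = 1
        x += v * dt + 0.5 * a * dt * dt
        a_new = -x
        v += 0.5 * (a + a_new) * dt
    return x, v

x, v = verlet(1.0, 0.0, STEPS, DT)   # forward in time
x, v = verlet(x, -v, STEPS, DT)      # reverse: same rule, flipped velocity
print(x, -v)                         # ~ (1.0, 0.0): Laplace's demon in toy form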

Which experiment are you referring to?
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 19/09/2021 05:40:15
Any of the quantum eraser experiments, which, given an interpretation where, say, photons have position and state while 'in flight', demonstrate that effects now are a function not only of immediate prior local state, but of distant state and of events that don't take place until long into the future.
Quantum eraser experiments can be explained without discarding causality using wave mechanics with appropriately chosen assumptions. I discuss this problem in more detail in another thread.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 19/09/2021 05:50:51
You're proposing that our universe is a simulation that is running backwards?
I'm not the one who proposed that our universe is a simulation. IMO, it would generate more unnecessary complexity, rather than offering solutions to our problems.
A simulation is a simplified model representing a real system which is presumably more complex. The simulation can help us predict the result of trial and error with fewer resources compared to doing it in real systems. The simulation doesn't have to use a computer; it just happens that computer simulations are easier to duplicate and modify as needed.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 19/09/2021 07:58:56
Musk argues for a virtual reality, not a simulation, despite whatever word he might choose for it.
He literally used the word simulation in interviews and tweets. He's likely influenced by Nick Bostrom's idea.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 20/09/2021 05:42:17
I've not read most of this thread. It's quite long, but typical of such assertions, there is never an eye given to looking for problems with the proposal. Only positive evidence is presented. This is known as the selection bias fallacy.
Address the problems. Actively seek them, else the idea will be shot down effortlessly when others do.

The background of opening this thread can be found in my opening statement.
This thread is another spinoff from my earlier thread called universal utopia. This time I try to attack the problem from another angle, which is information theory point of view.
I have started another thread related to this subject  asking about quantification of accuracy and precision. It is necessary for us to be able to make comparison among available methods to describe some aspect of objective reality, and choose the best option based on cost and benefit consideration. I thought it was already a common knowledge, but the course of discussion shows it wasn't the case. I guess I'll have to build a new theory for that. It's unfortunate that the thread has been removed, so new forum members can't explore how the discussion developed.
In my professional job, I have to deal with process control and automation, engineering and maintenance of electrical and instrumentation systems. It's important for us to explore the leading technologies and use them for our advantage to survive in the fierce industrial competition during this industrial revolution 4.0. One of the technology which is closely related to this thread is digital twin.
https://en.m.wikipedia.org/wiki/Digital_twin

Just like my other spinoff discussing about universal morality, which can be reached by expanding the groups who develop their own subjective morality to the maximum extent permitted by logic, here I also try to expand the scope of the virtualization of real world objects like digital twin in industrial sector to cover other fields as well. Hopefully it will lead us to global governance, because all conscious beings known today share the same planet. In the future the scope needs to expand even further because the exploration of other planets and solar systems is already on the way.



The problem is that without an adequately accurate, precise, and relevant virtual universe, we should expect to face many surprises in the future. They would make our plans less effective and efficient, which in turn makes it harder to achieve our goals.
 
Progress in building better AI, and eventually AGI, will bring us closer to the realization of Laplace's demon, which has already been predicted in the form of the technological singularity.
Quote
The better we can predict, the better we can prevent and pre-empt. As you can see, with neural networks, we’re moving towards a world of fewer surprises. Not zero surprises, just marginally fewer. We’re also moving toward a world of smarter agents that combine neural networks with other algorithms like reinforcement learning to attain goals.
https://pathmind.com/wiki/neural-network
Quote
In some circles, neural networks are thought of as “brute force” AI, because they start with a blank slate and hammer their way through to an accurate model. They are effective, but to some eyes inefficient in their approach to modeling, which can’t make assumptions about functional dependencies between output and input.

That said, gradient descent is not recombining every weight with every other to find the best match – its method of pathfinding shrinks the relevant weight space, and therefore the number of updates and required computation, by many orders of magnitude. Moreover, algorithms such as Hinton’s capsule networks require far fewer instances of data to converge on an accurate model; that is, present research has the potential to resolve the brute force nature of deep learning.
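The "pathfinding" described in that quote fits in a few lines: follow the local slope of the loss instead of trying every weight combination. A minimal numpy sketch fitting a line by gradient descent (all numbers are made up):
Code: [Select]
# Gradient descent in miniature: follow the loss gradient instead of
# brute-force searching every weight combination.
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(-1, 1, 200)
y = 3.0 * x + 0.5 + rng.normal(0, 0.05, 200)   # ground truth: w=3, b=0.5

w, b, lr = 0.0, 0.0, 0.1
for step in range(500):
    err = (w * x + b) - y
    w -= lr * 2 * np.mean(err * x)   # dL/dw for mean squared error
    b -= lr * 2 * np.mean(err)       # dL/db
print(w, b)   # converges near 3.0 and 0.5 without enumerating weights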
Title: Re: How close are we from building a virtual universe?
Post by: Halc on 21/09/2021 04:39:22
Musk argues for a virtual reality, not a simulation, despite whatever word he might choose for it.
He literally used the word simulation in interviews and tweets. He's likely influenced by Nick Bostrom's idea.
Yes, well both words are often used to refer to either concept, but the two concepts are quite distinct, and if the person proposing it doesn't know which is which, then they haven't really thought about it much.

The background of opening this thread can be found in my opening statement.
You seem to be proposing what I call a VR, which is an artificial sensory feed into one or more real people or other minds, each of which controls an avatar in the simulated world. You link in your OP to an article on digital twins, which is exactly this. The article even uses the word avatar.  Musk also proposes such a thing.
It's dualism. The non-VR simulation (what Bostrom proposes) is monism: nobody external is controlling any of the simulated things. They are free to do what they want instead of what some puppeteer wants. There are empirical tests for both, but not the same ones.

Anyone who proposes the VR idea but then references Bostrom's work (or v-v) doesn't really know what they're talking about. The latter, for instance, doesn't require heavy computing power. It can proceed fast or slow, be done on pencil and paper, and even be shelved for months at a time when server time is more available. A VR can't do that and must keep up with real time.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 21/09/2021 07:08:13
You can say that I have selection bias. But I can't help selecting my information sources from those who've made accurate predictions and use sensible models of objective reality to predict the future, and help in making better decisions.
Virtual presentation to the Council of State Governments on the occasion of the CSG East 2021 Annual Meeting.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 21/09/2021 07:12:35
You seem to be proposing what I call a VR,
You can even read it in the title. We can say that the virtual universe is the result of integrating many VR systems. Lately we can see/read/hear/watch the news about the Metaverse, which will combine several VR systems into one integrated system: gaming, working/collaboration, education, entertainment, competition, advertisements, even financial systems.

IMO, objects in simulations don't represent any specific/particular instance of objects in reality, though they may have some resemblances. On the other hand, some objects in a VR must be avatars representing particular real objects. So, your thinking is in line with mine.

But the differences can be less obvious in some circumstances. In training mode, AlphaGo runs as a simulation, with Go pieces moving around without representing any particular pieces in reality. But in the tournament against Lee Sedol, it became a VR, where some of the pieces had to represent Lee's pieces in the real world.
Title: Re: How close are we from building a virtual universe?
Post by: BilboGrabbins on 22/09/2021 16:52:16
If there is a monopole, then maybe in a hundred
Title: Re: How close are we from building a virtual universe?
Post by: Halc on 23/09/2021 01:03:35
You can say that I have selection bias.
And then you select another video in support instead of one identifying the issues.

Nobody has put together a VR where the guy doing it is unaware it has been done, and is unable to exit it if he wants.
What if he has to use the restroom (in reality)? Nobody's been in one longer than they can hold their bladder.
Sure, you can jam in a catheter, but how did you get in this virtual reality in the first place without knowing it?  Are all the people you meet virtually controlled avatars like yourself, or are most of them NPC's or what? What about dogs or birds or gnats? What if I want to be one of those?

In training mode, AlphaGo runs as a simulation, with Go pieces moving around without representing any particular pieces in reality. But in the tournament against Lee Sedol, it became a VR, where some of the pieces had to represent Lee's pieces in the real world.
A go-playing computer (AI or not) is not a VR. I suppose it could have a VR interface to let you experience playing the game with a physical-looking character, but to play an external entity, all it needs is a USB cord.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 24/09/2021 04:33:08
Here's more progress we've made so far.
Quote
https://pub.towardsai.net/facebooks-parlai-is-a-framework-for-building-human-like-conversational-agents-99711c351fc9
Conversational interfaces powered by natural language processing (NLP) have been at the center of the artificial intelligence (AI) revolution of the last few years. When we see the advancements in digital assistants such as Siri or Alexa, we might be tempted to think that conversational applications are a solved problem. That couldn’t be further from the truth. The current generation of conversational interfaces is far from simulating human-like dialogues, and building advanced NLP systems remains an incredibly challenging task. To address that challenge, Facebook open sourced ParlAI, a platform for advancing the evaluation of NLP systems. Recently, ParlAI got an update with new models, datasets, and a fun bot to play with, which I would like to cover in this two-part article. The first part of the article will introduce the core concepts behind ParlAI while the second will focus on some of the newest capabilities targeted to advance dialogue research.
...
The ultimate goal of NLP is to enable interactions with chatbots that mimic the dynamics of human conversations. For that to happen, we need systems that can go beyond understanding a single sentence or taking discrete actions. Advanced conversational applications require understanding long-form sentences in specific contexts while balancing human-like aspects such as specificity and empathy.
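ParlAI structures everything around agents that exchange message dictionaries through an observe/act cycle. The following is a stand-alone sketch of that pattern only, not ParlAI's actual classes or API; EchoAgent is a made-up placeholder where a real system would put an NLP model.
Code: [Select]
# Stand-alone sketch of the observe/act agent pattern that frameworks
# like ParlAI are built around. EchoAgent is a trivial placeholder.
class EchoAgent:
    def __init__(self, name):
        self.name = name
        self.last_msg = None

    def observe(self, msg):
        self.last_msg = msg            # store the partner's message

    def act(self):
        heard = self.last_msg["text"] if self.last_msg else "hello"
        return {"id": self.name, "text": f"you said: {heard}"}

a, b = EchoAgent("A"), EchoAgent("B")
msg = {"id": "B", "text": "hi there"}
for _ in range(2):                     # two turns of dialogue
    a.observe(msg)
    msg = a.act()
    print(msg)
    b.observe(msg)
    msg = b.act()
    print(msg)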

Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 24/09/2021 04:33:57
And then you select another video in support instead of one identifying the issues.
What's your issue with that?
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 24/09/2021 04:37:24
Sure, you can jam in a catheter, but how did you get in this virtual reality in the first place without knowing it?
Perhaps kidnapped when asleep, with the use of some anesthesia.
The biological agent could be just an organoid brain, never really having had a complete body in the first place.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 24/09/2021 07:55:19
A go-playing computer (AI or not) is not a VR. I suppose it could have a VR interface to let you experience playing the game with a physical-looking character, but to play an external entity, all it needs is a USB cord.
The transition from simulation to VR is not a single step function. It's more gradual, like a greyscale.
Let's start with a system which you can confidently call a VR. Then reduce its resolution in visualization, such as the pixel count in the viewing window, or the box size as in Minecraft. How low can we go until it stops being a VR?
Another route to the minimum requirement for VR is reducing the degrees of freedom that the external agent has to change the virtual objects. In a 4D theater, the external agents have no control over the virtual objects. Other systems have various levels of control.
Title: Re: How close are we from building a virtual universe?
Post by: Halc on 24/09/2021 17:51:27
Most simulations work by remembering the state of everything and then computing some future state at some small increment of time. This means choosing a quantum interpretation that has actual state, but such interpretations only work with reverse causality, meaning that you might have simulated the last billion years of physics, but some decision made just now has changed what happened a billion years ago, invalidating everything that has happened since (and yes, they've done experiments that apparently reach at least that far back).
Simulations can usually also work backward.
But virtual realities cannot.
So such reverse-causality experiments seem to be a decent falsification of the VR hypothesis.

Perhaps kidnapped when asleep, and use some anesthesia.
If I was suddenly drugged and woke up in a game, I think I'd notice. To the people already in the game, a new person suddenly appears out of nowhere. So to avoid that, you'd have to do them all at once.
Billions of people exit a world with a capability of initiating such a VR (and kidnapping billions of people at once) and involuntarily enter a world where that capability isn't there at all. So it's not a reality in any way similar to the one they were in a moment ago. Yea, you'd notice that. Think it through before suggesting something like that.

Quote
The biological agent could be just an organoid brain, never really having had a complete body in the first place.
All these articles that you reference (digital twin, Musk's assertions, etc.) are claims that it is a world like ours, humans doing it to other humans, not disembodied minds put into non-native virtual bodies.
If you deny those proposals, then it becomes a straight up BIV scenario, subject to a god-of-the-gaps fallacy. Invent a higher realm beyond empirical investigation, and then hand-wave all the inconsistencies to that layer saying they're dealt with there. It's a cop out for actual analysis. The arguments against it are same as the BIV counterarguments. Most of the models collapse to solipsism.

How is any of this any different from basic Chalmers dualism then? Anything with an experiencer is conscious. Anything not is a philosophical zombie, or P-zombie, which is the equivalent to an NPC* in a virtual reality. Chalmers doesn't go so far as to claim that God is so weak that he needs a big computer to provide his virtual experience to all the minds he puts into it.

* I notice that you didn't respond to my NPC questions in my prior post. NPC is a standard video game term.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 25/09/2021 06:14:35
But virtual realities cannot.
So such reverse-causality experiments seem to be a decent falsification of the VR hypothesis.
What's the VR hypothesis?
A simulation can run backward, slowed down, or fast-forward because every object in it is under its control. A VR can't access all the parameters of the outside world. Even if a VR is advanced enough to manipulate a brain organoid using electrochemical signals, someone outside can simply crash it, which destroys the VR's plan.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 25/09/2021 06:16:33
If I was suddenly drugged and woke up in a game, I think I'd notice. To the people already in the game, a new person suddenly appears out of nowhere. So to avoid that, you'd have to do them all at once.
Do you always realize when you're dreaming?
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 25/09/2021 06:36:01
Are all the people you meet virtually controlled avatars like yourself, or are most of them NPC's or what? What about dogs or birds or gnats? What if I want to be one of those?
It looks like you forget that I'm not suggesting that we are currently living in a simulation or a VR.
If the VR is good enough, we can't distinguish between NPCs and avatars unless we can go outside the VR and meet them in person.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 25/09/2021 07:00:32
All these articles that you reference (digital twin, Musk's assertions, etc.) are claims that it is a world like ours, humans doing it to other humans, not disembodied minds put into non-native virtual bodies.
The digital twin is currently a real world commercial product. Many chemical companies are already using it.
It's totally different from Musk's assertion that we're living in a simulation. He may not be serious about it, considering his efforts to make humans multiplanetary. What's the point if we're merely a simulation?
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 25/09/2021 09:15:49
The transition from simulation to VR is not a single step function. It's more gradual, like a greyscale.
Let's start with a system which you can confidently call a VR. Then reduce its resolution in visualization, such as the pixel count in the viewing window, or the box size as in Minecraft. How low can we go until it stops being a VR?
Another route to the minimum requirement for VR is reducing the degrees of freedom that the external agent has to change the virtual objects. In a 4D theater, the external agents have no control over the virtual objects. Other systems have various levels of control.
Can it still be called VR if it lacks the sensations of touch, heat, taste, and smell? What if it excludes the effects of ultraviolet and infrared light?
Title: Re: How close are we from building a virtual universe?
Post by: Halc on 27/09/2021 02:26:56
It seems I’m missing a lot of these posts.  Not sure why.
It looks like you forget that I'm not suggesting that we are currently living in a simulation or a VR.
Oh OK. Many of the people you reference are suggesting exactly that. My counterarguments need to be addressed by them, but the articles I see only seem to seek attention for the idea instead of identifying a plausible model that holds up to scrutiny.

Quote
If the VR is good enough, we can't distinguish between NPCs and avatars unless we can go outside the VR and meet them in person.
There’s a test. Decisions made in my character’s brain are suppressed so my will can override it. So detect that: There is a total disconnect between brain and voluntary action, whereas the NPC has a functional connection between the two.

Concerning how one enters the VR:
If I was suddenly drugged and woke up in a game, I think I'd notice. To the people already in the game, a new person suddenly appears out of nowhere. So to avoid that, you'd have to do them all at once.
Do you always realize when you're dreaming?
Pretty irrelevant. Reality never feels like a dream. Dreams don’t rewrite all my memories. If the machine could do that, no experience feed would be necessary. All it would need to do is put you in a state of having remembered them. That solves the bladder issue too. It boils down to last-Tuesdayism then. There’s no proof that the universe wasn’t created last Tuesday, or 3 seconds ago for that matter.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 27/09/2021 03:21:42
Reality never feels like a dream.
Some dreams can feel like reality.
In some conditions, reality can feel like a dream, like when we're under the influence of psychedelics. Lack of sleep can also do that.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 27/09/2021 03:26:27
There’s no proof that the universe wasn’t created last Tuesday, or 3 seconds ago for that matter.
We can rely on Occam's razor for practical matters. What do we gain by believing that the universe was created last Tuesday?
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 27/09/2021 08:19:33
Question is, what do you learn by attempting to falsify that the universe was created last Tuesday? If you shorten it to 'just now', it boils down to a Boltzmann brain. Just as hard to falsify that one.
Not much. It's just impractical and wastes resources without apparent benefit, so it would be better to just ignore it.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 07/10/2021 07:17:00
https://towardsdatascience.com/gpt-4-will-have-100-trillion-parameters-500x-the-size-of-gpt-3-582b98d82253?gi=98c60e44681b
GPT-4 Will Have 100 Trillion Parameters — 500x the Size of GPT-3
Are there any limits to large neural networks?

Quote
OpenAI was born to tackle the challenge of achieving artificial general intelligence (AGI) — an AI capable of doing anything a human can do.
Such a technology would change the world as we know it. It could benefit us all if used adequately but could become the most devastating weapon in the wrong hands. That’s why OpenAI took over this quest. To ensure it’d benefit everyone evenly: “Our goal is to advance digital intelligence in the way that is most likely to benefit humanity as a whole.”
However, the magnitude of this problem makes it arguably the single biggest scientific enterprise humanity has put its hands upon. Despite all the advances in computer science and artificial intelligence, no one knows how to solve it or when it’ll happen.
Some argue deep learning isn’t enough to achieve AGI. Stuart Russell, a computer science professor at Berkeley and AI pioneer, argues that “focusing on raw computing power misses the point entirely […] We don’t know how to make a machine really intelligent — even if it were the size of the universe.”
OpenAI, in contrast, is confident that large neural networks fed on large datasets and trained on huge computers are the best way towards AGI. Greg Brockman, OpenAI’s CTO, said in an interview for the Financial Times: “We think the most benefits will go to whoever has the biggest computer.”
And that’s what they did. They started training larger and larger models to awaken the hidden power within deep learning. The first non-subtle steps in this direction were the release of GPT and GPT-2. These large language models would set the groundwork for the star of the show: GPT-3. A language model 100 times larger than GPT-2, at 175 billion parameters.
GPT-3 was the largest neural network ever created at the time — and remains the largest dense neural net. Its language expertise and its innumerable capabilities were a surprise for most. And although some experts remained skeptical, large language models already felt strangely human. It was a huge leap forward for OpenAI researchers to reinforce their beliefs and convince us that AGI is a problem for deep learning.
Quote
Unlike GPT-3, it probably won’t be just a language model. Ilya Sutskever, the Chief Scientist at OpenAI, hinted about this when he wrote about multimodality in December 2020:
“In 2021, language models will start to become aware of the visual world. Text alone can express a great deal of information about the world, but it is incomplete, because we live in a visual world as well.”
We already saw some of this with DALL·E, a smaller version of GPT-3 (12 billion parameters), trained specifically on text-image pairs. OpenAI said then that “manipulating visual concepts through language is now within reach.”
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 07/10/2021 13:12:05
https://psyche.co/ideas/the-brain-has-a-team-of-conductors-orchestrating-consciousness
Quote
This new framework points to a view of the brain as a fusion of the local and the global, arranged in a hierarchical manner. In this context, some researchers including Marsel Mesulam have suggested that the human brain is in fact hierarchically organised, a view that fits well with our orchestra metaphor. Yet, given the distributed nature of the brain hierarchy, there is unlikely to be just a single ‘conductor’. Instead, in 1988 the psychologist Bernard Baars proposed the concept of a ‘global workspace’, where information is integrated in a small group of brain regions (or ‘conductors’) before being broadcast to the whole brain.
Quote
This processing becomes ever more complex; higher up in the hierarchy, brain regions integrate all the small segments that make up an object, such as a human face. In his book The Man Who Mistook his Wife for a Hat (1985), Oliver Sacks wrote about what happens if you have a stroke or lesion to this brain area: namely, you’re no longer able to recognise faces.

Higher still in the hierarchical processing of environmental information there’s more integration, fusing different ongoing sensory modalities (such as sight and sound) with previous memories. This processing is further influenced by reward and expectations and by any surprising deviations from previous experiences. In other words, at the highest level of the hierarchy, the ‘global workspace’ must somehow integrate information from perceptual, long-term memory and evaluative and attentional systems to orchestrate goal-directed behaviour.

The information flow within this hierarchy is highly dynamic; not just bottom-up but also top-down. In fact, recurrent interactions shape the functional processing underlying cognition and behaviour. Much of this information flow follows the underlying anatomy in the structural connections between brain regions but, equally, the information flow is largely unconstrained by this anatomical wiring.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 10/10/2021 07:31:52
https://jrodthoughts.medium.com/what-is-meta-reward-learning-4badbf2c95a8
Quote
Reinforcement learning has been at the center of some of the biggest artificial intelligence (AI) breakthroughs of the last five years. In mastering games like Go, Quake III or StarCraft, reinforcement learning models demonstrated that they can surpass human performance and create unique long-term strategies never explored before. Part of the magic of reinforcement learning relies on regularly rewarding the agents for actions that lead to a better outcome. That model works great in dense reward environments like games, in which almost every action corresponds to specific feedback, but what happens if that feedback is not available? In reinforcement learning this is known as a sparse rewards environment and, unfortunately, it’s a representation of most real-world scenarios. A couple of years ago, researchers from Google published a new paper proposing a technique for achieving generalization with reinforcement learning agents that operate in sparse reward environments.

Quote
The overall challenge of reinforcement learning in sparse reward environments lies in achieving good generalization with limited feedback. More specifically, the process of achieving robust generalization in sparse reward environments can be summarized in two main challenges:
1) The Exploration — Exploitation Balance: An agent that operates using sparse rewards needs to balance when to take actions that lead to an immediate outcome versus when to explore the environment further in order to gather better intelligence. The exploration-exploitation dilemma is the fundamental balance that guides reinforcement learning agents.
2) Processing Unspecified Rewards: The absence of rewards in an environment is as difficult to manage as the surfacing of unspecified rewards. In sparse reward scenarios, agents are not always trained on specific types of rewards. After receiving a new feedback signal, a reinforcement learning agent needs to assess whether it constitutes an indication of success or failure, which is not always trivial.
Quote
Introducing MeRL
Meta Rewards Learning (MeRL) is Google’s proposed method for teaching reinforcement learning agents to generalize in environments with sparse rewards. The key contribution of MeRL is to effectively process unspecified rewards without affecting the agent’s generalization performance. In our maze game example, an agent might accidentally arrive at a solution but, if it learns to perform spurious actions during training, it is likely to fail when provided with unseen instructions. To address this challenge, MeRL optimizes a more refined auxiliary reward function, which can differentiate between accidental and purposeful success based on features of action trajectories. The auxiliary reward is optimized by maximizing the trained agent’s performance on a hold-out validation set via meta learning.

I'd like to share this great article here. It contains important information which is also relevant to my other threads about universal morality and the terminal goal. I decided to post it here because it emphasizes the technical side.
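The exploration-exploitation balance quoted above can be illustrated with a toy epsilon-greedy bandit. This is a minimal sketch of my own in Python, not code from the MeRL paper; the arm probabilities and epsilon value are arbitrary.
Code:
import random

# Toy epsilon-greedy bandit: balance exploring unknown arms against
# exploiting the arm currently believed to be best.
true_means = [0.2, 0.5, 0.8]          # success probabilities, unknown to the agent
estimates = [0.0] * len(true_means)   # the agent's running estimate per arm
counts = [0] * len(true_means)
epsilon = 0.1                         # fraction of steps spent exploring

for _ in range(10000):
    if random.random() < epsilon:
        arm = random.randrange(len(true_means))   # explore a random arm
    else:
        arm = estimates.index(max(estimates))     # exploit the current best
    reward = 1.0 if random.random() < true_means[arm] else 0.0
    counts[arm] += 1
    estimates[arm] += (reward - estimates[arm]) / counts[arm]  # running mean

print(counts)   # pulls should concentrate on the best arm (index 2)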
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 17/10/2021 08:46:35
Quote
https://venturebeat.com/2021/10/12/deepmind-is-developing-one-algorithm-to-rule-them-all/

The birth of neural algorithmic reasoning
Charles Blundell and Petar Veličković both hold senior research positions at DeepMind. They share a background in classical computer science and a passion for applied innovation. When Veličković met Blundell at DeepMind, a line of research known as Neural Algorithmic Reasoning (NAR) was born, named after the homonymous position paper recently published by the duo.

The key thesis is that algorithms possess fundamentally different qualities when compared to deep learning methods — something Blundell and Veličković elaborated upon in their introduction of NAR. This suggests that if deep learning methods were better able to mimic algorithms, then generalization of the sort seen with algorithms would become possible with deep learning.

The article shows how close we are to building a virtual version of our own source of consciousness.

Quote
The ultimate goal is to build an observatory that can integrate data from all these projects into one grand, unified picture. Four years ago, with that in mind, researchers at the big-brain projects got together to create the International Brain Initiative, a loose organization with the principal task of helping neuroscientists to find ways to pool and analyse their data.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 17/10/2021 12:39:18
https://www.nature.com/articles/d41586-021-02661-w
How the world’s biggest brain maps could transform neuroscience
Quote

Scientists around the world are working together to catalogue and map cells in the brain. What have these huge projects revealed about how it works?


Imagine looking at Earth from space and being able to listen in on what individuals are saying to each other. That’s about how challenging it is to understand how the brain works.

From the organ’s wrinkled surface, zoom in a million-fold and you’ll see a kaleidoscope of cells of different shapes and sizes, which branch off and reach out to each other. Zoom in a further 100,000 times and you’ll see the cells’ inner workings — the tiny structures in each one, the points of contact between them and the long-distance connections between brain areas.

Scientists have made maps such as these for the worm1 and fly2 brains, and for tiny parts of the mouse3 and human4 brains. But those charts are just the start. To truly understand how the brain works, neuroscientists also need to know how each of the roughly 1,000 types of cell thought to exist in the brain speak to each other in their different electrical dialects. With that kind of complete, finely contoured map, they could really begin to explain the networks that drive how we think and behave.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 22/10/2021 13:37:17
Quote
Last year, the Max Planck Institute for Intelligent Systems organized the Real Robot Challenge, a competition that challenged academic labs to come up with solutions to the problem of repositioning and reorienting a cube using a low-cost robotic hand. The teams participating in the challenge were asked to solve a series of object manipulation problems with varying difficulty levels.
https://techxplore.com/news/2021-10-robotic-dexterous-skills-simulations-real.html

Quote
"Our objective was to use learning-based methods to solve the problem introduced in last year's Real Robot Challenge in a low-cost manner," Animesh Garg, one of the researchers who carried out the study, told TechXplore. "We are particularly inspired by previous work on OpenAI's Dactyl system, which showed that it is possible to use model free Reinforcement Learning in combination with Domain Randomization to solve complex manipulation tasks."

Quote
"The process we followed consists of four main steps: setting up the environment in physics simulation, choosing the correct parameterization for a problem specification, learning a robust policy and deploying our approach on a real robot," Garg explained. "First, we created a simulation environment corresponding to the real-world scenario we were trying to solve."

 It shows that having a relevant, accurate, and precise virtual universe can help improve the efficiency of our efforts to achieve goals.
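For a flavor of the domain randomization step described in the quotes above, here is a minimal Python sketch. All parameter names and ranges are hypothetical, and the commented-out calls stand in for a real simulator API.
Code:
import random

def sample_sim_params():
    # Each training episode gets freshly randomized physics, so the learned
    # policy must cope with the whole range -- hopefully including reality.
    return {
        "cube_mass":      random.uniform(0.05, 0.25),   # kg
        "table_friction": random.uniform(0.4, 1.2),
        "motor_latency":  random.uniform(0.00, 0.04),   # seconds
    }

for episode in range(3):
    params = sample_sim_params()
    # simulator = make_simulator(**params)      # hypothetical environment factory
    # run_training_episode(policy, simulator)   # hypothetical RL training step
    print(f"episode {episode}: {params}")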
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 26/10/2021 14:55:34
An update of current progress.

Google's Gated Multi-Layer Perceptron Outperforms Transformers Using Fewer Parameters
https://www.infoq.com/news/2021/10/google-mlp-vision-language/
Quote
Researchers at Google Brain have announced Gated Multi-Layer Perceptron (gMLP), a deep-learning model that contains only basic multi-layer perceptrons. Using fewer parameters, gMLP outperforms Transformer models on natural-language processing (NLP) tasks and achieves comparable accuracy on computer vision (CV) tasks.

The model and experiments were described in a paper published on arXiv. To investigate the necessity of the Transformer's self-attention mechanism, the team designed gMLP using only basic MLP layers combined with gating, then compared its performance on vision and language tasks to previous Transformer implementations. On the ImageNet image classification task, gMLP achieves an accuracy of 81.6, comparable to Vision Transformers (ViT) at 81.8, while using fewer parameters and FLOPs. For NLP tasks, gMLP achieves a better pre-training perplexity compared with BERT, and a higher F1 score on the SQuAD benchmark: 85.4 compared to BERT's 81.8, while using fewer parameters.
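A rough numpy sketch of the spatial gating unit at the core of gMLP, based on the paper's description. The dimensions and near-zero gate initialization here are illustrative, and normalization and residual connections are omitted.
Code:
import numpy as np

def gelu(x):
    return 0.5 * x * (1 + np.tanh(np.sqrt(2 / np.pi) * (x + 0.044715 * x**3)))

def gmlp_block(x, U, W_spatial, b_spatial, V):
    """One gMLP block: channel expansion -> spatial gating -> channel projection."""
    z = gelu(x @ U)                    # (seq_len, d_ffn)
    z1, z2 = np.split(z, 2, axis=-1)   # split channels for gating
    gate = W_spatial @ z2 + b_spatial  # token mixing along the sequence axis
    return (z1 * gate) @ V             # gated values projected back to d_model

seq_len, d_model, d_ffn = 8, 16, 32
rng = np.random.default_rng(0)
x = rng.normal(size=(seq_len, d_model))
U = rng.normal(scale=0.1, size=(d_model, d_ffn))
V = rng.normal(scale=0.1, size=(d_ffn // 2, d_model))
W_spatial = rng.normal(scale=0.01, size=(seq_len, seq_len))  # near zero at init
b_spatial = np.ones((seq_len, 1))                            # so the gate starts near 1
print(gmlp_block(x, U, W_spatial, b_spatial, V).shape)       # (8, 16)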
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 29/10/2021 07:39:48

Quote
Tesla has redefined how numbers are formatted for computers--especially for deep neural network training! In a recent white paper, Tesla proposed the CFloat format as a standard. What is CFloat? How are numbers stored in a computer? And what does all this have to do with bandwidth and memory and efficiency? Let's go into full nerd mode and find out!

Here's an example of a real-world application of the information specifications: relevance, accuracy, and precision. To achieve efficiency, these parameters need to be balanced.
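As a concrete illustration of how numbers are stored, this short Python sketch unpacks the sign, exponent, and mantissa of an IEEE-754 float32. It shows the standard layout only; Tesla's configurable CFloat variant is not reproduced here.
Code:
import struct

def float32_fields(x):
    """Return (sign, exponent, mantissa) of x as stored in IEEE-754 float32."""
    bits = struct.unpack(">I", struct.pack(">f", x))[0]
    sign = bits >> 31                 # 1 bit
    exponent = (bits >> 23) & 0xFF    # 8 bits, biased by 127
    mantissa = bits & 0x7FFFFF        # 23 bits of fraction
    return sign, exponent, mantissa

print(float32_fields(3.14))   # (0, 128, 4781507)
# Halving the bits per number (e.g. 32 -> 16) halves memory and bandwidth,
# at the cost of precision and/or range -- the balance the video discusses.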
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 01/11/2021 06:12:02
China - Surveillance state or way of the future?
Quote
China is building a huge digital surveillance system. The state collects massive amounts of data from willing citizens: the benefits are practical, and people who play by the rules are rewarded.

Critics call it "the most ambitious Orwellian project in human history." China's digital surveillance system involves massive amounts of data being gathered by the state. In the so-called "brain" of Shanghai, for example, authorities have an eye on everything. On huge screens, they can switch to any of the approximately one million cameras, to find out who’s falling asleep behind the wheel, or littering, or not following Coronavirus regulations. "We want people to feel good here, to feel that the city is very safe," says Sheng Dandan, who helped design the "brain." Surveys suggest that most Chinese citizens are inclined to see benefits as opposed to risks: if algorithms can identify every citizen by their face, speech and even the way they walk, those breaking the law or behaving badly will have no chance. It’s incredibly convenient: a smartphone can be used to accomplish just about any task, and playing by the rules leads to online discounts thanks to a social rating system.

That's what makes Big Data so attractive, and not just in China. But where does the required data come from? Who owns it, and who is allowed to use it? The choice facing the Western world is whether to engage with such technology at the expense of social values, or ignore it, allowing others around the world to set the rules.
We need to determine and prioritize which social values are the most important, and which are expendable? It requires identification of common terminal goals. The universal terminal goal is the most common of them all.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 07/11/2021 05:37:15
https://scitechdaily.com/surprisingly-smart-artificial-intelligence-sheds-light-on-how-the-brain-processes-language/
Quote
They found that the best-performing next-word prediction models had activity patterns that very closely resembled those seen in the human brain. Activity in those same models was also highly correlated with measures of human behavior, such as how fast people were able to read the text.

“We found that the models that predict the neural responses well also tend to best predict human behavior responses, in the form of reading times. And then both of these are explained by the model performance on next-word prediction. This triangle really connects everything together,” Schrimpf says.
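The next-word prediction performance the study relies on is easy to probe directly. Here is a sketch assuming the HuggingFace transformers package and the public GPT-2 weights; per-token surprisal (negative log-probability) is the quantity typically correlated with reading times.
Code:
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tok = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

ids = tok("The cat sat on the mat", return_tensors="pt").input_ids
with torch.no_grad():
    logits = model(ids).logits

# Log-probability of each token given its left context.
logp = torch.log_softmax(logits[0, :-1], dim=-1)
token_logp = logp[torch.arange(ids.size(1) - 1), ids[0, 1:]]
for tok_str, lp in zip(tok.convert_ids_to_tokens(ids[0, 1:].tolist()), token_logp):
    print(f"{tok_str!r}: surprisal = {-lp.item():.2f} nats")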
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 07/11/2021 10:30:33
http://email.mg.lesserwrong.com/c/eJw9jtsKgzAQRL8mvhk2G2P0IQ-F4n_k5qVVU5JY-_nVCoVhl1lmD-OU7rGteDEph9KDrWxpLNjScWhKaRpZSnRgmNAoeiQVzD4lH_cY1oHasBSjsjXq2rem6i3X2La84QZAAmus9EKLYlZjzq9E-I1gd2jfd3pi_pDjFrZMeLfFmfD7lUZx5sX5cQwdP9ObhjhczqTfRsYYBUAGRVSPLWXq9DrR8DyKDoue5pP-BRrdRE8

Quote
Reinforcement learning has achieved great success in many applications. However, sample efficiency remains a key challenge, with prominent methods requiring millions (or even billions) of environment steps to train. Recently, there has been significant progress in sample efficient image-based RL algorithms; however, consistent human-level performance on the Atari game benchmark remains an elusive goal.

We propose a sample efficient model-based visual RL algorithm built on MuZero, which we name EfficientZero. Our method achieves 190.4% mean human performance and 116.0% median performance on the Atari 100k benchmark with only two hours of real-time game experience and outperforms the state SAC in some tasks on the DMControl 100k benchmark. This is the first time an algorithm achieves super-human performance on Atari games with such little data. EfficientZero's performance is also close to DQN's performance at 200 million frames while we consume 500 times less data.

EfficientZero's low sample complexity and high performance can bring RL closer to real-world applicability. We implement our algorithm in an easy-to-understand manner and it is available at this https URL. We hope it will accelerate the research of MCTS-based RL algorithms in the wider community.

This work is supported by the Ministry of Science and Technology of the People’s Republic of China, the 2030 Innovation Megaprojects “Program on New Generation Artificial Intelligence” (Grant No. 2021AAA0150000).
The last innovation humans need to make is an AI that's more effective and efficient at learning new things. We are getting closer to that point.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 13/11/2021 22:04:31
Quote
https://pub.towardsai.net/openais-approach-to-solve-math-word-problems-b69ed6cc90de
OpenAI’s Approach to Solve Math Word Problems
A new research paper and dataset look to make progress in one of the toughest areas of deep learning.

Mathematical reasoning has long been considered one of the cornerstones of human cognition and one of the main bars to measure the “intelligence” of language models. Take the following problem:
“Anthony had 50 pencils. He gave 1/2 of his pencils to Brandon, and he gave 3/5 of the remaining pencils to Charlie. He kept the remaining pencils. How many pencils did Anthony keep?”
Yes, the solution is 10 pencils but that’s not the point 😉. Solving this problem does not only entail reasoning through the text but also orchestrating a sequence of steps to arrive at the solution. This dependency on language interpretability as well as the vulnerability to errors in the sequence of steps represents the two major challenges when building ML models that can solve math word problems. Recently, OpenAI published new research proposing an interesting method to tackle this type of problem.
It's another breakthrough towards the emergence of AGI.
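For reference, the chain of steps the article says a model must orchestrate is easy to spell out in code (my own illustration):
Code:
pencils = 50
to_brandon = pencils // 2            # gave 1/2 of 50 -> 25
remaining = pencils - to_brandon     # 25 left
to_charlie = remaining * 3 // 5      # gave 3/5 of 25 -> 15
kept = remaining - to_charlie
print(kept)                          # 10, matching the article's answer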
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 15/11/2021 13:29:01
Microsoft Metaverse vs Facebook Metaverse (Watch the reveals)
Quote
Microsoft's Satya Nadella recently showcased his company's foray into the Metaverse at its Ignite conference. This comes on the heels of Facebook's recent Connect conference when Mark Zuckerberg announced he is changing its name to Meta, short for Metaverse.

See how both CEOs are moving full steam ahead with VR technologies that they hope will make it possible to collaborate easier in this digital space.

I think that they put too much emphasis on users' feelings and emotions instead of necessities and functionality, not to mention efficiency. But those are arguably among the most reliable ways to generate revenue and make people voluntarily reach deeper into their pockets.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 19/11/2021 03:57:54
If Artificial General Intelligence ever reaches Singularity...

Could then Humans leave the roles of Creating Social Laws, Upholding the Constitutional Values & seeing to it that they are being followed...

In short, could a Super A.I. then be a Leader, Judge & Cop? What they do is basically collect and process information to make decisions. Cops working in the field also have some physical tasks to do, but that's not really a big problem for AI.

Or would even AI learn the magic trick of corruption & start accepting rabbity bribes?
Creating proper Social Laws and Constitutional Values is an instrumental goal to help achieve the terminal goal. Misidentification of the terminal goal, inaccurate perception of objective reality, or inaccurate cause-and-effect relationships among different things can bring unintended results.

In short, what could stop a Super A.I. from being a Leader, Judge & Cop?

What makes humans possessing power learn the magic trick of corruption & start accepting rabbity bribes? IMO, it's the desire to get pleasure and avoid pain, which are meta-rewards that naturally emerged from the evolutionary process. To prevent AI from going down the same path, it must be assigned the appropriate terminal goal and meta-rewards from the moment it is first designed.

I decided to continue the topic here to avoid hijacking someone else's thread. Let's hear what the experts think and decide which side we agree more.

Quote
https://www.technologyreview.com/2020/03/27/950247/ai-debate-gary-marcus-danny-lange/
A debate between AI experts shows a battle over the technology’s future
The field is in disagreement about where it should go and why.

Since the 1950s, artificial intelligence has repeatedly overpromised and underdelivered. While recent years have seen incredible leaps thanks to deep learning, AI today is still narrow: it’s fragile in the face of attacks, can’t generalize to adapt to changing environments, and is riddled with bias. All these challenges make the technology difficult to trust and limit its potential to benefit society.

On March 26 at MIT Technology Review’s annual EmTech Digital event, two prominent figures in AI took to the virtual stage to debate how the field might overcome these issues.

Gary Marcus, professor emeritus at NYU and the founder and CEO of Robust.AI, is a well-known critic of deep learning. In his book Rebooting AI, published last year, he argued that AI’s shortcomings are inherent to the technique. Researchers must therefore look beyond deep learning, he argues, and combine it with classical, or symbolic, AI—systems that encode knowledge and are capable of reasoning.

Danny Lange, the vice president of AI and machine learning at Unity, sits squarely in the deep-learning camp. He built his career on the technique’s promise and potential, having served as the head of machine learning at Uber, the general manager of Amazon Machine Learning, and a product lead at Microsoft focused on large-scale machine learning. At Unity, he now helps labs like DeepMind and OpenAI construct virtual training environments that teach their algorithms a sense of the world.

Danny, do you agree that we should be looking at these hybrid models?

Danny Lange: No, I do not agree. The issue I have with symbolic AI is its attempt to try to mimic the human brain in a very deep sense. It reminds me a bit of, you know, in the 18th century if you wanted faster transportation, you would work on building a mechanical horse rather than inventing the combustion engine. So I’m very skeptical of trying to solve AI by trying to mimic the human brain.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 23/11/2021 05:37:41
System Dynamics: Systems Thinking and Modeling for a Complex World
Quote
This one-day workshop explores systems interactions in the real world, providing an introduction to the field of system dynamics. It also serves as a preview of the more in-depth coverage available in courses offered at MIT Sloan such as 15.871 Introduction to System Dynamics, 15.872 System Dynamics II, and 15.873 System Dynamics for Business and Policy.
Building a virtual universe is essentially unifying interrelated models to represent the complex world so we can make correct decisions to achieve our goals effectively and efficiently.
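To make that concrete, here is a toy stock-and-flow simulation in Python of the kind system dynamics courses begin with. The model and all its constants are my own illustrative choices, not material from the MIT workshop.
Code:
# One stock (population) with an inflow (births) and an outflow (deaths)
# whose rate rises with crowding -- a minimal feedback loop.
dt = 0.25                  # years per simulation step
population = 100.0         # the stock
birth_rate = 0.05
base_death_rate = 0.02
crowding_scale = 1000.0

for step in range(int(40 / dt)):                     # simulate 40 years
    births = birth_rate * population
    deaths = base_death_rate * population * (1 + population / crowding_scale)
    population += dt * (births - deaths)             # integrate the stock
    if step % int(5 / dt) == 0:
        print(f"year {step * dt:4.1f}: population = {population:8.1f}")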
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 25/11/2021 08:34:20
Github Copilot: Good or Bad?
It seems that coding and programming, the interface between humans and machines, will become accessible to more people.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 26/11/2021 12:16:59
Quote
https://www.business2community.com/online-marketing/googles-latest-ai-breakthrough-mum-02414144

In May 2021, Google unveiled a new search technology called Multitask Unified Model (MUM) at the Google I/O virtual event. This coincided with an article published on The Keyword, written by Vice President of Search, Pandu Nayak, detailing Google’s latest AI breakthrough.

In essence, MUM is an evolution of the same technology behind BERT but Google says the new model is 1,000 times more powerful than its predecessor. According to Pandu Nayak, MUM is designed to solve one of the biggest problems users face with search: “having to type out many queries and perform many searches to get the answer you need.”
Quote
Here’s how Pandu Nayak describes MUM in his announcement:

“Like BERT, MUM is built on a Transformer architecture, but it’s 1,000 times more powerful. MUM not only understands language, but also generates it. It’s trained across 75 different languages and many different tasks at once, allowing it to develop a more comprehensive understanding of information and world knowledge than previous models.”
We are witnessing progress toward machines that understand humans better than humans understand themselves.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 29/11/2021 01:08:29
Quote

Artificial intelligence powers protein-folding predictions
Deep-learning algorithms such as AlphaFold2 and RoseTTAFold can now predict a protein’s 3D shape from its linear sequence — a huge boon to structural biologists.

https://www.nature.com/articles/d41586-021-03499-y

Protein designers could also see benefits. Starting from scratch — called de novo protein design — involves models that are generated computationally but tested in the lab. “Now you can just immediately use AlphaFold2 to fold it,” says Zhang. These results can even be used to retrain the design algorithms to produce more-accurate results in future experiments.
It's a new tool to design biology from first principles instead of trial and error, which will save resources as well as avoid ethical problems.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 30/11/2021 07:15:31
The article shows that the distinctions between robots and organisms are becoming less obvious.
Quote
(CNN)The US scientists who created the first living robots say the life forms, known as xenobots, can now reproduce -- and in a way not seen in plants and animals.

Formed from the stem cells of the African clawed frog (Xenopus laevis) from which it takes its name, xenobots are less than a millimeter (0.04 inches) wide. The tiny blobs were first unveiled in 2020 after experiments showed that they could move, work together in groups and self-heal.

"Most people think of robots as made of metals and ceramics but it's not so much what a robot is made from but what it does, which is act on its own on behalf of people," said Josh Bongard, a computer science professor and robotics expert at the University of Vermont and lead author of the study.
"In that way it's a robot but it's also clearly an organism made from genetically unmodified frog cell."
https://www.cnn.com/2021/11/29/americas/xenobots-self-replicating-robots-scn/index.html
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 30/11/2021 08:54:51
The article shows that we are getting closer to understanding and simulating human minds.

Quote

Now, Norman explained, researchers had developed a mathematical way of understanding thoughts. Drawing on insights from machine learning, they conceived of thoughts as collections of points in a dense “meaning space.” They could see how these points were interrelated and encoded by neurons. By cracking the code, they were beginning to produce an inventory of the mind. “The space of possible thoughts that people can think is big—but it’s not infinitely big,” Norman said. A detailed map of the concepts in our minds might soon be within reach.

In the following years, scientists applied L.S.A. to ever-larger data sets. In 2013, researchers at Google unleashed a descendant of it onto the text of the whole World Wide Web. Google’s algorithm turned each word into a “vector,” or point, in high-dimensional space. The vectors generated by the researchers’ program, word2vec, are eerily accurate: if you take the vector for “king” and subtract the vector for “man,” then add the vector for “woman,” the closest nearby vector is “queen.” Word vectors became the basis of a much improved Google Translate, and enabled the auto-completion of sentences in Gmail.

https://www.newyorker.com/magazine/2021/12/06/the-science-of-mind-reading
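The king/man/woman arithmetic is easy to try yourself. A minimal sketch, assuming the gensim package and its downloadable Google News word2vec vectors:
Code:
import gensim.downloader as api

# Downloads ~1.6 GB of pretrained word2vec vectors on first use.
vectors = api.load("word2vec-google-news-300")
result = vectors.most_similar(positive=["king", "woman"], negative=["man"], topn=1)
print(result)   # [('queen', ...)] -- the vector arithmetic described above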
Title: Re: How close are we from building a virtual universe?
Post by: Origin on 30/11/2021 13:29:04
Sorry to interrupt your blog, I just wanted to say we are nowhere near being able to build a virtual universe.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 30/11/2021 21:35:53
But don't forget that the progress is exponential. It might seem slow at first, but it gets faster over time.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 06/12/2021 13:43:25

Near the end of the video he summarizes that neural networks essentially perform compression and decompression of data in order to make decisions.
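That compression-decompression view can be made concrete with a tiny linear autoencoder, which in this form is equivalent to PCA. A numpy sketch of my own, not code from the video:
Code:
import numpy as np

rng = np.random.default_rng(0)
latent = rng.normal(size=(200, 2))                        # true 2-D structure
data = latent @ rng.normal(size=(2, 10)) + 0.01 * rng.normal(size=(200, 10))

# Encoder/decoder built from the top-2 principal directions.
mean = data.mean(axis=0)
_, _, vt = np.linalg.svd(data - mean, full_matrices=False)
encode = lambda x: (x - mean) @ vt[:2].T      # compress: 10 numbers -> 2
decode = lambda code: code @ vt[:2] + mean    # decompress: 2 numbers -> 10

recon = decode(encode(data))
print("mean reconstruction error:", np.abs(recon - data).mean())  # small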
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 07/12/2021 06:00:39
Quote
Biotechnology/Nanotechnology | Andrew Hessel | SingularityU Germany Summit 2017
Andrew Hessel is a futurist and catalyst in biological technologies, helping industry, academics, and authorities better understand the changes ahead in life science. He is a Distinguished Researcher with Autodesk Inc. Bio/Nano Programmable Matter group, based out of San Francisco. He is also the co-founder of the Pink Army Cooperative, the world's first cooperative biotechnology company, which is aiming to make open source viral therapies for cancer.
This is quite an old video, but it may contain new things for some of us. I expect we have moved even further since.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 13/12/2021 14:47:08
How Does China's Social Credit System Work?
Quote

Everyone thinks of China's social credit system as some sort of Black Mirror episode, while others compare it to the FICO score in the USA. It's much, much more than that. In fact, I found the documents that highlight how it works, and how it affects the people of China. Not only that, but I have spent a lot of time in the first city where it was implemented.
Keep in mind, this is how the social credit system in China works, but it hasn't been implemented nationwide yet, only in selected areas.

The virtual universe will contain something like this. But there will be much more to be integrated under the unified system. Security and accountability will be integral parts of the system, including for those who are in power.

Here's another video on the same subject.
What Life Under China's Social Credit System Could Be Like
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 14/12/2021 22:26:44
https://www.techrepublic.com/article/digital-twins-are-finally-becoming-a-reality-is-your-company-ready-to-use-them/#ftag=RSS-03-10aaa0b

Quote
Digital twins were once a technology of the future. Now companies are lining up to implement them so they can solve real-world problems with virtual simulations. Is it easier said than done?

Futurist Bernard Marr described a digital twin as "an exact digital replica of something in the physical world; digital twins are made possible thanks to Internet of Things sensors that gather data from the physical world and send it to machines to reconstruct." Unstructured data, such as IoT technology, have made digital twins possible—and these digital twins are able to solve real-world problems in virtual universes.

An example Marr offered is the city of Singapore, which does most of its city planning by using a virtual replica of its physical city. In another example, a supermarket in France created a digital twin of a brick-and-mortar store based on data from IoT-enabled shelves and sales systems. The result is that store managers can easily manage inventory and test the effectiveness of different store layouts in digital twin simulations.

Digital twins can be impressive, but it isn't easy to build one. Each twin is a vast complex of data drawn from IT assets throughout and outside of the enterprise. This data is then applied to an operational digital twin model developed by IT and operations specialists.
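In code, the core of the supermarket example boils down to keeping a virtual object in sync with sensor readings and making decisions against it. A minimal sketch with all names hypothetical; a real twin would live on an IoT platform, not in a Python object.
Code:
from dataclasses import dataclass, field

@dataclass
class ShelfTwin:
    """Virtual replica of one IoT-enabled store shelf."""
    shelf_id: str
    stock: int = 0
    history: list = field(default_factory=list)

    def ingest(self, sensor_reading: int) -> None:
        # Reconcile the virtual state with the physical measurement.
        self.history.append(self.stock)
        self.stock = sensor_reading

    def needs_restock(self, threshold: int = 10) -> bool:
        # Decisions are made against the twin, not the physical shelf.
        return self.stock < threshold

twin = ShelfTwin("aisle3-shelf7", stock=40)
for reading in (38, 25, 9):        # readings streamed from the physical shelf
    twin.ingest(reading)
print(twin.needs_restock())        # True -> trigger a restock in the real store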
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 17/12/2021 14:26:28
Next-Gen Graphics FINALLY Arrive [Unreal Engine 5]
Quote
This is the moment I've been waiting for in computing graphics. In this episode, we cover the playable matrix awakens demo as well as some other unreal engine 5 info.
With this new engine, a virtual universe can be projected onto a 2D screen almost indistinguishably from the real universe.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 20/12/2021 07:16:41
https://www.newscientist.com/article/2301500-human-brain-cells-in-a-dish-learn-to-play-pong-faster-than-an-ai/

Quote
Human brain cells in a dish learn to play Pong faster than an AI
Hundreds of thousands of brain cells in a dish are being taught to play Pong by responding to pulses of electricity – and can improve their performance more quickly than an AI can.

Living brain cells in a dish can learn to play the video game Pong when they are placed in what researchers describe as a “virtual game world”. “We think it’s fair to call them cyborg brains,” says Brett Kagan, chief scientific officer of Cortical Labs, who leads the research.

Many teams around the world have been studying networks of neurons in dishes, often growing them into brain-like organoids. But this is the first time that mini-brains have been found to perform goal-directed tasks, says Kagan.



Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 21/12/2021 05:20:59
NVIDIA’s New AI: Journey Into Virtual Reality!
The paper "Physics-based Human Motion Estimation and Synthesis from Videos" is available here:
https://nv-tlabs.github.io/physics-pose-estimation-project-page/
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 27/12/2021 21:18:33
Quote
Summary: Researchers have identified a neural mechanism that supports advanced cognitive functions such as planning and problem-solving. The mechanism distributes information from a single neuron to larger neural populations in the prefrontal cortex.

Source: Mount Sinai Hospital
Quote
Mount Sinai scientists have discovered a neural mechanism that is believed to support advanced cognitive abilities such as planning and problem-solving. It does so by distributing information from single neurons to larger populations of neurons in the prefrontal cortex, the area of the brain that temporarily stores and manipulates information.
The study shows how distributing the load improves the reliability of information processing.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 05/01/2022 04:22:50
https://interestingengineering.com/a-62-year-old-paralyzed-man-sent-out-his-first-tweet-with-brain-chip
Quote
A 62-year-old Australian man paralyzed following his diagnosis with amyotrophic lateral sclerosis (ALS) has become the first individual to send out a message on social media using a brain-computer interface, RT reported.

Brain-computer interfaces (BCI) are the next big thing in technology. While some people like Elon Musk want to use it to enhance human experiences as early as next year, others such as Synchron, whose interface helped Australian Philip O'Keefe send out his first tweet, want to develop it as a prosthesis for paralysis and treat other neurological diseases such as Parkinson's disease in the future, the company said in a press release.

Synchron's BCI works through its brain implant called Stentrode that does not require any brain surgery to be installed. Instead, the company leverages the interventional techniques that are commonly used to treat stroke to implant the Stentrode via the jugular vein, the press release said.

https://twitter.com/tomoxl/status/1473809025254846467?s=20
Brain-computer interfaces form the bridge between natural and artificial intelligence.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 07/01/2022 09:54:08
https://interestingengineering.com/yes-theres-really-a-neural-interface-at-ces-that-reads-your-brain-signals
Quote
Imagine commanding a computer or playing a game without using your fingers, voice, or eyes. It sounds like science fiction, but it’s becoming a little more real every day thanks to a handful of companies making tech that detects neural activity and converts those measurements into signals computers can read.

One of those companies — NextMind — has been shipping its version of the mind-reading technology to developers for over a year. First unveiled at CES in Las Vegas, the company’s neural interface is a black circle that can read brain waves when strapped to the back of a user’s head. The device isn’t quite yet ready for primetime, but it’s bound to make its way into consumer goods sooner rather than later.

Neural interfaces are already here
Neural interfaces have the potential to support a wide range of activities in a variety of settings. A company called Mudra, for example, has developed a band for the Apple Watch that enables users to interact with the device by simply moving their fingers — or think about moving their fingers. That means someone with the device can navigate music or place calls without having to interrupt whatever they’re doing at the time. It also opens tremendous opportunities for making tech available to people with disabilities who have trouble with other user interfaces.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 10/01/2022 02:36:42
https://spectrum.ieee.org/ai-failures
Quote
ARTIFICIAL INTELLIGENCE could perform more quickly, accurately, reliably, and impartially than humans on a wide range of problems, from detecting cancer to deciding who receives an interview for a job. But AIs have also suffered numerous, sometimes deadly, failures. And the increasing ubiquity of AI means that failures can affect not just individuals but millions of people.

Here are seven examples of AI failures and what current weaknesses they reveal about artificial intelligence. Scientists discuss possible ways to deal with some of these problems; others currently defy explanation or may, philosophically speaking, lack any conclusive solution altogether.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 17/01/2022 13:28:14
Quote
On November 29, CNN reported that scientists claimed the world's first living robots were now able to reproduce. But what sounds like the start of a dystopian nightmare future turns out to be a lot less worrying at a closer look.

Article with more information on Xenobots here:
https://www.pnas.org/content/118/49/e2112672118

Living Robots - How to Program Self-replicating Organisms
Title: Re: How close are we from building a virtual universe?
Post by: Origin on 17/01/2022 15:04:49
Your never-ending threads become so tiring.  Blessed be the thread ignore button... :)
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 17/01/2022 22:28:59
Your never-ending threads become so tiring.  Blessed be the thread ignore button... :)
Actions speak louder than words. You have said repeatedly that you would ignore my threads. Your post here shows your failure to do so.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 19/01/2022 21:02:39
https://electrek.co/2022/01/19/elon-musk-tesla-artificial-general-intelligence-decentralize-tesla-bot-avoid-terminator-scenario/
Quote
For a few years now, Musk has been pushing the idea that Tesla is the world’s leading company when it comes to real-world applications of artificial intelligence.

He describes Tesla’s fleet of vehicles equipped with sensors and computers for self-driving as “robots on wheels.”

Through this “real-world application,” the company has also been able to attract world-class AI talent, and Musk boasts that Tesla has the best AI team on the planet.

At Tesla’s AI day last year, the automaker unveiled its latest supercomputer, Dojo, to train its neural nets.

It also announced that it plans to build a ‘Tesla Bot,’ a humanoid robot meant to do general tasks and repetitive work.

Now Musk took to Twitter this morning to announce that Tesla might go a step further and get involved in Artificial General Intelligence (AGI):

“Tesla AI might play a role in AGI, given that it trains against the outside world, especially with the advent of Optimus.”

Optimus, or Optimus Subprime, is the codename that Musk gave to the Tesla Bot project.

This is somewhat surprising considering the many warnings that Musk has issued about creating AGI and the risks to humanity that come with it.

Along with the announcement that Tesla might work on AGI, Musk also added on Twitter that Tesla will make sure to “decentralize” control of Tesla Bots:

“Will do our best. Decentralized control of the robots will be critical.”

The comment was made in response to someone mentioning “summoning the demon,” which is what Musk referred to as creating an AGI that would turn against humanity.

Decentralizing the control of Tesla Bots would avoid giving this “demon” access to an army – much like a Terminator-like scenario.
I see this as an inevitable route to the technological singularity. If Elon refuses to go there, someone else will. Anyone who eventually succeeds in building AGI should at least be informed about the universal terminal goal.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 19/01/2022 21:36:18
As I've mentioned before, this thread is a spin off from my other thread about universal terminal goal.
https://www.thenakedscientists.com/forum/index.php?topic=71347.0
Based on the title, my posts here tend to be newsy, while the other threads are more conceptual.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 21/01/2022 10:25:46
https://www.wired.com/story/metalenz-polareyes-polarization-camera/
Quote
Smartphone Cameras Might Soon Capture Polarization Data
Normal cameras can process color and light. New tech from Metalenz collects information that could help your phone better understand the world around you.
IMAGINE A CAMERA that's mounted on your car being able to identify black ice on the road, giving you a heads-up before you drive over it. Or a cell phone camera that can tell whether a lesion on your skin is possibly cancerous. Or the ability for Face ID to work even when you have a face mask on. These are all possibilities Metalenz is touting with its new PolarEyes polarization technology.
Normal cameras imitate human eyes, which capture incoming light with different sensitivities at different frequencies. Some useful features of light are lost in the process, such as polarization. This new technology can improve our data acquisition from the environment.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 23/01/2022 05:31:55
https://neurosciencenews.com/brain-body-maps-19948/
Quote
Reinterpreting Our Brain’s Body Maps
Summary: The body relies on multiple maps based on the choice of the motor system.

Our brain maps out our body to facilitate accurate motor control; disorders of this body map result in motor deficits. For a century, the body map has been thought to have applied to all types of motor actions. Yet scientists have begun to query how the body map operates when executing different motor actions, such as moving your eyes and hands together.
The body schema in the brain is a naturally occurring virtual model. Studying it can contribute to building a more integrated virtual universe.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 26/01/2022 05:09:12
https://www.quantamagazine.org/how-infinite-series-reveal-the-unity-of-mathematics-20220124/
Quote
How Infinite Series Reveal the Unity of Mathematics
Infinite sums are among the most underrated yet powerful concepts in mathematics, capable of linking concepts across math’s vast web.
Quote
When I was a boy, my dad told me that math is like a tower. One thing builds on the next. Addition builds on numbers. Subtraction builds on addition. And on it goes, ascending through algebra, geometry, trigonometry and calculus, all the way up to “higher math” — an appropriate name for a soaring edifice.

But once I learned about infinite series, I could no longer see math as a tower. Nor is it a tree, as another metaphor would have it. Its different parts are not branches that split off and go their separate ways. No — math is a web. All its parts connect to and support each other. No part of math is split off from the rest. It’s a network, a bit like a nervous system — or, better yet, a brain.

"Mathematics is the language of Science." - Galileo Galilei

Research in AI and progress toward AGI seem to converge on language models such as GPT-3, which are effectively processed by brain-like structures in the form of deep neural networks. That's the structure of the virtual universe we will build as a tool to achieve the universal terminal goal.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 01/02/2022 03:57:51
The virtual universe that we are going to build should serve as an instrumental goal towards the universal terminal goal. It must aim for relevance, accuracy, and precision, in that particular order of importance.
Imagine if a billionaire decides to build a supercomputer to calculate the value of π to as many decimal places as possible, and ends up using more than half of the world's computational power and memory. This endeavor might score high on the accuracy and precision criteria, but much lower on relevance to achieving the universal terminal goal, as the sketch below illustrates.
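The billionaire's project takes three lines with the mpmath library (assumed installed), which is part of what makes precision so easy to over-buy:
Code:
from mpmath import mp

mp.dps = 100    # demand 100 decimal places of precision
print(mp.pi)    # highly accurate and precise -- but how relevant?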
This prioritization should be kept in mind by anyone trying to build a metaverse, or their own version of virtual universe.
https://www.cnbctv18.com/photos/technology/metaverse-innovations-a-glimpse-of-what-the-virtual-universe-could-look-like-in-future-12242842.htm
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 06/02/2022 07:16:31
Here's another step closer to a virtual universe.

MIT Technology Review (@techreview) tweeted at 4:09 PM on Fri, Feb 04, 2022:
First protein folding, now weather forecasting: DeepMind’s artificial intelligence predicts almost exactly when and where it’s going to rain.
https://t.co/7E8LWmlxNz
(https://twitter.com/techreview/status/1489526402772877312?t=wuOnTlYu0hb_YQXfSfFHgw&s=03)

https://www.technologyreview.com/2021/09/29/1036331/deepminds-ai-predicts-almost-exactly-when-and-where-its-going-to-rain/
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 06/02/2022 07:18:29
https://techcrunch.com/2022/02/02/deepminds-alphacode-ai-writes-code-at-a-competitive-level/
Quote
DeepMind has created an AI capable of writing code to solve arbitrary problems posed to it, as proven by participating in a coding challenge and placing — well, somewhere in the middle. It won’t be taking any software engineers’ jobs just yet, but it’s promising and may help automate basic tasks.

The team at DeepMind, a subsidiary of Alphabet, is aiming to create intelligence in as many forms as it can, and of course these days the task to which many of our great minds are bent is coding. Code is a fusion of language, logic and problem-solving that is both a natural fit for a computer’s capabilities and a tough one to crack.

Of course it isn’t the first to attempt something like this: OpenAI has its own Codex natural-language coding project, and it powers both GitHub Copilot and a test from Microsoft to let GPT-3 finish your lines.


DeepMind’s paper throws a little friendly shade on the competition in describing why it is going after the domain of competitive coding:

Quote
Recent large-scale language models have demonstrated an impressive ability to generate code, and are now able to complete simple programming tasks. However, these models still perform poorly when evaluated on more complex, unseen problems that require problem-solving skills beyond simply translating instructions into code.

OpenAI may have something to say about that (and we can probably expect a riposte in its next paper on these lines), but as the researchers go on to point out, competitive programming problems generally involve a combination of interpretation and ingenuity that isn’t really on display in existing code AIs.

To take on the domain, DeepMind trained a new model using selected GitHub libraries and a collection of coding problems and their solutions. Simply said, but not a trivial build. When it was complete, they put it to work on 10 recent (and needless to say, unseen by the AI) contests from Codeforces, which hosts this kind of competition.
You can dive deeper into the way AlphaCode was built, and its solutions to various problems, at this demo site.
https://alphacode.deepmind.com/
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 07/02/2022 06:14:06

Quote
Movies. Video games. YouTube videos. All of them work because we accidentally figured out a way to fool your brain’s visual processing system, and you don’t even know it’s happening. In this video, I talk to neuroscientist David Eagleman about the secret illusions that make the moving picture possible.

Here's an interesting video on how our brains work. IMO, it shows that a brain creates a virtual universe, which has turned out to be useful for our survival thus far. Our ancestors faced various existential threats and survived major mass extinction events. Our descendants will also face existential threats in the future. To survive another major mass extinction event, and to pass the great filter, we will need a better virtual universe.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 11/02/2022 09:43:27
It seems that the distinction between artificial and natural intelligence is getting blurred. IMO they will form endosymbiotic systems, just as eukaryotes did with mitochondria and other organelles.

Quote
https://spectrum.ieee.org/neuromorphic-computing-ai-device
Reconfigurable AI Device Shows Brainlike Promise

An adaptable new device can transform into all the key electric components needed for artificial-intelligence hardware, for potential use in robotics and autonomous systems, a new study finds.

Brain-inspired or "neuromorphic" computer hardware aims to mimic the human brain's exceptional ability to adaptively learn from experience and rapidly process information in an extraordinarily energy-efficient manner. These features of the brain are due in large part to its plastic nature—its ability to evolve its structure and function over time through activity such as neuron formation or "neurogenesis."
 
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 17/02/2022 04:16:58
Quote
https://futurism.com/the-byte/openai-already-sentient
OPENAI CHIEF SCIENTIST SAYS ADVANCED AI MAY ALREADY BE CONSCIOUS
"IT MAY BE THAT TODAY'S LARGE NEURAL NETWORKS ARE SLIGHTLY CONSCIOUS."

OpenAI’s top researcher has made a startling claim this week: that artificial intelligence may already be gaining consciousness.

Ilya Sutskever, chief scientist of the OpenAI research group, tweeted today that “it may be that today’s large neural networks are slightly conscious.”

Needless to say, that’s an unusual point of view. The widely accepted idea among AI researchers is that the tech has made great strides over the past decade, but still falls far short of human intelligence, nevermind being anywhere close to experiencing the world consciously.

It’s possible that Sutskever was speaking facetiously, but it’s also conceivable that as the top researcher at one of the foremost AI groups in the world, he’s already looking downrange.


Quote
https://futurism.com/mit-researcher-conscious-ai
MIT Researcher Says Yes, Advanced Neural Networks May Be Achieving Consciousness
This debate just keeps getting spicier.
Amid a maelstrom set off by a prominent AI researcher saying that some AI may already be achieving limited consciousness, one MIT AI researcher is saying the concept might not be so far-fetched.

Our story starts with Ilya Sutskever, head scientist at the Elon Musk cofounded research group OpenAI. On February 9, Sutskever tweeted that “it may be that today’s large neural networks are slightly conscious.”

In response, many others in the AI research space decried the OpenAI scientist’s claim, suggesting that it was harming machine learning’s reputation and amounted to little more than a “sales pitch” for OpenAI work.

That backlash has now generated its own clapback from MIT computer scientist Tamay Besiroglu, who’s now bucking the trend by coming to Sutskever’s defense.
“Seeing so many prominent [machine learning] folks ridiculing this idea is disappointing,” Besiroglu tweeted. “It makes me less hopeful in the field’s ability to seriously take on some of the profound, weird and important questions that they’ll undoubtedly be faced with over the next few decades.”

Besiroglu also pointed to a preprint study in which he and some collaborators found that machine learning models have roughly doubled in intelligence every six months since 2010.

Strikingly, Besiroglu drew a line on the chart of the progress at which, he said, the models may have become “maybe slightly conscious.”
(https://futurism.com/_next/image?url=https%3A%2F%2Fwp-assets.futurism.com%2F2022%2F02%2FFLqfiKoXIAc8P0A-scaled.jpeg&w=1920&q=75)
IMO, projecting consciousness onto a single parameter, namely training compute (FLOPs), is an oversimplification. Most of us agree that toddlers are conscious beings, yet they don't have supercomputer-level intelligence.
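For scale, the quoted doubling trend compounds quickly. Counting from 2010 to early 2022 (a 12-year span is my assumption about the chart's range):
Code:
doublings = (2022 - 2010) * 2    # one doubling every six months
print(2 ** doublings)            # 16777216 -> a ~16.8-million-fold increase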

Quote
https://futurism.com/human-level-artificial-intelligence-agi
When Will We Have Artificial Intelligence As Smart as a Human? Here’s What Experts Think
Robots in the movies can think creatively, continue learning over time, and maybe even pass for conscious. Why don't we have that yet?

At The Joint Multi-Conference on Human-Level Artificial Intelligence held last month in Prague, AI experts and thought leaders from around the world shared their goals, hopes, and progress towards human-level AI (HLAI), which is the last stop before true AGI or the same thing, depending on who you ask.
Either way, most experts think it’s coming — sooner rather than later. In a poll of conference attendees, AI research companies GoodAI and SingularityNet found that 37 percent of respondents think people will create HLAI within 10 years. Another 28 percent think it will take 20 years. Just two percent think HLAI will never exist.

Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 17/02/2022 05:33:28

Microsoft’s AI Understands Humans…But It Had Never Seen One!
The paper "Fake It Till You Make It - Face analysis in the wild using synthetic data alone " is available here:
https://microsoft.github.io/FaceSynthetics/
Quote
Abstract
We demonstrate that it is possible to perform face-related computer vision in the wild using synthetic data alone.

The community has long enjoyed the benefits of synthesizing training data with graphics, but the domain gap between real and synthetic data has remained a problem, especially for human faces. Researchers have tried to bridge this gap with data mixing, domain adaptation, and domain-adversarial training, but we show that it is possible to synthesize data with minimal domain gap, so that models trained on synthetic data generalize to real in-the-wild datasets.

We describe how to combine a procedurally-generated parametric 3D face model with a comprehensive library of hand-crafted assets to render training images with unprecedented realism and diversity. We train machine learning systems for face-related tasks such as landmark localization and face parsing, showing that synthetic data can both match real data in accuracy as well as open up new approaches where manual labelling would be impossible.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 18/02/2022 02:24:26
Quote
https://www.protocol.com/enterprise/metaverse-zuckerberg-computing-infrastructure
Mark Zuckerberg’s metaverse will require computing tech no one knows how to build
To achieve anything close to what metaverse boosters promise, experts believe nearly every kind of chip will have to be an order of magnitude more powerful than it is today.
The technology necessary to power the metaverse doesn’t exist.

It will not exist next year. It will not exist in 2026. The technology might not exist in 2032, though it’s likely we will have a few ideas as to how we might eventually design and manufacture chips that could turn Mark Zuckerberg’s fever dreams into reality by then.

Over the past six months, a disconnect has formed between the way corporate America is talking about the dawning concept of the metaverse and its plausibility, based on the nature of the computing power that will be necessary to achieve it. To get there will require immense innovation, similar to the multi-decade effort to shrink personal computers to the size of an iPhone.

Microsoft hyped its $68.7 billion bid for Activision Blizzard last month as a metaverse play. In October, Facebook transformed its entire corporate identity to revolve around the metaverse. Last year, Disney even promised to build its own version of the metaverse to “allow storytelling without boundaries.”

Quote
Zuckerberg’s explanation of what the metaverse will ultimately look like is vague, but includes some of the tropes its boosters roughly agree on: He called it “[an] embodied internet that you’re inside of rather than just looking at” that would offer everything you can already do online and “some things that don’t make sense on the internet today, like dancing.”
Quote
If the metaverse sounds vague, that’s because it is. That description could mutate over time to apply to lots of things that might eventually happen in technology. And arguably, something like the metaverse might eventually already exist in an early form produced by video game companies.

Roblox and Epic Games’ Fortnite play host to millions — albeit in virtually separated groups of a few hundred people — viewing live concerts online. Microsoft Flight Simulator has created a 2.5 petabyte virtual replica of the world that is updated in real time with flight and weather data.

But even today’s most complex metaverse-like video games require a tiny fraction of the processing and networking performance we would need to achieve the vision of a persistent world accessed by billions of people, all at once, across multiple devices, screen formats and in virtual or augmented reality.
This effort will surely need a lot of resources. We need to make sure to allocate them effectively and efficiently to help achieve the universal terminal goal, instead of hindering it.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 20/02/2022 05:11:31
Quote
https://futurism.com/the-byte/ai-faces-trustworthy
SCIENTISTS WARN THAT NEW AI-GENERATED FACES ARE SEEN AS MORE TRUSTWORTHY THAN REAL ONES
by Tony Tran

As if the possibility that AI might already be conscious wasn’t creepy enough, researchers have announced that AI-generated faces have become so sophisticated that many people think they’re more trustworthy than actual humans.

A pair of researchers discovered that a neural network dubbed StyleGAN2 is capable of creating faces indistinguishable from the real thing, according to a press release from Lancaster University. In fact, in a jarring twist, participants seemed to find AI-generated faces more trustworthy than the faces of actual people.

“Our evaluation of the photo realism of AI-synthesized faces indicates that synthesis engines have passed through the uncanny valley and are capable of creating faces that are indistinguishable — and more trustworthy — than real faces,” the researchers, who will be publishing a paper of their findings in the journal PNAS, said in the release. 
This progress further highlights the need to build an integrated virtual universe, as well as the acknowledgement of the universal terminal goal among AI developers.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 04/03/2022 05:27:54
https://www.technologyreview.com/2020/10/16/1010566/ai-machine-learning-with-tiny-data/
Quote
Machine learning typically requires tons of examples. To get an AI model to recognize a horse, you need to show it thousands of images of horses. This is what makes the technology computationally expensive—and very different from human learning. A child often needs to see just a few examples of an object, or even only one, before being able to recognize it for life.

In fact, children sometimes don’t need any examples to identify something. Shown photos of a horse and a rhino, and told a unicorn is something in between, they can recognize the mythical creature in a picture book the first time they see it.

Now a new paper from the University of Waterloo in Ontario suggests that AI models should also be able to do this—a process the researchers call “less than one”-shot, or LO-shot, learning. In other words, an AI model should be able to accurately recognize more objects than the number of examples it was trained on. That could be a big deal for a field that has grown increasingly expensive and inaccessible as the data sets used become ever larger.

How “less than one”-shot learning works
The researchers first demonstrated this idea while experimenting with the popular computer-vision data set known as MNIST. MNIST, which contains 60,000 training images of handwritten digits from 0 to 9, is often used to test out new ideas in the field.

In a previous paper, MIT researchers had introduced a technique to “distill” giant data sets into tiny ones, and as a proof of concept, they had compressed MNIST down to only 10 images. The images weren’t selected from the original data set but carefully engineered and optimized to contain an equivalent amount of information to the full set. As a result, when trained exclusively on the 10 images, an AI model could achieve nearly the same accuracy as one trained on all MNIST’s images.
This research supports the conclusion that learning is a kind of data-compression process.
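A minimal sketch may make the soft-label mechanism behind LO-shot learning concrete. The two prototype points and their label distributions below are invented; the point is that two engineered examples can encode three classes, so the classifier recognizes a class it has zero examples of.
Code:
import numpy as np

# "Less than one"-shot toy: 2 prototypes, 3 classes. Each prototype
# carries a soft label (a distribution over classes); class 1 has no
# example of its own and is recovered from the blend.
prototypes = np.array([0.0, 1.0])             # two engineered data points
soft_labels = np.array([[0.6, 0.4, 0.0],      # labels of prototype at 0.0
                        [0.0, 0.4, 0.6]])     # labels of prototype at 1.0

def predict(x):
    w = 1.0 / (np.abs(prototypes - x) + 1e-9) # inverse-distance weights
    w /= w.sum()
    return (w[:, None] * soft_labels).sum(axis=0).argmax()

print(predict(0.0), predict(0.5), predict(1.0))  # -> 0 1 2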
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 04/03/2022 11:57:54
https://www.quantamagazine.org/scientists-watch-a-memory-form-in-a-living-brain-20220303/
Quote
Researchers have now directly observed what happens inside a brain learning that kind of emotionally charged response. In a new study published in January in the Proceedings of the National Academy of Sciences, a team at the University of Southern California was able to visualize memories forming in the brains of laboratory fish, imaging them under the microscope as they bloomed in beautiful fluorescent greens. From earlier work, they had expected the brain to encode the memory by slightly tweaking its neural architecture. Instead, the researchers were surprised to find a major overhaul in the connections.

What they saw reinforces the view that memory is a complex phenomenon involving a hodgepodge of encoding pathways. But it further suggests that the type of memory may be critical to how the brain chooses to encode it — a conclusion that may hint at why some kinds of deeply conditioned traumatic responses are so persistent, and so hard to unlearn.
By understanding how naturally occurring memory works, hopefully we can create a more accurate virtual universe.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 04/03/2022 12:02:19
https://www.quantamagazine.org/new-map-of-meaning-in-the-brain-changes-ideas-about-memory-20220208/
Quote
Researchers have mapped hundreds of semantic categories to the tiny bits of the cortex that represent them in our thoughts and perceptions. What they discovered might change our view of memory.

In 2016, neuroscientists mapped how pea-size regions of the cortex respond to hundreds of semantic concepts. They’re now building on that work to understand the relationship between visual, linguistic and memory representations in the brain.

A team of neuroscientists created a semantic map of the brain that showed in remarkable detail which areas of the cortex respond to linguistic information about a wide range of concepts, from faces and places to social relationships and weather phenomena. When they compared that map to one they made showing where the brain represents categories of visual information, they observed meaningful differences between the patterns.

And those differences looked exactly like the ones reported in the studies on vision and memory.

The finding, published last October in Nature Neuroscience, suggests that in many cases, a memory isn’t a facsimile of past perceptions that gets replayed. Instead, it is more like a reconstruction of the original experience, based on its semantic content.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 16/03/2022 03:01:25
Quote
https://ai.googleblog.com/2021/09/toward-fast-and-accurate-neural.html
As neural network models and training data size grow, training efficiency is becoming an important focus for deep learning. For example, GPT-3 demonstrates remarkable capability in few-shot learning, but it requires weeks of training with thousands of GPUs, making it difficult to retrain or improve. What if, instead, one could design neural networks that were smaller and faster, yet still more accurate?

In this post, we introduce two families of models for image recognition that leverage neural architecture search, and a principled design methodology based on model capacity and generalization. The first is EfficientNetV2 (accepted at ICML 2021), which consists of convolutional neural networks that aim for fast training speed for relatively small-scale datasets, such as ImageNet1k (with 1.28 million images). The second family is CoAtNet, which are hybrid models that combine convolution and self-attention, with the goal of achieving higher accuracy on large-scale datasets, such as ImageNet21k (with 13 million images) and JFT (with billions of images). Compared to previous results, our models are 4-10x faster while achieving new state-of-the-art 90.88% top-1 accuracy on the well-established ImageNet dataset. We are also releasing the source code and pretrained models on the Google AutoML github.

(https://1.bp.blogspot.com/-q91X4NZ2yPU/YUNkJZqp9sI/AAAAAAAAIII/FnGHKxE_we8nDWL5ZyHU8m_3iU9nJABLwCLcBGAsYHQ/w640-h476/image2%2B%25282%2529.jpg)

We observe two key insights from our study: (1) depthwise convolution and self-attention can be naturally unified via simple relative attention, and (2) vertically stacking convolution layers and attention layers in a way that considers their capacity and computation required in each stage (resolution) is surprisingly effective in improving generalization, capacity and efficiency. Based on these insights, we have developed a family of hybrid models with both convolution and attention, named CoAtNets (pronounced “coat” nets). The following figure shows the overall CoAtNet network architecture:
(https://1.bp.blogspot.com/-02ISPtZErSM/YUNkdNiivNI/AAAAAAAAIIU/krCTzTmwp8gy5RvwEMnF-ndvhCXvnmMUwCLcBGAsYHQ/w640-h70/image1.jpg)
Overall CoAtNet architecture. Given an input image with size HxW, we first apply convolutions in the first stem stage (S0) and reduce the size to H/2 x W/2. The size continues to reduce with each stage. Ln refers to the number of layers. Then, the early two stages (S1 and S2) mainly adopt MBConv building blocks consisting of depthwise convolution. The later two stages (S3 and S4) mainly adopt Transformer blocks with relative self-attention. Unlike the previous Transformer blocks in ViT, here we use pooling between stages, similar to Funnel Transformer. Finally, we apply a classification head to generate class prediction.


Conclusion and Future Work
In this post, we introduce two families of neural networks, named EfficientNetV2 and CoAtNet, which achieve state-of-the-art performance on image recognition. All EfficientNetV2 models are open sourced and the pretrained models are also available on the TFhub. CoAtNet models will also be open-sourced soon. We hope these new neural networks can benefit the research community and the industry. In the future we plan to further optimize these models and apply them to new tasks, such as zero-shot learning and self-supervised learning, which often require fast models with high capacity.
Those neural network models are basically memes living in the computers of AI researchers, competing for their own existence. The quoted article emphasizes the importance of efficiency, which is a universal instrumental goal.
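For readers who want to see the staged layout in code, here is a rough, runnable PyTorch sketch of the ordering described in the quoted post: a convolutional stem, convolutional early stages, attention later stages, then a classification head. It is not the released CoAtNet; plain depthwise convolutions stand in for MBConv, vanilla self-attention stands in for relative attention, and every size is invented.
Code:
import torch
import torch.nn as nn

class TinyCoAtNetSketch(nn.Module):
    def __init__(self, num_classes=10, dim=64):
        super().__init__()
        # S0: convolutional stem, halves spatial resolution
        self.stem = nn.Sequential(
            nn.Conv2d(3, dim, 3, stride=2, padding=1),
            nn.BatchNorm2d(dim), nn.GELU())
        # S1-S2 stand-in: depthwise + pointwise convolution, halves again
        self.conv_stages = nn.Sequential(
            nn.Conv2d(dim, dim, 3, stride=2, padding=1, groups=dim),
            nn.Conv2d(dim, dim, 1), nn.BatchNorm2d(dim), nn.GELU())
        # S3-S4 stand-in: one self-attention block on the token grid
        self.attn = nn.TransformerEncoderLayer(d_model=dim, nhead=4,
                                               batch_first=True)
        self.head = nn.Linear(dim, num_classes)

    def forward(self, x):
        x = self.conv_stages(self.stem(x))     # B, C, H/4, W/4
        tokens = x.flatten(2).transpose(1, 2)  # B, (H/4 * W/4), C
        tokens = self.attn(tokens)
        return self.head(tokens.mean(dim=1))   # global pool + classify

print(TinyCoAtNetSketch()(torch.randn(2, 3, 32, 32)).shape)  # [2, 10]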
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 17/03/2022 10:25:55
https://www.raphkoster.com/2021/09/23/how-virtual-worlds-work-part-one/
Quote
Every browser knows how to load a .jpg, a .gif, a .png, and more. The formats the data exists in are agreed upon. If you point a browser at some data in a format it doesn’t understand, it’s going to fail to load and draw the image, just like you can’t expect Instagram to know how to display an .stl meant for 3d printing.

This is a crucial concept, which is going to come up again and again in these articles: data doesn’t exist in isolation. A vinyl record and a CD might both have the same music on them, but a record player can’t handle a CD and a vinyl record doesn’t fit into the slot on a CD player (don’t try, you will regret it).

Anytime you see data, you need to think of three things: the actual content, the format it is in, and the “machine” that can recognize that format. You can think of the format as the “rules” the data needs to follow in order for the machine to read it.
The thing about formats is that they need to be standardized. They’re agreed upon by committees, usually. And committees are slow and political… and of course, different members might have very different opinions on what needs to be in the standard – and for good reasons!

One of the common daydreams for metaverses is that a player should be able to take their avatar from one world to another. But… what format avatar? A Nintendo Mii and a Facebook profile picture and an EVE Online character and a Final Fantasy XIV character don’t just look different. They are different. FFXIV and World of Warcraft are fairly similar games in a lot of ways, but the list of equipment slots, possible customizations, and so on are hugely different. These games cannot load each other’s characters because they do not agree on what a character is.
To operate in a virtual universe, there must be standards for how objects are defined. At a minimum, some form of mapping would be needed to convert objects from one system to another.
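A minimal sketch of such a mapping layer follows, with two invented avatar schemas. Note how slots with no equivalent in the target system are simply dropped, which is exactly the interoperability loss Koster describes.
Code:
# Toy avatar converter between two hypothetical games' schemas.
# Both schemas and the slot-mapping table are invented.
game_a_avatar = {"name": "Arel",
                 "slots": {"head": "iron_helm",
                           "main_hand": "sword",
                           "trinket": "lucky_coin"}}

SLOT_MAP = {"head": "helmet", "main_hand": "weapon"}  # A-slot -> B-slot

def convert_a_to_b(avatar):
    converted = {"display_name": avatar["name"], "gear": {}}
    dropped = []
    for slot, item in avatar["slots"].items():
        if slot in SLOT_MAP:
            converted["gear"][SLOT_MAP[slot]] = item
        else:
            dropped.append(slot)   # no equivalent slot: information is lost
    return converted, dropped

print(convert_a_to_b(game_a_avatar))
# ({'display_name': 'Arel', 'gear': {'helmet': 'iron_helm',
#   'weapon': 'sword'}}, ['trinket'])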
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 22/03/2022 05:37:34
Why Tesla's AUTO PARKING Matters--and how it works!
Quote
At Tesla's AI day, Ashok Elluswamy, Director of Autopilot Software, went into detail about the problems that Tesla has getting a car to navigate a parking lot and find an open parking space. Why is this so difficult? What does it have to do with computer vision? And why is auto parking, auto summon, and reverse summon so critical to Tesla's robotaxi ambitions?!
To solve the problem, Tesla cars need to build a local virtual universe covering the locations they must pass through. For intercity, interstate, or even international taxi driving or cargo trucking, the scope of this virtual universe must be expanded accordingly.
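As a cartoon of what such a local virtual universe might contain, here is a toy occupancy grid. The grid size, the sensor detections, and the parking-spot rule are all invented for illustration; a real system fuses camera and other sensor data into a far richer map.
Code:
import numpy as np

grid = np.zeros((20, 20))                 # 0 = free/unknown, 1 = occupied
detections = [(10, 5), (8, 7), (12, 7), (9, 3)]  # obstacle cells from sensors
for r, c in detections:
    grid[r, c] = 1.0

def find_free_strip(row, width=3):
    # a "parking spot": the first run of `width` free cells in a row
    for c in range(grid.shape[1] - width + 1):
        if grid[row, c:c + width].sum() == 0:
            return (row, c)
    return None

print(find_free_strip(9))                 # -> (9, 0): candidate spot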
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 23/03/2022 12:54:48
Our sensors provide raw data. But they are useless until we add context, meaning, and insight.
Quote
https://otec.uoregon.edu/data-wisdom.htm
Computers are often called data processing machines or information processing machines. People understand and accept the fact that computers are machines designed for the input, storage, processing, and output of data and information.

However, some people also think of computers as knowledge processing machines and even explore what it might mean for a computer to have wisdom. For example, here is a quote from Dr. Yogesh Malhotra of the BRINT Institute:

Knowledge Management caters to the critical issues of organizational adaptation, survival and competence in face of increasingly discontinuous environmental change.... Essentially, it embodies organizational processes that seek synergistic combination of data and information processing capacity of information technologies, and the creative and innovative capacity of human beings.
The following quotation is from the Atlantic Canada Conservation Data Centre, a non-profit organization established in 1999.

Individual bits or "bytes" of "raw" biological data (e.g. the number of individual plants of a given species at a given location) do not by themselves inform the human mind. However, drawing various data together within an appropriate context yields information that may be useful (e.g. the distribution and abundance of the plant species at various points in space and time). In turn, this information helps foster the quality of knowing (e.g. whether the plant species is increasing or decreasing in distribution and abundance over space and time). Knowledge and experience blend to become wisdom--the power of applying these attributes critically or practically to make decisions.
Thus, we are led to think about Data, Information, Knowledge, and Wisdom as we explore the capabilities and limitations of IT systems.

The pictures below may help in understanding the differences among these concepts.

(https://www.researchgate.net/publication/332400827/figure/fig6/AS:747208399912965@1555159773957/The-data-information-knowledge-wisdom-DIKW-hierarchy-as-a-pyramid-to-manage-knowledge.ppm)

(https://www.i-scoop.eu/wp-content/uploads/2016/07/The-traditional-data-information-knowledge-wisdom-pyramid-source-Mushon.gif.webp)

(https://www.i-scoop.eu/wp-content/uploads/2016/07/DIKW-through-the-eyes-of-IoT-company-AGT-as-mentioned-on-Electronics-360.gif.webp)

Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 23/03/2022 13:36:31
Here are some other diagrams.
(https://www.researchgate.net/profile/Bl-Wong/publication/272242493/figure/fig2/AS:339539152392194@1457963850280/Two-Perspectives-on-Data-Information-Knowledge-Wisdom-DIKW-In-practice-of-course.png)

(https://www.slideteam.net/media/catalog/product/cache/1280x720/d/a/data_information_knowledge_wisdom_structure_with_future_and_past_context_slide01.jpg)

(https://www.researchgate.net/publication/334677207/figure/fig1/AS:784544584200197@1564061413587/DIKW-pyramid-data-to-wisdom-flow-of-knowledge-and-information-self-creation.jpg)

(https://www.researchgate.net/profile/Julio-Facelli-2/publication/271703671/figure/fig2/AS:667693057339399@1536201838406/The-DIKW-Data-Information-Knowledge-Wisdom-pyramid-ALT-alanine-aminotransferase-test.ppm)

(https://www.thinknpc.org/wp-content/uploads/2017/08/Data-information-knowledge-wisdom-model.jpg)

In the end, the purpose of building a virtual universe is to gain wisdom that helps achieve the universal terminal goal.
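The plant-species example quoted a few posts above can be walked up the pyramid in a few lines; all numbers here are invented.
Code:
# Toy DIKW climb: raw counts -> contextualized information -> a pattern
# (knowledge) -> a decision (a stand-in for wisdom).
raw_data = [("siteA", 2020, 120), ("siteA", 2021, 80), ("siteA", 2022, 50)]

# Information: the raw counts placed in context (site and year)
information = {(site, year): count for site, year, count in raw_data}

# Knowledge: a pattern extracted from the information (a declining trend)
counts = [count for _, _, count in raw_data]
declining = all(a > b for a, b in zip(counts, counts[1:]))

# "Wisdom": applying the knowledge to make a decision
decision = "protect the habitat" if declining else "keep monitoring"
print(declining, decision)   # -> True protect the habitat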
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 04/04/2022 11:12:00
Quote
https://www.science.org/doi/10.1126/science.abj5089
Epigenetic patterns in a complete human genome
Abstract
The completion of a telomere-to-telomere human reference genome, T2T-CHM13, has resolved complex regions of the genome, including repetitive and homologous regions. Here, we present a high-resolution epigenetic study of previously unresolved sequences, representing entire acrocentric chromosome short arms, gene family expansions, and a diverse collection of repeat classes. This resource precisely maps CpG methylation (32.28 million CpGs), DNA accessibility, and short-read datasets (166,058 previously unresolved chromatin immunoprecipitation sequencing peaks) to provide evidence of activity across previously unidentified or corrected genes and reveals clinically relevant paralog-specific regulation. Probing CpG methylation across human centromeres from six diverse individuals generated an estimate of variability in kinetochore localization. This analysis provides a framework with which to investigate the most elusive regions of the human genome, granting insights into epigenetic regulation.
It's a step closer to precise genetic engineering. Is there a limit we shouldn't cross?
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 09/04/2022 07:28:05
Here's more impressive progress in AI.
Quote
OpenAI (@OpenAI) tweeted at 9:07 PM on Wed, Apr 06, 2022:
Our newest system DALL·E 2 can create realistic images and art from a description in natural language. See it here: https://t.co/Kmjko82YO5 https://t.co/QEh9kWUE8A
(https://twitter.com/OpenAI/status/1511707245536428034?t=u1xywMQQXbQTgV4AM_ceHA&s=03)

Quote
DALL·E 2 is a new AI system that can create realistic images and art from a description in natural language.

DALL·E 2 has learned the relationship between images and the text used to describe them. It uses a process called “diffusion,” which starts with a pattern of random dots and gradually alters that pattern towards an image when it recognizes specific aspects of that image.
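Here is a cartoon of that reverse process: start from random dots and repeatedly nudge the pattern toward an image. A real diffusion model learns the denoising step from data; this toy "denoiser" cheats by knowing the target, purely to show the shape of the procedure.
Code:
import numpy as np

rng = np.random.default_rng(0)
target = np.zeros((8, 8))
target[2:6, 2:6] = 1.0                      # a tiny "image": a bright square
x = rng.normal(size=(8, 8))                 # start: a pattern of random dots

for step in range(10):
    x = 0.8 * x + 0.2 * target              # one "denoising" nudge
    print(f"step {step}: mean error {np.abs(x - target).mean():.3f}")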
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 09/04/2022 07:43:33
The Bitter Lesson in AI research.
Quote
http://www.incompleteideas.net/IncIdeas/BitterLesson.html?s=03

The Bitter Lesson
Rich Sutton
March 13, 2019
The biggest lesson that can be read from 70 years of AI research is that general methods that leverage computation are ultimately the most effective, and by a large margin. The ultimate reason for this is Moore's law, or rather its generalization of continued exponentially falling cost per unit of computation. Most AI research has been conducted as if the computation available to the agent were constant (in which case leveraging human knowledge would be one of the only ways to improve performance) but, over a slightly longer time than a typical research project, massively more computation inevitably becomes available. Seeking an improvement that makes a difference in the shorter term, researchers seek to leverage their human knowledge of the domain, but the only thing that matters in the long run is the leveraging of computation. These two need not run counter to each other, but in practice they tend to. Time spent on one is time not spent on the other. There are psychological commitments to investment in one approach or the other. And the human-knowledge approach tends to complicate methods in ways that make them less suited to taking advantage of general methods leveraging computation.  There were many examples of AI researchers' belated learning of this bitter lesson, and it is instructive to review some of the most prominent.

The bitter lesson is based on the historical observations that 1) AI researchers have often tried to build knowledge into their agents, 2) this always helps in the short term, and is personally satisfying to the researcher, but 3) in the long run it plateaus and even inhibits further progress, and 4) breakthrough progress eventually arrives by an opposing approach based on scaling computation by search and learning. The eventual success is tinged with bitterness, and often incompletely digested, because it is success over a favored, human-centric approach.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 10/04/2022 09:06:53
Here's more impressive progress in AI.
Here's the video.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 28/04/2022 13:33:55
8 Illusions That Explain How You Create Reality


Quote
Optical illusions are fun, but they can also teach us a lot about how our brains work. In particular, how our brains accomplish the incredible feat of constructing a three-dimensional reality using nothing but 2-D images from our eyes. A young artist and psychology researcher named Adelbert Ames, Jr. developed a series of illusions that help us understand how this process of constructing reality actually works. Sometimes we need to be fooled in order to gain understanding.

Unconsciously, we build virtual universes in our brains.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 06/05/2022 08:41:14
The Bitter Lesson in AI research.
Some other AI researchers don't seem to agree with the conclusion above. For example:
Quote
https://nautil.us/deep-learning-is-hitting-a-wall-14467/

In November 2020, Hinton told MIT Technology Review that “deep learning is going to be able to do everything.”4

I seriously doubt it. In truth, we are still a long way from machines that can genuinely understand human language, and nowhere near the ordinary day-to-day intelligence of Rosey the Robot, a science-fiction housekeeper that could not only interpret a wide variety of human requests but safely act on them in real time. Sure, Elon Musk recently said that the new humanoid robot he was hoping to build, Optimus, would someday be bigger than the vehicle industry, but as of Tesla’s AI Demo Day 2021, in which the robot was announced, Optimus was nothing more than a human in a costume. Google’s latest contribution to language is a system (Lamda) that is so flighty that one of its own authors recently acknowledged it is prone to producing “bullshit.”5  Turning the tide, and getting to AI we can really trust, ain’t going to be easy.

In time we will see that deep learning was only a tiny part of what we need to build if we’re ever going to get trustworthy AI.
Quote
What should we do about it? One option, currently trendy, might be just to gather more data. Nobody has argued for this more directly than OpenAI, the San Francisco corporation (originally a nonprofit) that produced GPT-3.

In 2020, Jared Kaplan and his collaborators at OpenAI suggested that there was a set of “scaling laws” for neural network models of language; they found that the more data they fed into their neural networks, the better those networks performed.10 The implication was that we could do better and better AI if we gather more data and apply deep learning at increasingly large scales. The company’s charismatic CEO Sam Altman wrote a triumphant blog post trumpeting “Moore’s Law for Everything,” claiming that we were just a few years away from “computers that can think,” “read legal documents,” and (echoing IBM Watson) “give medical advice.”

For the first time in 40 years, I finally feel some optimism about AI.

Maybe, but maybe not. There are serious holes in the scaling argument. To begin with, the measures that have scaled have not captured what we desperately need to improve: genuine comprehension. Insiders have long known that one of the biggest problems in AI research is the tests (“benchmarks”) that we use to evaluate AI systems. The well-known Turing Test, aimed at measuring genuine intelligence, turns out to be easily gamed by chatbots that act paranoid or uncooperative. Scaling the measures Kaplan and his OpenAI colleagues looked at—about predicting words in a sentence—is not tantamount to the kind of deep comprehension true AI would require.

What’s more, the so-called scaling laws aren’t universal laws like gravity but rather mere observations that might not hold forever, much like Moore’s law, a trend in computer chip production that held for decades but arguably began to slow a decade ago.11

Indeed, we may already be running into scaling limits in deep learning, perhaps already approaching a point of diminishing returns. In the last several months, research from DeepMind and elsewhere on models even larger than GPT-3 has shown that scaling starts to falter on some measures, such as toxicity, truthfulness, reasoning, and common sense.12 A 2022 paper from Google concludes that making GPT-3-like models bigger makes them more fluent, but no more trustworthy.13

Such signs should be alarming to the autonomous-driving industry, which has largely banked on scaling, rather than on developing more sophisticated reasoning. If scaling doesn’t get us to safe autonomous driving, tens of billions of dollars of investment in scaling could turn out to be for naught.
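The point that scaling "laws" are fitted observations rather than laws of nature can be shown in a few lines. The loss numbers below are invented, and nothing in the fit guarantees the trend continues outside the observed range.
Code:
import numpy as np

N = np.array([1e6, 1e7, 1e8, 1e9])          # dataset sizes
L = 4.0 * N ** -0.1                         # pretend measured losses
# fit log L = log a - b log N, i.e. a power law L = a * N**(-b)
slope, intercept = np.polyfit(np.log(N), np.log(L), 1)
a, b = np.exp(intercept), -slope
print(f"fit: L = {a:.2f} * N^(-{b:.2f})")
print("extrapolated loss at N=1e12:", a * 1e12 ** -b)
# The extrapolation is only as good as the assumption that the power law
# keeps holding, which is exactly what the article questions.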
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 07/05/2022 04:34:09
Meta's open-source new model OPT is GPT-3's closest competitor!


This is good news for the AI community.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 12/05/2022 08:34:19
Some other AI researchers don't seem to agree with the conclusion above.
Those who insist that human intervention is necessary to build AGI need to identify what kind of intervention is required, and why it can't be automated. They also seem to assume that current AI systems can't restructure the data they already have in light of newer data.
An AGI needs to be able to filter out false and bad data, and to produce necessary new data by planning and executing observations, surveys, or experiments.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 14/05/2022 22:23:20
Quote
https://www.zdnet.com/article/microsoft-veteran-bob-muglia-relational-knowledge-graphs-will-transform-business/

Microsoft veteran Bob Muglia: Relational knowledge graphs will transform business
'We're at the start of a whole new era' with knowledge graphs, says Microsoft veteran Bob Muglia, akin to the arrival of the modern data stack in 2013.


Bob Muglia says twenty years of work on database innovation will bring the relational calculus of E.F. Codd to knowledge graphs, what he calls "relational knowledge graphs," to revolutionize business analysis.

Bob Muglia is something of a bard of databases, capable of unfurling sweeping tales in the evolution of technology.

That is what Muglia, former Microsoft executive and former Snowflake CEO, did Wednesday morning during his keynote address at The Knowledge Graph Conference in New York.

The subject of his talk, "From the Modern Data Stack to Knowledge Graphs," united roughly fifty years of database technology in one new form.

The basic story is this: Five companies have created modern data analytics platforms: Snowflake, Amazon, Databricks, Google, and Azure. But those data analytics platforms can't do business analytics, including, most importantly, representing the rules that underlie compliance and governance.

"The industry knows this is a problem," said Muglia. The five platforms, he said, representing "the modern data stack, have allowed a "new generation of these very, very important data apps to be built." However, "When we look at the modern data stack, and we look at what we can do effectively and what we can't do effectively, I would say the number one problem that customers are having with all five of these platforms is governance." 

"So, if you wanted to perform a query to say, 'Hey, tell me all of the resources that Fred Jones has access to in this organization' — that's a hard query to write," he said. "In fact, it's a query that probably can't execute effectively on any modern SQL database if the organization is very large and complex."

The problem, said Muglia, was that the algorithms based off of structured query language, or SQL, can't do such complex "recursive" queries.
He described the problem I faced when I started this thread.
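The "what can Fred Jones access" query that Muglia calls hard for SQL is, at bottom, a transitive-closure question over a graph. Here is a toy sketch with an invented access graph; a relational knowledge graph engine would answer this kind of recursive query declaratively.
Code:
from collections import deque

edges = {                                  # "x has access to y" (invented)
    "fred.jones": ["eng-group"],
    "eng-group": ["repo-a", "wiki"],
    "repo-a": ["build-secrets"],
}

def reachable(start):
    # breadth-first traversal = transitive closure of the access relation
    seen, queue = set(), deque([start])
    while queue:
        for nxt in edges.get(queue.popleft(), []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

print(reachable("fred.jones"))
# -> {'eng-group', 'repo-a', 'wiki', 'build-secrets'}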
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 20/05/2022 05:16:26
Here's an interesting twitter thread from Yann LeCun, the chief AI scientist at Meta, and one of AI pioneers.
Quote
https://twitter.com/ylecun/status/1526672565233758213?t=ryNVncrigCsgvQqm_oQFUA&s=03

About the raging debate regarding the significance of recent progress in AI, it may be useful to (re)state a few obvious facts:

(0) there is no such thing as AGI. Reaching "Human Level AI" may be a useful goal, but even humans are specialized.
1/N

(1) the research community is making *some* progress towards HLAI
(2) scaling up helps. It's necessary but not sufficient, because....
(3) we are still missing some fundamental concepts
2/N

(4) some of those new concepts are possibly "around the corner" (e.g. generalized self-supervised learning)
(5) but we don't know how many such new concepts are needed. We just see the most obvious ones.
(6) hence, we can't predict how long it's going to take to reach HLAI.
3/N

I really don't think it's just a matter of scaling things up.
We still don't have a learning paradigm that allows machines to learn how the world works, like humans and many non-human babies do.
4/N

Some may believe scaling up a giant transformer trained on sequences of tokenized inputs is enough.
Others believe "reward is enough".
Yet others believe that explicit symbol manipulation is necessary.
A few don't believe gradient-based learning is part of the solution.
5/N

I believe we need to find new concepts that would allow machines to:
- learn how the world works by observing like babies.
- learn to predict how one can influence the world through taking actions.
6/N

- learn hierarchical representations that allow long-term predictions in abstract spaces.
- properly deal with the fact that the world is not completely predictable.
- enable agents to predict the effects of sequences of actions so as to be able to reason & plan
7/N

- enable machines to plan hierarchically, decomposing a complex task into subtasks.
- all of this in ways that are compatible with gradient-based learning.

The solution is not just around the corner.
We have a number of obstacles to clear, and we don't know how.
8/N


Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 22/05/2022 14:50:51
Reaching "Human Level AI" may be a useful goal, but even humans are specialized.
If we expect AI to behave like humans, we must at least give it access to the same data that humans have. An obvious advantage average humans have over current AI is the ability to interact with objective reality, both as input and output. That interaction enables us to infer cause-and-effect relationships and build models of the universe.