Naked Science Forum

On the Lighter Side => New Theories => Topic started by: hamdani yusuf on 21/09/2019 09:50:36

Title: How close are we from building a virtual universe?
Post by: hamdani yusuf on 21/09/2019 09:50:36
This thread is another spinoff from my earlier thread called universal utopia. This time I try to attack the problem from another angle: the information-theory point of view.
I have started another thread related to this subject, asking about the quantification of accuracy and precision. We need to be able to compare the available methods for describing some aspect of objective reality and choose the best option based on cost-benefit considerations. I thought this was already common knowledge, but the course of the discussion showed it wasn't the case. I guess I'll have to build a new theory for that. It's unfortunate that the thread has been removed, so new forum members can't explore how the discussion developed.
In my professional job, I deal with process control and automation, and with the engineering and maintenance of electrical and instrumentation systems. It's important for us to explore the leading technologies and use them to our advantage to survive the fierce industrial competition of Industry 4.0. One of the technologies most closely related to this thread is the digital twin.
https://en.m.wikipedia.org/wiki/Digital_twin

Just like my other spinoff discussing universal morality, which can be reached by expanding the groups that develop their own subjective morality to the maximum extent permitted by logic, here I also try to expand the scope of the virtualization of real-world objects, like the digital twin in the industrial sector, to cover other fields as well. Hopefully it will lead us to global governance, because all conscious beings known today share the same planet. In the future the scope will need to expand even further, because the exploration of other planets and solar systems is already under way.

Title: Re: How close are we from building a virtual universe?
Post by: jeffreyH on 21/09/2019 11:06:32
How detailed should the virtual universe be? Does it only include the observable universe? Depending upon the detail and scale it could require more information to describe it than the universe actually contains.

A better model would study a well defined region of the universe such as a galaxy cluster. However, this would still depend upon the level of detail.
Title: Re: How close are we from building a virtual universe?
Post by: evan_au on 21/09/2019 22:37:58
There is a definite tradeoff between level of detail, computer power and memory storage.

If you have a goal of studying the general shape of the universe, it is important to have dark matter and normal matter (which clumps into galaxies). But modelling individual stars is not needed.

If you are studying the shape of the galaxy, you don't need to model the lifecycle of the individual stars.

If you are studying the orbits of the planets around the Sun, you don't need to model whether or not Earth hosts life.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 22/09/2019 04:12:47
Building a virtual universe is just an instrumental goal, meant to increase the chance of achieving the terminal goal by improving the effectiveness and efficiency of our actions. I discussed these goals in another thread about universal morality. Here I want to discuss the technical issues.
Efforts to virtualize objective reality have already started, but currently they are mostly partial, limited by location or function. Some examples are Google Maps, ERP software, online banking, e-commerce, Wikipedia, cryptocurrency, CAD software, SCADA, and social media.
Their lack of integration may lead to data duplication when different systems represent the same object from different points of view. When they are not updated simultaneously, they will produce inconsistencies, which may lead to incorrect decisions.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 22/09/2019 04:23:45
The level of detail can vary, depending on the significance of the object. In Google Earth, big cities can be zoomed to less than 1 meter per pixel, while deserts and oceans have much coarser detail.
Title: Re: How close are we from building a virtual universe?
Post by: jeffreyH on 22/09/2019 13:56:17
Building a virtual universe is just an instrumental goal, meant to increase the chance of achieving the terminal goal by improving the effectiveness and efficiency of our actions. I discussed these goals in another thread about universal morality. Here I want to discuss the technical issues.
Efforts to virtualize objective reality have already started, but currently they are mostly partial, limited by location or function. Some examples are Google Maps, ERP software, online banking, e-commerce, Wikipedia, cryptocurrency, CAD software, and SCADA.
Their lack of integration may lead to data duplication when different systems represent the same object from different points of view. When they are not updated simultaneously, they will produce inconsistencies, which may lead to incorrect decisions.

You are talking about disparate systems. They are also human-centric, not universe-centric.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 23/09/2019 04:34:54
Building a virtual universe is just an instrumental goal, meant to increase the chance of achieving the terminal goal by improving the effectiveness and efficiency of our actions. I discussed these goals in another thread about universal morality. Here I want to discuss the technical issues.
Efforts to virtualize objective reality have already started, but currently they are mostly partial, limited by location or function. Some examples are Google Maps, ERP software, online banking, e-commerce, Wikipedia, cryptocurrency, CAD software, and SCADA.
Their lack of integration may lead to data duplication when different systems represent the same object from different points of view. When they are not updated simultaneously, they will produce inconsistencies, which may lead to incorrect decisions.

You are talking about disparate systems. They are also human-centric, not universe-centric.
They are disparate now, but there are already efforts to integrate them. Some ERP systems have been connected to plant information management systems, which in turn can be connected to SCADA, DCS, PLC, and even smart field devices such as transmitters, control valve positioners, and variable speed drives.
What we need is a common platform that stores this information in the same or a compatible format, so that any update in one subsystem is automatically propagated to related subsystems to guarantee data integrity. The common platform must also take care of user accountability and data accessibility.
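To illustrate the kind of propagation I have in mind, here is a minimal sketch in Python, assuming a simple publish/subscribe pattern. All the names (CommonPlatform, the pump tag) are hypothetical, not taken from any real product:

Code:
# Minimal publish/subscribe sketch: one update notifies every subsystem
# that mirrors the same object. Hypothetical illustration only.

class CommonPlatform:
    def __init__(self):
        self.subscribers = {}   # object_id -> list of callbacks

    def subscribe(self, object_id, callback):
        self.subscribers.setdefault(object_id, []).append(callback)

    def update(self, object_id, field, value):
        # Propagate one change to every subsystem sharing this object.
        for callback in self.subscribers.get(object_id, []):
            callback(object_id, field, value)

platform = CommonPlatform()
platform.subscribe("pump-101", lambda oid, f, v: print(f"SCADA: {oid}.{f} = {v}"))
platform.subscribe("pump-101", lambda oid, f, v: print(f"ERP:   {oid}.{f} = {v}"))
platform.update("pump-101", "status", "running")  # both views stay consistent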
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 24/09/2019 11:20:57
Building a high-precision virtualization of objective reality takes a lot of resources in the form of data storage and communication bandwidth. Hence the system needs to maximize information density.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 10/10/2019 14:20:30
My basic idea for building a virtual universe is to represent physical objects as nodes, which are then organized in a hierarchical structure. It is like the Unix principle where everything is a file; here, everything is a node.
To address the is-ought problem, another hierarchical structure is created to represent desired/designed conditions.
A relationship table is created to record the assignment of physical objects to designed objects. It also stores additional relationship types between them where necessary. Further relationship tables are added to capture relationships among nodes outside the main hierarchical structures.
Another hierarchical structure is created to represent activities/events, which are basically any changes to nodes in the hierarchies of physical and desired objects. The activity nodes have timestamps for start and finish.
I have built a prototype of this system based on a DCS configuration database, which I then expanded to accommodate other things beyond I/O assignments, the physical network, and control strategies.
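A minimal sketch of that data model in Python, with hypothetical names; a real system would use a database, but the structure is the same:

Code:
# Sketch of the node-based model described above: physical nodes,
# design nodes, a relationship table linking them, and timestamped
# activity records capturing changes. Names are illustrative only.
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class Node:
    name: str
    parent: "Node | None" = None
    children: list = field(default_factory=list)

    def add(self, child: "Node") -> "Node":
        child.parent = self
        self.children.append(child)
        return child

# Two parallel hierarchies: as-built (physical) and as-designed (ought).
physical_root = Node("plant")
pump = physical_root.add(Node("pump-101"))
design_root = Node("design")
pump_spec = design_root.add(Node("pump-101-spec"))

# Relationship table: assignment of physical objects to designed objects.
relationships = [(pump, pump_spec, "implements")]

# Activity node: a change to the hierarchies, with start/finish timestamps.
activities = [{"action": "install", "node": pump.name,
               "start": datetime(2019, 10, 1), "finish": datetime(2019, 10, 2)}]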
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 23/01/2020 07:10:31
The universe as we know it is a dynamic system, which means it changes over time. So, for a virtual universe to be useful, it also needs to be a dynamic system. Static systems such as paper maps or ancient cave paintings can only serve limited, narrow purposes.
Title: Re: How close are we from building a virtual universe?
Post by: Bored chemist on 23/01/2020 07:16:04
There is a virtual universe in your head.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 23/01/2020 07:21:49
Building a high-precision virtualization of objective reality takes a lot of resources in the form of data storage and communication bandwidth. Hence the system needs to maximize information density.
Here is an interesting excerpt from Ray Kurzweil's book "The Singularity Is Near" regarding order and complexity, which are closely related to information density.
Quote
Not surprisingly, the concept of complexity is complex. One concept of complexity is the minimum amount of information required to represent a process. Let's say you have a design for a system (for example, a computer program or a computer-assisted design file for a computer), which can be described by a data file containing one million bits. We could say your design has a complexity of one million bits. But suppose we notice that the one million bits actually consist of a pattern of one thousand bits that is repeated one thousand times. We could note the repetitions, remove the repeated patterns, and express the entire design in just over one thousand bits, thereby reducing the size of the file by a factor of about one thousand.

The most popular data-compression techniques use similar methods of finding redundancy within information.[3] But after you've compressed a data file in this way, can you be absolutely certain that there are no other rules or methods that might be discovered that would enable you to express the file in even more compact terms? For example, suppose my file was simply "pi" (3.1415...) expressed to one million bits of precision. Most data-compression programs would fail to recognize this sequence and would not compress the million bits at all, since the bits in a binary expression of pi are effectively random and thus have no repeated pattern according to all tests of randomness.

But if we can determine that the file (or a portion of the file) in fact represents pi, we can easily express it (or that portion of it) very compactly as "pi to one million bits of accuracy." Since we can never be sure that we have not overlooked some even more compact representation of an information sequence, any amount of compression sets only an upper bound for the complexity of the information. Murray Gell-Mann provides one definition of complexity along these lines. He defines the "algorithmic information content" (AIC) of a set of information as "the length of the shortest program that will cause a standard universal computer to print out the string of bits and then halt."[4]

However, Gell-Mann's concept is not fully adequate. If we have a file with random information, it cannot be compressed. That observation is, in fact, a key criterion for determining if a sequence of numbers is truly random. However, if any random sequence will do for a particular design, then this information can be characterized by a simple instruction, such as "put random sequence of numbers here." So the random sequence, whether it's ten bits or one billion bits, does not represent a significant amount of complexity, because it is characterized by a simple instruction. This is the difference between a random sequence and an unpredictable sequence of information that has purpose.

To gain some further insight into the nature of complexity, consider the complexity of a rock. If we were to characterize all of the properties (precise location, angular momentum, spin, velocity, and so on) of every atom in the rock, we would have a vast amount of information. A one-kilogram (2.2-pound) rock has 10^25 atoms which, as I will discuss in the next chapter, can hold up to 10^27 bits of information. That's one hundred million billion times more information than the genetic code of a human (even without compressing the genetic code).[5] But for most common purposes, the bulk of this information is largely random and of little consequence. So we can characterize the rock for most purposes with far less information just by specifying its shape and the type of material of which it is made. Thus, it is reasonable to consider the complexity of an ordinary rock to be far less than that of a human even though the rock theoretically contains vast amounts of information.[6]

One concept of complexity is the minimum amount of meaningful, non-random, but unpredictable information needed to characterize a system or process.

In Gell-Mann's concept, the AIC of a million-bit random string would be about a million bits long. So I am adding to Gell-Mann's AIC concept the idea of replacing each random string with a simple instruction to "put random bits here."

However, even this is not sufficient. Another issue is raised by strings of arbitrary data, such as names and phone numbers in a phone book, or periodic measurements of radiation levels or temperature. Such data is not random, and data-compression methods will only succeed in reducing it to a small degree. Yet it does not represent complexity as that term is generally understood. It is just data. So we need another simple instruction to "put arbitrary data sequence here."

To summarize my proposed measure of the complexity of a set of information, we first consider its AIC as Gell-Mann has defined it. We then replace each random string with a simple instruction to insert a random string. We then do the same for arbitrary data strings. Now we have a measure of complexity that reasonably matches our intuition.
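The upper-bound point in the excerpt is easy to demonstrate with a general-purpose compressor. A minimal sketch using only Python's standard library; os.urandom stands in for the digits of pi, since a compressor treats both as patternless:

Code:
import zlib, os

# A 10-byte pattern repeated 1,000 times: highly ordered, low complexity.
repetitive = b"0001011101" * 1000
# Random bytes stand in for pi's bits: they pass randomness tests, so a
# general-purpose compressor finds no structure, even though pi itself
# has a tiny description ("pi to one million bits").
random_like = os.urandom(10_000)

for label, data in [("repetitive", repetitive), ("random-looking", random_like)]:
    ratio = len(zlib.compress(data, 9)) / len(data)
    print(f"{label}: compressed to {ratio:.1%} of original size")

# The repetitive file shrinks by a factor of roughly a thousand; the
# random-looking one does not shrink at all. Compression therefore only
# proves an upper bound on complexity, never the true minimum.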
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 23/01/2020 07:25:54
There is a virtual universe in your head.
Indeed, but it only covers a small portion of even the currently observable universe. A lot of the information that I once knew has already been lost. In order to be useful for predicting events in the far future, we need a much larger and more complex system.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 24/01/2020 08:22:35
Regarding the original question, it turns out that Ray Kurzweil has already predicted the answer: around the middle of this century.

Quote
Kurzweil claims that technological progress follows a pattern of exponential growth, following what he calls the "law of accelerating returns". Whenever technology approaches a barrier, Kurzweil writes, new technologies will surmount it. He predicts paradigm shifts will become increasingly common, leading to "technological change so rapid and profound it represents a rupture in the fabric of human history".[39] Kurzweil believes that the singularity will occur by approximately 2045.[40] His predictions differ from Vinge's in that he predicts a gradual ascent to the singularity, rather than Vinge's rapidly self-improving superhuman intelligence.
https://en.wikipedia.org/wiki/Technological_singularity#Accelerating_change
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 03/02/2020 11:02:21
Objective reality contains a lot of objects with complex relationships among them. Hence, to build a virtual universe we must use a method capable of storing data that represents this complex system. The obvious choice is graphs, which are mathematical structures used to model pairwise relations between objects. A graph in this context is made up of vertices (also called nodes or points) which are connected by edges (also called links or lines).

https://en.wikipedia.org/wiki/Graph_theory
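As a toy illustration, here is such a graph in Python, using plain dictionaries rather than any particular graph library; the objects and relation names are made up:

Code:
# A directed graph as an adjacency structure: nodes are objects,
# labelled edges are relationships between them.
graph = {
    "boiler-1":    [("feeds", "turbine-1")],
    "turbine-1":   [("drives", "generator-1")],
    "sensor-7":    [("measures", "boiler-1")],
    "generator-1": [],
}

def neighbors(node):
    """Return (relation, target) pairs for a node."""
    return graph.get(node, [])

for relation, target in neighbors("boiler-1"):
    print(f"boiler-1 --{relation}--> {target}")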
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 28/02/2020 10:02:07
https://www.technologyreview.com/s/615189/what-ai-still-cant-do/
Quote
Artificial intelligence won’t be very smart if computers don’t grasp cause and effect. That’s something even humans have trouble with.

In less than a decade, computers have become extremely good at diagnosing diseases, translating languages, and transcribing speech. They can outplay humans at complicated strategy games, create photorealistic images, and suggest useful replies to your emails.
Yet despite these impressive achievements, artificial intelligence has glaring weaknesses.

Machine-learning systems can be duped or confounded by situations they haven’t seen before. A self-driving car gets flummoxed by a scenario that a human driver could handle easily. An AI system laboriously trained to carry out one task (identifying cats, say) has to be taught all over again to do something else (identifying dogs). In the process, it’s liable to lose some of the expertise it had in the original task. Computer scientists call this problem “catastrophic forgetting.”

These shortcomings have something in common: they exist because AI systems don’t understand causation. They see that some events are associated with other events, but they don’t ascertain which things directly make other things happen. It’s as if you knew that the presence of clouds made rain likelier, but you didn’t know clouds caused rain.
Quote
AI can’t be truly intelligent until it has a rich understanding of cause and effect, which would enable the introspection that is at the core of cognition.
Judea Pearl
A virtual universe can map commonly known cause-and-effect relationships to be used as a library by AI agents, which will save a lot of the time otherwise spent training each new AI agent from scratch every time one is assigned.
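A hypothetical sketch of what such a shared cause-and-effect library could look like: a directed graph of "A causes B" links that any agent can query instead of relearning the links from data.

Code:
# Hypothetical shared cause-and-effect library as a directed graph.
from collections import defaultdict

causes = defaultdict(list)    # effect -> list of known causes
effects = defaultdict(list)   # cause  -> list of known effects

def add_causal_link(cause, effect):
    effects[cause].append(effect)
    causes[effect].append(cause)

add_causal_link("clouds", "rain")
add_causal_link("rain", "wet ground")

def downstream(cause, depth=3):
    """Everything a cause can lead to, up to a fixed search depth."""
    if depth == 0:
        return set()
    found = set(effects[cause])
    for e in effects[cause]:
        found |= downstream(e, depth - 1)
    return found

print(downstream("clouds"))   # {'rain', 'wet ground'}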
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 09/03/2020 07:02:22
To achieve generality, an AI is required to adapt to a wide range of situations. It would be better to have a modular structure for frequently used basic functions, similar to the configuration of naturally occurring brains. It must have some flexibility over its own hyperparameters, which might require changes when executing different tasks.
To maintain its own integrity, and to fight off data corruption or cyber attacks, the AI needs to spend some of its data storage and processing capacity representing its own structure. This will create some sort of self-awareness, which is a step toward artificial consciousness.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 14/04/2020 09:54:24
The article below has reminded me once again of the importance of having a universal modelling system/platform.
Quote
COBOL, a 60-year-old computer language, is in the COVID-19 spotlight
As state governments seek to fix overwhelmed unemployment benefit systems, they need programmers skilled in a language that was passé by the early 1980s.

Some states have found themselves in need of people who know a 60-year-old programming language called COBOL to retrofit the antiquated government systems now struggling to process the deluge of unemployment claims brought by the coronavirus crisis.

The states of Kansas, New Jersey, and Connecticut all experienced technical meltdowns after a stunning 6.6 million Americans filed for unemployment benefits last week.

They might not have an easy time finding the programmers they need. There just aren’t that many people around these days who know COBOL, or Common Business-Oriented Language. Most universities stopped teaching the language back in the 1980s. COBOL is considered a relic by younger coders.

“There’s really no good reason to learn COBOL today, and there was really no good reason to learn it 20 years ago,” says UCLA computer science professor Peter Reiher. “Most students today wouldn’t have ever even heard of COBOL.”

Meanwhile, because many banks, large companies, and government agencies still use the language in their legacy systems, there’s plenty of demand for COBOL programmers. A search for “COBOL Developer” returned 568 jobs on Indeed.com. COBOL developers make anywhere from $40 to more than $100 per hour.

Kansas governor Laura Kelly said the Kansas Department of Labor was in the process of migrating systems from COBOL to a newer language, but that the effort was postponed by the virus. New Jersey governor Phil Murphy wondered why such an old language was being used on vital state government systems, and classed it with the many weaknesses in government systems the virus has revealed.

The truth is, organizations often hesitate to change those old systems because they still work, and migrating to new systems is expensive. Massive upgrades also involve writing new code, which may contain bugs, Reiher says. In the worst-case scenario, bugs might cause the loss of customer financial data being moved from the old system to the new.
IT STILL WORKS (MOSTLY)
COBOL, though ancient, is still considered stable and reliable—at least under normal conditions.

The current glitches with state unemployment systems are “probably not a specific flaw in the COBOL language or in the underlying implementation,” Reiher says. The problem is more likely that some states are asking their computer systems to work with data at a far higher scale, he said, making the systems do things they’ve never been asked to do.

COBOL was developed in the early 1960s by computer scientists from universities, mainframe manufacturers, the defense and banking industries, and government. Based on ideas developed by programming pioneer Grace Hopper, it was driven by the need for a language that could run on a variety of different kinds of mainframes.

“It was developed to do specific kinds of things like inventory and payroll and accounts receivable,” Reiher told me. “It was widely used in 1960s by a lot of banks and government agencies when they first started automating their systems.”

Here in the 21st century, COBOL is still quietly doing those kinds of things. Millions of lines of COBOL code still run on mainframes used in banks and a number of government agencies, including the Department of Veterans Affairs, Department of Justice, and Social Security Administration. A 2017 Reuters report said 43% of banking systems still use COBOL.

But the move to newer languages such as Java, C, and Python is making its way through industries of all sorts, and will eventually be used in new systems used by banks and government. One key reason for the migration is that mobile platforms use newer languages, and they rely on tight integration with underlying systems to work the way users expect.

The coronavirus will be a catalyst for a lot of changes in the coming years, some good, some bad. The migration away from the programming languages of another era may be one of the good ones.

https://www.fastcompany.com/90488862/what-is-cobol

My previous job as a system integrator gave me first-hand experience with this issue. Most of the projects I handled were migrations from an old/obsolete system to a newer one (mostly DCS). The most obvious advantage of these projects is that we had a system that was still working. The challenge was translating the source code of the old system into the new one. When it couldn't be translated in one-to-one correspondence, we needed to use a process control narrative as an intermediary. Often we couldn't get access to the source code at all, due to the age of the system, missing documentation such as hardcopies of ladder diagrams, function block diagrams, sequential function charts, or proprietary scripts, or due to corrupted floppy disks. So we had to rely on additional information from the process operators and supervisors about how the system was supposed to work.
In greenfield projects, on the other hand, there is no existing source code, so we have to translate from the control narratives provided by the process engineers. There is no guarantee that the system will work as intended. Often we had to make tweaks, adjustments, and even major modifications during project commissioning.
If only we had a universal modelling system/platform, we could save a lot of the time and effort needed to finish such projects. System migrations could then be done automatically.
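The core of such a platform would be a vendor-neutral intermediate representation of the control strategy. A toy sketch of the idea; the format and the emitted pseudocode are invented for illustration:

Code:
# Toy vendor-neutral description of one control loop. A real migration
# would need far richer semantics; this only illustrates the idea.
loop = {
    "tag": "FIC-101",
    "type": "PID",
    "pv": "FT-101",      # process variable source
    "out": "FV-101",     # control valve
    "tuning": {"Kp": 1.2, "Ti": 30.0, "Td": 0.0},
}

def emit_pseudo_st(loop):
    """Emit an IEC 61131-3-flavoured structured-text line from the model."""
    t = loop["tuning"]
    return (f"{loop['tag']} := PID(PV := {loop['pv']}, "
            f"KP := {t['Kp']}, TI := {t['Ti']});\n"
            f"{loop['out']} := {loop['tag']};")

print(emit_pseudo_st(loop))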
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 28/04/2020 05:29:32
Progress toward better AI and eventually AGI will bring us closer to the realization of Laplace's demon, which has already been predicted in the form of the technological singularity.
Quote
The better we can predict, the better we can prevent and pre-empt. As you can see, with neural networks, we’re moving towards a world of fewer surprises. Not zero surprises, just marginally fewer. We’re also moving toward a world of smarter agents that combine neural networks with other algorithms like reinforcement learning to attain goals.
https://pathmind.com/wiki/neural-network
Quote
In some circles, neural networks are thought of as “brute force” AI, because they start with a blank slate and hammer their way through to an accurate model. They are effective, but to some eyes inefficient in their approach to modeling, which can’t make assumptions about functional dependencies between output and input.

That said, gradient descent is not recombining every weight with every other to find the best match – its method of pathfinding shrinks the relevant weight space, and therefore the number of updates and required computation, by many orders of magnitude. Moreover, algorithms such as Hinton’s capsule networks require far fewer instances of data to converge on an accurate model; that is, present research has the potential to resolve the brute force nature of deep learning.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 28/07/2020 05:52:53
This Is What Tesla's Autopilot Sees On The Road.

Essentially, it builds a virtual environment in its computer based on input data from visual cameras and radar. With more Autopilot cars on the road, much of the data being processed becomes redundant. Sharing that data could be the next step to increase the efficiency of the whole system. It will require an agreed protocol, data structure, and algorithm to interpret the data properly. This brings us one step closer to a virtual universe.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 05/08/2020 11:10:14
We increasingly rely on artificial intelligence to make decisions. But we must be aware of the risks it poses, like those described in the article below.
https://thegradient.pub/shortcuts-neural-networks-love-to-cheat/
Quote
Shortcuts are decision rules that perform well on standard benchmarks but fail to transfer to more challenging testing conditions. Shortcut opportunities come in many flavors and are ubiquitous across datasets and application domains. A few examples are visualized here:
(https://thegradient.pub/content/images/2020/07/image5-5.png)
At a principal level, shortcut learning is not a novel phenomenon: variants are known under different terms such as “learning under covariate shift”, “anti-causal learning”, “dataset bias”, the “tank legend” and the “Clever Hans effect”. We here discuss how shortcut learning unifies many of deep learning’s problems and what we can do to better understand and mitigate shortcut learning.

What is a shortcut?

In machine learning, the solutions that a model can learn are constrained by data, model architecture, optimizer and objective function. However, these constraints often don’t just allow for one single solution: there are typically many different ways to solve a problem. Shortcuts are solutions that perform well on a typical test set but fail under different circumstances, revealing a mismatch with our intentions.
Quote
Shortcut learning beyond deep learning

Often such failures serve as examples for why machine learning algorithms are untrustworthy. However, biological learners suffer from very similar failure modes as well. In an experiment in a lab at the University of Oxford, researchers observed that rats learned to navigate a complex maze apparently based on subtle colour differences - very surprising given that the rat retina has only rudimentary machinery to support at best somewhat crude colour vision. Intensive investigation into this curious finding revealed that the rats had tricked the researchers: They did not use their visual system at all in the experiment and instead simply discriminated the colours by the odour of the colour paint used on the walls of the maze. Once smell was controlled for, the remarkable colour discrimination ability disappeared.

Animals often trick experimenters by solving an experimental paradigm (i.e., dataset) in an unintended way without using the underlying ability one is actually interested in. This highlights how incredibly difficult it can be for humans to imagine solving a tough challenge in any other way than the human way: Surely, at Marr’s implementational level there may be differences between rat and human colour discrimination. But at the algorithmic level there is often a tacit assumption that human-like performance implies human-like strategy (or algorithm). This “same strategy assumption” is paralleled by deep learning: even if DNN units are different from biological neurons, if DNNs successfully recognise objects it seems natural to assume that they are using object shape like humans do. As a consequence, we need to distinguish between performance on a dataset and acquiring an ability, and exercise great care before attributing high-level abilities like “object recognition” or “language understanding” to machines, since there is often a much simpler explanation:

Never attribute to high-level abilities that which can be adequately explained by shortcut learning.
Quote
The consequences of this behaviour are striking failures in generalization. Have a look at the figure below. On the left side there are a few directions in which humans would expect a model to generalize. A five is a five whether it is hand-drawn and black and white or a house number photographed in color. Similarly slight distortions or changes in pose, texture or background don’t influence our prediction about the main object in the image. In contrast a DNN can easily be fooled by all of them. Interestingly this does not mean that DNNs can’t generalize at all: In fact, they generalize perfectly well albeit in directions that hardly make sense to humans. The right side of the figure below shows some examples that range from the somewhat comprehensible - scrambling the image to keep only its texture - to the completely incomprehensible.
(https://thegradient.pub/content/images/2020/07/image1.png)

Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 25/08/2020 10:42:06
This patent by Tesla is a clue that, in the future, the virtual universe will be built mostly autonomously by AI.
https://www.tesmanian.com/blogs/tesmanian-blog/tesla-published-a-patent-generating-ground-truth-for-machine-learning-from-time-series-elements
Quote
Deep learning systems used for applications such as autonomous driving are developed by training a machine learning model. Typically, the performance of the deep learning system is limited at least in part by the quality of the training set used to train the model.

In many instances, significant resources are invested in collecting, curating, and annotating the training data. Traditionally, much of the effort to curate a training data set is done manually by reviewing potential training data and properly labeling the features associated with the data.

The effort required to create a training set with accurate labels can be significant and is often tedious. Moreover, it is often difficult to collect and accurately label data that a machine learning model needs improvement on. Therefore, there exists a need to improve the process for generating training data with accurate labeled features.

Tesla published patent 'Generating ground truth for machine learning from time series elements'

Patent filing date: February 1, 2019
Patent Publication Date: August 6, 2020

(https://cdn.shopify.com/s/files/1/0173/8204/7844/files/1_660faf20-c36a-4f67-8e63-11c5d4078119_1024x1024.jpg?v=1596750740)

The patent disclosed a machine learning training technique for generating highly accurate machine learning results. Using data captured by sensors on a vehicle a training data set is created. The sensor data may capture vehicle lane lines, vehicle lanes, other vehicle traffic, obstacles, traffic control signs, etc.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 25/08/2020 10:59:59
Here is a research article about the "information catastrophe."
https://aip.scitation.org/doi/10.1063/5.0019941
Quote
Currently, we produce ∼10^21 digital bits of information annually on Earth. Assuming a 20% annual growth rate, we estimate that after ∼350 years from now, the number of bits produced will exceed the number of all atoms on Earth, ∼10^50. After ∼300 years, the power required to sustain this digital production will exceed 18.5 × 10^15 W, i.e., the total planetary power consumption today, and after ∼500 years from now, the digital content will account for more than half Earth’s mass, according to the mass-energy–information equivalence principle. Besides the existing global challenges such as climate, environment, population, food, health, energy, and security, our estimates point to another singular event for our planet, called information catastrophe.

(https://aip.scitation.org/na101/home/literatum/publisher/aip/journals/content/adv/2020/adv.2020.10.issue-8/5.0019941/20200810/images/small/5.0019941.figures.online.f3.gif)
Quote
In conclusion, we established that the incredible growth of digital information production would reach a singularity point when there are more digital bits created than atoms on the planet. At the same time, the digital information production alone will consume most of the planetary power capacity, leading to ethical and environmental concerns already recognized by Floridi who introduced the concept of “infosphere” and considered challenges posed by our digital information society.27 These issues are valid, regardless of the future developments in data storage technologies. In terms of digital data, the mass–energy–information equivalence principle formulated in 2019 has not yet been verified experimentally, but assuming this is correct, then in not the very distant future, most of the planet’s mass will be made up of bits of information. Applying the law of conservation in conjunction with the mass–energy–information equivalence principle, it means that the mass of the planet is unchanged over time. However, our technological progress inverts radically the distribution of the Earth’s matter from predominantly ordinary matter to the fifth form of digital information matter. In this context, assuming the planetary power limitations are solved, one could envisage a future world mostly computer simulated and dominated by digital bits and computer code.
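The ~350-year figure follows from straightforward exponential growth; a quick check of the paper's arithmetic:

Code:
import math

# Annual bit production N(t) = 1e21 * 1.2**t. Find t where it reaches
# the number of atoms on Earth, ~1e50.
t = math.log(1e50 / 1e21) / math.log(1.2)
print(f"{t:.0f} years")   # ~366 years, consistent with the paper's ~350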
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 27/08/2020 10:48:14
In this article we can see that data compression and decompression play a central role in learning and modelling, no matter whether they're done by machines or by biological entities.
https://www.zdnet.com/google-amp/article/what-is-gpt-3-everything-business-needs-to-know-about-openais-breakthrough-ai-language-program/
Quote
When the neural network is being developed, called the training phase, GPT-3 is fed millions and millions of samples of text and it converts words into what are called vectors, numeric representations. That is a form of data compression. The program then tries to unpack this compressed text back into a valid sentence. The task of compressing and decompressing develops the program's accuracy in calculating the conditional probability of words.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 14/09/2020 08:49:38
https://www.businessinsider.com/developer-sharif-shameem-openai-gpt-3-debuild-2020-9
Quote
In July, Debuild cofounder and CEO Sharif Shameem tweeted about a project he created that allowed him to build a website simply by describing its design. 

In the text box, he typed, "the google logo, a search box, and 2 lightgrey buttons that say 'Search Google' and 'I'm Feeling Lucky'." The program then generated a virtual copy of the Google homepage.


This program uses GPT-3, a "natural language generation" tool from research lab OpenAI, which was cofounded by Elon Musk. GPT-3 was trained on massive swathes of data and can spit out results that mimic human writing. Developers have used it for creative writing, designing websites, writing business memos, and more. Now, Shameem is using GPT-3 for Debuild, a no-code tool for building web apps just by describing what they look like and how they work.

With this program, the user just needs to type in and describe what the application will look like and how it will work, and the tool will create a website based on those descriptions.

https://syncedreview.com/2020/09/10/openai-gpt-f-delivers-sota-performance-in-automated-mathematical-theorem-proving/
Quote
San Francisco-based AI research laboratory OpenAI has added another member to its popular GPT (Generative Pre-trained Transformer) family. In a new paper, OpenAI researchers introduce GPT-f, an automated prover and proof assistant for the Metamath formalization language.

While artificial neural networks have made considerable advances in computer vision, natural language processing, robotics and so on, OpenAI believes they also have potential in the relatively underexplored area of reasoning tasks. The new research explores this potential by applying a transformer language model to automated theorem proving.

It seems that in the future we will become less dependent on biological computational resources (i.e., brains).
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 05/10/2020 02:49:58
With a virtual universe, we will get fewer surprises, so we can make better plans to achieve our goals. It can help improve our survival chances, which is a prerequisite (i.e., an instrumental goal) for achieving the universal terminal goal.
Here is some of the latest progress made toward that goal.
https://scitechdaily.com/esas-%CF%86-week-digital-twin-earth-quantum-computing-and-ai-take-center-stage/
Quote
The third edition of the Φ-week event, which is entirely virtual, focuses on how Earth observation can contribute to the concept of Digital Twin Earth – a dynamic, digital replica of our planet which accurately mimics Earth’s behavior. Constantly fed with Earth observation data, combined with in situ measurements and artificial intelligence, the Digital Twin Earth provides an accurate representation of the past, present, and future changes of our world.

Digital Twin Earth will help visualize, monitor, and forecast natural and human activity on the planet. The model will be able to monitor the health of the planet, perform simulations of Earth’s interconnected system with human behavior, and support the field of sustainable development, therefore, reinforcing Europe’s efforts for a better environment in order to respond to the urgent challenges and targets addressed by the Green Deal.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 12/10/2020 04:21:10
As I mentioned earlier in another thread, cost saving is a universal instrumental goal. It also applies in AI research.
https://syncedreview.com/2020/10/02/google-cambridge-deepmind-alan-turing-institutes-performer-transformer-slashes-compute-costs/
Quote
It’s no coincidence that Transformer neural network architecture is gaining popularity across so many machine learning research fields. Best known for natural language processing (NLP) tasks, Transformers not only enabled OpenAI’s 175 billion parameter language model GPT-3 to deliver SOTA performance, the power- and potential-packed architecture also helped DeepMind’s AlphaStar bot defeat professional StarCraft players. Researchers have now introduced a way to make Transformers more compute-efficient, scalable and accessible.

While previous learning approaches such as RNNs suffered from vanishing gradient problems, Transformers’ game-changing self-attention mechanism eliminated such issues. As explained in the paper introducing Transformers — Attention Is All You Need, the novel architecture is based on a trainable attention mechanism that identifies complex dependencies between input sequence elements.

Transformers however scale quadratically when the number of tokens in an input sequence increases, making their use prohibitively expensive for large numbers of tokens. Even when fed with moderate token inputs, Transformers’ gluttonous appetite for computational resources can be difficult for many researchers to satisfy.

A team from Google, University of Cambridge, DeepMind, and Alan Turing Institute have proposed a new type of Transformer dubbed Performer, based on a Fast Attention Via positive Orthogonal Random features (FAVOR+) backbone mechanism. The team designed Performer to be “capable of provably accurate and practical estimation of regular (softmax) full rank attention, but of only linear space and time complexity and not relying on any priors such as sparsity or low-rankness.”
Title: Re: How close are we from building a virtual universe?
Post by: mikahawkins on 12/10/2020 05:55:08
Are we trying to visualize something with lifeforms or without lifeforms? I believe we can start off one step at a time, first getting the solar system together, then the galaxies, and so on.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 12/10/2020 07:29:44
Are we trying to visualize something with lifeforms or without lifeforms? I believe we can start off one step at a time, first getting the solar system together, then the galaxies, and so on.
It's a universal inevitability that, in order to achieve the universal terminal goal, a conscious system will have to build some kind of virtual universe as close as possible to the real/objective universe, where closeness can be described in terms of accuracy and precision. Due to limited resources, and following the Pareto principle, we must spend more resources on the things that have more impact on achieving the universal terminal goal. That's why Google Maps has higher resolution for areas of high interest, such as big cities, than for deserts or oceans.
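A toy sketch of that allocation principle, with made-up numbers: give each region a share of a fixed storage budget proportional to its estimated impact.

Code:
# Allocate a fixed storage budget across regions in proportion to their
# estimated impact, Pareto-style. Purely illustrative numbers.
impact = {"big city": 80, "farmland": 15, "open ocean": 5}   # relative impact
budget_tb = 1000                                             # total storage, TB

total = sum(impact.values())
for region, weight in impact.items():
    share = budget_tb * weight / total
    print(f"{region}: {share:.0f} TB of map detail")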
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 12/10/2020 15:03:57
Speaking of lifeforms, how do you define one? Would you call Henrietta Lacks's tumor cells alive? What about a coronavirus? A prion? Alexa?
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 16/10/2020 12:42:09
A few decades ago, most process equipment was dumb. It needed periodic maintenance performed by humans to diagnose its functional condition and find abnormalities. So basically its condition was uncertain until it broke down or maintenance personnel checked and tested it. Control loops had to be periodically fine-tuned to keep them at their best performance, due to physical changes in the field instrumentation and in the process itself.
Now a lot of equipment is getting smart. Smart transmitters and positioners are widely used, along with smart variable speed drives and other equipment controllers. They have self-diagnostic features that tell technicians whether or not they are in good condition and point out abnormalities, so problems can be fixed sooner. This diagnostic data can be continuously monitored from a remote location. Such smart equipment can be considered to have some form of self-awareness.
In a SCADA system, a bot can be deployed to continuously monitor the functionality of each control loop, and thousands of them can run on the same server. This forces us to review the traditional concept of individuality, especially regarding conscious agents.
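A minimal sketch of such a monitoring bot in Python; the tags, thresholds, and health rule are invented for illustration:

Code:
# Minimal loop-health check: flag a control loop whose process variable
# drifts too far from its setpoint for too many consecutive samples.
def check_loop(samples, setpoint, tolerance=0.05, max_bad=10):
    """samples: recent PV readings; returns True if the loop looks healthy."""
    bad = 0
    for pv in samples:
        if abs(pv - setpoint) > tolerance * abs(setpoint):
            bad += 1
            if bad >= max_bad:
                return False    # sustained deviation: needs attention
        else:
            bad = 0
    return True

pv_history = [50.1, 50.3, 49.8, 56.0, 57.2, 58.0, 55.9, 57.5] + [57.0] * 10
print(check_loop(pv_history, setpoint=50.0))   # False: flag for maintenance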
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 16/10/2020 12:51:35
Here is an interesting article covering AGI.
https://www.technologyreview.com/2020/10/15/1010461/artificial-general-intelligence-robots-ai-agi-deepmind-google-openai/

Quote
  The tricky part comes next: yoking multiple abilities together. Deep learning is the most general approach we have, in that one deep-learning algorithm can be used to learn more than one task. AlphaZero used the same algorithm to learn Go, shogi (a chess-like game from Japan), and chess. DeepMind’s Atari57 system used the same algorithm to master every Atari video game. But the AIs can still learn only one thing at a time. Having mastered chess, AlphaZero has to wipe its memory and learn shogi from scratch.

Legg refers to this type of generality as “one-algorithm,” versus the “one-brain” generality humans have. One-algorithm generality is very useful but not as interesting as the one-brain kind, he says: “You and I don’t need to switch brains; we don’t put our chess brains in to play a game of chess.” 

Here are the steps toward the development of AGI.
Quote
   Roughly in order of maturity, they are:
Unsupervised or self-supervised learning. Labeling data sets (e.g., tagging all pictures of cats with “cat”) to tell AIs what they’re looking at during training is the key to what’s known as supervised learning. It’s still largely done by hand and is a major bottleneck. AI needs to be able to teach itself without human guidance—e.g., looking at pictures of cats and dogs and learning to tell them apart without help, or spotting anomalies in financial transactions without having previous examples flagged by a human. This, known as unsupervised learning, is now becoming more common.

Transfer learning, including few-shot learning. Most deep-learning models today can be trained to do only one thing at a time. Transfer learning aims to let AIs transfer some parts of their training for one task, such as playing chess, to another, such as playing Go. This is how humans learn.

Common sense and causal inference. It would be easier to transfer training between tasks if an AI had a bedrock of common sense to start from. And a key part of common sense is understanding cause and effect. Giving common sense to AIs is a hot research topic at the moment, with approaches ranging from encoding simple rules into a neural network to constraining the possible predictions that an AI can make. But work is still in its early stages.

Learning optimizers. These are tools that can be used to shape the way AIs learn, guiding them to train more efficiently. Recent work shows that these tools can be trained themselves—in effect, meaning one AI is used to train others. This could be a tiny step toward self-improving AI, an AGI goal.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 20/10/2020 06:40:12
I found a very interesting video about the neural network revolution that I'd like to share here.
Quote
Geoffrey Hinton is an Engineering Fellow at Google where he manages the Brain Team Toronto, which is a new part of the Google Brain Team and is located at Google's Toronto office at 111 Richmond Street. Brain Team Toronto does basic research on ways to improve neural network learning techniques. He is also the Chief Scientific Adviser of the new Vector Institute and an Emeritus Professor at the University of Toronto.

Recorded: December 4th, 2017
I see this neural network revolution as a continuation of the neural network evolution that has been happening for hundreds of millions of years and produced the brains which kickstarted the revolution.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 02/11/2020 04:11:48
The risk of using GPT irresponsibly is self-confirmation bias, which may keep us from getting optimal results.

https://twitter.com/karpathy/status/1284660899198820352?ref_src=twsrc%5Etfw%7Ctwcamp%5Etweetembed%7Ctwterm%5E1284667692381872128%7Ctwgr%5Eshare_3%2Ccontainerclick_1&ref_url=https%3A%2F%2Fwww.technologyreview.com%2F2020%2F07%2F20%2F1005454%2Fopenai-machine-learning-language-generator-gpt-3-nlp%2F

Quote
Andrej Karpathy (@karpathy), Jul 19:
By posting GPT generated text we’re polluting the data for its future versions
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 12/11/2020 04:23:51
The dog's behavior is not entirely surprising either, especially if you have some future version of Neuralink implanted in its head, or you are a veterinarian.

Here is the dictionary definition of intelligence.
Quote
  the ability to acquire and apply knowledge and skills.
Usually it represents problem-solving or information-processing capability, but it doesn't take into account the ability to manipulate the environment, nor self-awareness.
AlphaGo is considered intelligent since it solved the problem of playing Go better than the human champion. AlphaGo Zero is even more intelligent, since it beat AlphaGo 100:0,
even though neither has the ability to physically move a Go piece.
On the other hand, consciousness takes more factors into account. For example, if you were paralyzed and couldn't move your arms and legs, you would be considered less conscious than in your normal state, even though you could still think clearly.
Traditionally, an agent is considered intelligent if it can solve problems, especially when it does so better than expected. A dog who can fetch you the newspaper is considered intelligent.

https://en.wikipedia.org/wiki/Artificial_intelligence
Quote
Artificial intelligence (AI), is intelligence demonstrated by machines, unlike the natural intelligence displayed by humans and animals. Leading AI textbooks define the field as the study of "intelligent agents": any device that perceives its environment and takes actions that maximize its chance of successfully achieving its goals.[3] Colloquially, the term "artificial intelligence" is often used to describe machines (or computers) that mimic "cognitive" functions that humans associate with the human mind, such as "learning" and "problem solving".[4]

As machines become increasingly capable, tasks considered to require "intelligence" are often removed from the definition of AI, a phenomenon known as the AI effect.[5] A quip in Tesler's Theorem says "AI is whatever hasn't been done yet."[6] For instance, optical character recognition is frequently excluded from things considered to be AI,[7] having become a routine technology.[8] Modern machine capabilities generally classified as AI include successfully understanding human speech,[9] competing at the highest level in strategic game systems (such as chess and Go),[10] autonomously operating cars, intelligent routing in content delivery networks, and military simulations.[11]
Quote
Computer science defines AI research as the study of "intelligent agents": any device that perceives its environment and takes actions that maximize its chance of successfully achieving its goals.[3] A more elaborate definition characterizes AI as "a system's ability to correctly interpret external data, to learn from such data, and to use those learnings to achieve specific goals and tasks through flexible adaptation."[70]

A typical AI analyzes its environment and takes actions that maximize its chance of success.[3] An AI's intended utility function (or goal) can be simple ("1 if the AI wins a game of Go, 0 otherwise") or complex ("Perform actions mathematically similar to ones that succeeded in the past"). Goals can be explicitly defined or induced. If the AI is programmed for "reinforcement learning", goals can be implicitly induced by rewarding some types of behavior or punishing others.[a] Alternatively, an evolutionary system can induce goals by using a "fitness function" to mutate and preferentially replicate high-scoring AI systems, similar to how animals evolved to innately desire certain goals such as finding food.[71] Some AI systems, such as nearest-neighbor, instead reason by analogy; these systems are not generally given goals, except to the degree that goals are implicit in their training data.[72] Such systems can still be benchmarked if the non-goal system is framed as a system whose "goal" is to successfully accomplish its narrow classification task.[73]

https://en.wikipedia.org/wiki/AI_effect
Quote
The AI effect occurs when onlookers discount the behavior of an artificial intelligence program by arguing that it is not real intelligence.[1]

Author Pamela McCorduck writes: "It's part of the history of the field of artificial intelligence that every time somebody figured out how to make a computer do something—play good checkers, solve simple but relatively informal problems—there was a chorus of critics to say, 'that's not thinking'."[2] AIS researcher Rodney Brooks complains: "Every time we figure out a piece of it, it stops being magical; we say, 'Oh, that's just a computation.'"[3]
Quote
"The AI effect" tries to redefine AI to mean: AI is anything that has not been done yet
A view taken by some people trying to promulgate the AI effect is: As soon as AI successfully solves a problem, the problem is no longer a part of AI.

Pamela McCorduck calls it an "odd paradox" that "practical AI successes, computational programs that actually achieved intelligent behavior, were soon assimilated into whatever application domain they were found to be useful in, and became silent partners alongside other problem-solving approaches, which left AI researchers to deal only with the "failures", the tough nuts that couldn't yet be cracked."[4]

When IBM's chess playing computer Deep Blue succeeded in defeating Garry Kasparov in 1997, people complained that it had only used "brute force methods" and it wasn't real intelligence.[5] Fred Reed writes:

"A problem that proponents of AI regularly face is this: When we know how a machine does something 'intelligent,' it ceases to be regarded as intelligent. If I beat the world's chess champion, I'd be regarded as highly bright."[6]

Douglas Hofstadter expresses the AI effect concisely by quoting Larry Tesler's Theorem:

"AI is whatever hasn't been done yet."[7]

When problems have not yet been formalised, they can still be characterised by a model of computation that includes human computation. The computational burden of a problem is split between a computer and a human: one part is solved by computer and the other part solved by a human. This formalisation is referred to as human-assisted Turing machine.[8]

AI applications become mainstream
Software and algorithms developed by AI researchers are now integrated into many applications throughout the world, without really being called AI.

Michael Swaine reports "AI advances are not trumpeted as artificial intelligence so much these days, but are often seen as advances in some other field". "AI has become more important as it has become less conspicuous", Patrick Winston says. "These days, it is hard to find a big system that does not work, in part, because of ideas developed or matured in the AI world."[9]

According to Stottler Henke, "The great practical benefits of AI applications and even the existence of AI in many software products go largely unnoticed by many despite the already widespread use of AI techniques in software. This is the AI effect. Many marketing people don't use the term 'artificial intelligence' even when their company's products rely on some AI techniques. Why not?"[10]

Marvin Minsky writes "This paradox resulted from the fact that whenever an AI research project made a useful new discovery, that product usually quickly spun off to form a new scientific or commercial specialty with its own distinctive name. These changes in name led outsiders to ask, Why do we see so little progress in the central field of artificial intelligence?"[11]

Nick Bostrom observes that "A lot of cutting edge AI has filtered into general applications, often without being called AI because once something becomes useful enough and common enough it's not labelled AI anymore."[12]
Quote
Saving a place for humanity at the top of the chain of being
Michael Kearns suggests that "people subconsciously are trying to preserve for themselves some special role in the universe".[14] By discounting artificial intelligence people can continue to feel unique and special. Kearns argues that the change in perception known as the AI effect can be traced to the mystery being removed from the system: being able to trace the cause of events implies that it is a form of automation rather than intelligence.

A related effect has been noted in the history of animal cognition and in consciousness studies, where every time a capacity formerly thought to be uniquely human is discovered in animals (e.g. the ability to make tools, or to pass the mirror test), the overall importance of that capacity is deprecated.[citation needed]

Herbert A. Simon, when asked about the lack of AI's press coverage at the time, said, "What made AI different was that the very idea of it arouses a real fear and hostility in some human breasts. So you are getting very strong emotional reactions. But that's okay. We'll live with that."[15]




I'd like to delve technically deeper into the problem in this thread.
Quote
Artificial intelligence (AI) is intelligence demonstrated by machines, unlike the natural intelligence displayed by humans and animals. Leading AI textbooks define the field as the study of "intelligent agents": any device that perceives its environment and takes actions that maximize its chance of successfully achieving its goals.[3] Colloquially, the term "artificial intelligence" is often used to describe machines (or computers) that mimic "cognitive" functions that humans associate with the human mind, such as "learning" and "problem solving".[4]

As machines become increasingly capable, tasks considered to require "intelligence" are often removed from the definition of AI, a phenomenon known as the AI effect.[5] A quip in Tesler's Theorem says "AI is whatever hasn't been done yet."[6] For instance, optical character recognition is frequently excluded from things considered to be AI,[7] having become a routine technology.[8] Modern machine capabilities generally classified as AI include successfully understanding human speech,[9] competing at the highest level in strategic game systems (such as chess and Go),[10] autonomously operating cars, intelligent routing in content delivery networks, and military simulations.[11]

Artificial intelligence was founded as an academic discipline in 1955, and in the years since has experienced several waves of optimism,[12][13] followed by disappointment and the loss of funding (known as an "AI winter"),[14][15] followed by new approaches, success and renewed funding.[13][16] After AlphaGo successfully defeated a professional Go player in 2015, artificial intelligence once again attracted widespread global attention.[17] For most of its history, AI research has been divided into sub-fields that often fail to communicate with each other.[18] These sub-fields are based on technical considerations, such as particular goals (e.g. "robotics" or "machine learning"),[19] the use of particular tools ("logic" or artificial neural networks), or deep philosophical differences.[22][23][24] Sub-fields have also been based on social factors (particular institutions or the work of particular researchers).[18]

The traditional problems (or goals) of AI research include reasoning, knowledge representation, planning, learning, natural language processing, perception and the ability to move and manipulate objects.[19] General intelligence is among the field's long-term goals.[25] Approaches include statistical methods, computational intelligence, and traditional symbolic AI. Many tools are used in AI, including versions of search and mathematical optimization, artificial neural networks, and methods based on statistics, probability and economics. The AI field draws upon computer science, information engineering, mathematics, psychology, linguistics, philosophy, and many other fields.

The field was founded on the assumption that human intelligence "can be so precisely described that a machine can be made to simulate it".[26] This raises philosophical arguments about the mind and the ethics of creating artificial beings endowed with human-like intelligence. These issues have been explored by myth, fiction and philosophy since antiquity.[31] Some people also consider AI to be a danger to humanity if it progresses unabated.[32][33] Others believe that AI, unlike previous technological revolutions, will create a risk of mass unemployment.[34]

In the twenty-first century, AI techniques have experienced a resurgence following concurrent advances in computer power, large amounts of data, and theoretical understanding; and AI techniques have become an essential part of the technology industry, helping to solve many challenging problems in computer science, software engineering and operations research.[35][16]
Quote
Computer science defines AI research as the study of "intelligent agents": any device that perceives its environment and takes actions that maximize its chance of successfully achieving its goals.[3] A more elaborate definition characterizes AI as "a system's ability to correctly interpret external data, to learn from such data, and to use those learnings to achieve specific goals and tasks through flexible adaptation."[71]
Quote
Many AI algorithms are capable of learning from data; they can enhance themselves by learning new heuristics (strategies, or "rules of thumb", that have worked well in the past), or can themselves write other algorithms. Some of the "learners" described below, including Bayesian networks, decision trees, and nearest-neighbor, could theoretically (given infinite data, time, and memory) learn to approximate any function, including which combination of mathematical functions would best describe the world.[citation needed] These learners could therefore derive all possible knowledge, by considering every possible hypothesis and matching it against the data. In practice, it is seldom possible to consider every possibility, because of the phenomenon of "combinatorial explosion", where the time needed to solve a problem grows exponentially. Much of AI research involves figuring out how to identify and avoid considering a broad range of possibilities unlikely to be beneficial.
https://en.wikipedia.org/wiki/Artificial_intelligence
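As a toy illustration of the combinatorial explosion mentioned in the quote: the number of candidate hypotheses grows factorially with problem size, which is why no learner can literally consider every possibility.
Code:
import math

# Orderings of n items: already astronomical for modest n.
for n in (10, 20, 30):
    print(n, math.factorial(n))   # 3628800, ~2.4e18, ~2.7e32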

Intelligent agents are expected to be able to learn from raw data. This means they need tools to pre-process that raw data, to filter out noise or flukes and extract useful information. When those agents interact with one another, especially when they must compete for finite resources, the ability to filter out misinformation becomes even more important. It requires an algorithm to determine whether a given data input is believable or not. At this point, artificial intelligence starts getting closer to natural intelligence: it exhibits a feature similar to the critical thinking of conscious beings.
Descartes pointed out that the only self-evident information a conscious agent can get is its own existence. Any other information requires corroborating evidence to support it. So in the end, the reliability of a piece of information will be measured by its ability to help preserve conscious agents.
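One simple way to cash out "is this data input believable?" is a Bayesian update, where corroborating reports gradually raise the estimated reliability of a claim. This is only a sketch with made-up probabilities, not a full misinformation filter.
Code:
def update_belief(prior, p_obs_if_true, p_obs_if_false):
    """Bayes' rule: posterior probability that a claim is true,
    given one piece of (possibly unreliable) evidence."""
    numerator = p_obs_if_true * prior
    return numerator / (numerator + p_obs_if_false * (1 - prior))

belief = 0.5                       # agnostic prior about the claim
for _ in range(3):                 # three independent corroborating reports
    belief = update_belief(belief, p_obs_if_true=0.8, p_obs_if_false=0.3)
print(round(belief, 3))            # ~0.95: corroboration raises reliability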


Quote
In machine learning, a deep belief network (DBN) is a generative graphical model, or alternatively a class of deep neural network, composed of multiple layers of latent variables ("hidden units"), with connections between the layers but not between units within each layer.[1]

When trained on a set of examples without supervision, a DBN can learn to probabilistically reconstruct its inputs. The layers then act as feature detectors.[1] After this learning step, a DBN can be further trained with supervision to perform classification.[2]

DBNs can be viewed as a composition of simple, unsupervised networks such as restricted Boltzmann machines (RBMs)[1] or autoencoders,[3] where each sub-network's hidden layer serves as the visible layer for the next. An RBM is an undirected, generative energy-based model with a "visible" input layer and a hidden layer and connections between but not within layers. This composition leads to a fast, layer-by-layer unsupervised training procedure, where contrastive divergence is applied to each sub-network in turn, starting from the "lowest" pair of layers (the lowest visible layer is a training set).

The observation[2] that DBNs can be trained greedily, one layer at a time, led to one of the first effective deep learning algorithms.[4]:6 Overall, there are many attractive implementations and uses of DBNs in real-life applications and scenarios (e.g., electroencephalography,[5] drug discovery[6][7][8]).
https://en.wikipedia.org/wiki/Deep_belief_network
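To make the layer-by-layer training idea concrete, here is a minimal numpy sketch of one contrastive-divergence (CD-1) step for a single RBM, the building block the quote describes. Biases are omitted and the data is random, so treat it as an illustration of the update rule rather than a usable implementation; greedily stacking such layers, each trained on the previous layer's hidden activities, is what yields a DBN.
Code:
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# One RBM layer: visible <-> hidden weights only, no intra-layer links.
# Bias terms are omitted for brevity.
n_visible, n_hidden, lr = 6, 3, 0.1
W = rng.normal(0.0, 0.1, size=(n_visible, n_hidden))

def cd1_step(v0):
    """One contrastive-divergence (CD-1) update on a batch of visible vectors."""
    h0_prob = sigmoid(v0 @ W)                          # upward pass
    h0 = (rng.random(h0_prob.shape) < h0_prob) * 1.0   # sample hidden units
    v1_prob = sigmoid(h0 @ W.T)                        # downward reconstruction
    h1_prob = sigmoid(v1_prob @ W)                     # second upward pass
    # data correlations minus reconstruction correlations
    return lr * (v0.T @ h0_prob - v1_prob.T @ h1_prob) / len(v0)

data = (rng.random((20, n_visible)) < 0.5) * 1.0       # toy binary "training set"
for _ in range(100):
    W += cd1_step(data)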
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 12/11/2020 04:40:29
The video shows what the future will look like. It's a step closer toward building a virtual universe.
Title: Re: How close are we from building a virtual universe?
Post by: Xeon on 12/11/2020 10:23:13
This thread is another spinoff from my earlier thread called universal utopia. This time I try to attack the problem from another angle, which is information theory point of view.
I have started another thread related to this subject  asking about quantification of accuracy and precision. It is necessary for us to be able to make comparison among available methods to describe some aspect of objective reality, and choose the best option based on cost and benefit consideration. I thought it was already a common knowledge, but the course of discussion shows it wasn't the case. I guess I'll have to build a new theory for that. It's unfortunate that the thread has been removed, so new forum members can't explore how the discussion developed.
In my professional job, I have to deal with process control and automation, engineering and maintenance of electrical and instrumentation systems. It's important for us to explore the leading technologies and use them for our advantage to survive in the fierce industrial competition during this industrial revolution 4.0. One of the technology which is closely related to this thread is digital twin.


Just like my other spinoff discussing about universal morality, which can be reached by expanding the groups who develop their own subjective morality to the maximum extent permitted by logic, here I also try to expand the scope of the virtualization of real world objects like digital twin in industrial sector to cover other fields as well. Hopefully it will lead us to global governance, because all conscious beings known today share the same planet. In the future the scope needs to expand even further because the exploration of other planets and solar systems is already on the way.
What law says that our memories are stored in our minds! How do we know that we are not just accessing a mainframe server, and that we are no more than confused bots?
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 13/11/2020 09:28:18
What law says that our memories are stored in our minds! How do we know that we are not just accessing a mainframe server, and that we are no more than confused bots?

There is no such law, AFAIK. But here is what we know.
Descartes pointed out that the only self-evident information a conscious agent can get is its own existence. Any other information requires corroborating evidence to support it. So in the end, the reliability of a piece of information will be measured by its ability to help preserve conscious agents.
If two or more hypotheses are equally capable of explaining observations, Occam's razor suggests we choose the simplest one. I've asserted in another thread that efficiency is a universal instrumental goal.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 21/11/2020 06:26:10
Brains contain a compressed, partial version of a virtual universe in the form of neurons and neural connection states. Object counting is part of extracting information from the raw data coming in through sensory organs. This video explains how brains count.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 23/11/2020 11:19:34
Here is a recent progress toward building a virtual universe.
https://singularityhub.com/2020/11/22/the-trillion-transistor-chip-that-just-left-a-supercomputer-in-the-dust/
Quote
The trial was described in a preprint paper written by a team led by Cerebras’s Michael James and NETL’s Dirk Van Essendelft and presented at the supercomputing conference SC20 this week. The team said the CS-1 completed a simulation of combustion in a power plant roughly 200 times faster than it took the Joule 2.0 supercomputer to do a similar task.

The CS-1 was actually faster than real time. As Cerebras wrote in a blog post, “It can tell you what is going to happen in the future faster than the laws of physics produce the same result.”
Quote
Cut the Commute
Computer chips begin life on a big piece of silicon called a wafer. Multiple chips are etched onto the same wafer and then the wafer is cut into individual chips. While the WSE is also etched onto a silicon wafer, the wafer is left intact as a single, operating unit. This wafer-scale chip contains almost 400,000 processing cores. Each core is connected to its own dedicated memory and its four neighboring cores.

Putting that many cores on a single chip and giving them their own memory is why the WSE is bigger; it’s also why, in this case, it’s better.

Most large-scale computing tasks depend on massively parallel processing. Researchers distribute the task among hundreds or thousands of chips. The chips need to work in concert, so they’re in constant communication, shuttling information back and forth. A similar process takes place within each chip, as information moves between processor cores, which are doing the calculations, and shared memory to store the results.
Quote
Simulating the World as It Unfolds
It’s worth noting the chip can only handle problems small enough to fit on the wafer. But such problems may have quite practical applications because of the machine’s ability to do high-fidelity simulation in real-time. The authors note, for example, the machine should in theory be able to accurately simulate the air flow around a helicopter trying to land on a flight deck and semi-automate the process—something not possible with traditional chips.
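The reason neighbour-to-neighbour wiring helps in such simulations: fluid-type problems update every cell from its immediate neighbours. Here is a toy one-dimensional diffusion step showing that locality (a stand-in illustration, not Cerebras code). On a chip cluster, the neighbour values at partition edges would have to be fetched from other chips (a "halo exchange"); on the wafer they are one hop away.
Code:
import numpy as np

# Toy 1-D heat diffusion: each cell's next value depends only on its
# immediate neighbours, the same locality the WSE exploits by wiring
# every core directly to its neighbouring cores.
u = np.zeros(400)
u[200] = 100.0                # initial hot spot
alpha = 0.25                  # diffusion coefficient (stable for alpha <= 0.5)

for step in range(1000):
    left = np.roll(u, 1)      # neighbour values (periodic boundary, for simplicity)
    right = np.roll(u, -1)
    u = u + alpha * (left - 2 * u + right)   # nearest-neighbour update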
Title: Re: How close are we from building a virtual universe?
Post by: alancalverd on 23/11/2020 12:30:24
The problem throughout is that you are trying to define the solution without defining the problem. My artificial horizon is an adequate virtual universe if the problem is to keep the plane flying straight and level with no visual reference. The GPS moving map adds just enough data if I want to get somewhere, and the ILS gives me a virtual beeline to the runway threshold. Each of these solutions began with a clear statement of the problem.

The joy of full autopilot was demonstrated by a couple of 737 fatal incidents in recent memory. It's OK until it goes wrong and crashes you precisely on the runway centerline, unlike the human who is generally "good enough" to land somewhere (like the middle of the Hudson river) without breaking too much. I've just completed a paper exercise where the radio died in fog at night. The automatic answer is to follow a published instrument approach on enhanced GPS and autopilot, which will take you to your destination within +/- a couple of feet. Problem is that you don't know who else is on that track, so the more closely you follow it, the more likely you are to collide or cause panic. The human answer is to assume that everyone else is on track and avoid it by a mile laterally and 1000 ft vertically until the last possible moment.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 23/11/2020 15:43:50
The problem throughout is that you are trying to define the solution without defining the problem. My artificial horizon is an adequate virtual universe if the problem is to keep the plane flying straight and level with no visual reference. The GPS moving map adds just enough data if I want to get somewhere, and the ILS gives me a virtual beeline to the runway threshold. Each of these solutions began with a clear statement of the problem.



This thread is another spinoff from my earlier thread called universal utopia. This time I try to attack the problem from another angle, which is information theory point of view.
I've stated the problem in another thread, which is to reduce the risk of existential threat to conscious beings down to zero. Building an accurate and precise virtual universe is one method to achieve that goal by reducing uncertainty and helping to make decisions effectively and efficiently.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 24/11/2020 12:04:55
By building a virtual universe, we can do trial and error more efficiently. That's essentially what AlphaGo and AlphaZero have done to master their games. 

Quote
Learn from the mistakes of others, you can never live long enough to make them all yourself.
Groucho Marx   
With an accurate and precise virtual universe, a vaccine for a newly emerging virus could be developed in minutes instead of years.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 24/11/2020 15:05:38
AGI is the last invention humans ever need to make.
https://www.nytimes.com/2020/11/24/science/artificial-intelligence-ai-gpt3.html
Quote
   Meet GPT-3. It Has Learned To Code (and Blog and Argue).
The latest natural-language system generates tweets, pens poetry, summarizes emails, answers trivia questions, translates languages and even writes its own computer programs.
Quote
  This summer, an artificial intelligence lab in San Francisco called OpenAI unveiled a technology several months in the making. This new system, GPT-3, had spent those months learning the ins and outs of natural language by analyzing thousands of digital books, the length and breadth of Wikipedia, and nearly a trillion words posted to blogs, social media and the rest of the internet.

Mckay Wrigley, a 23-year-old computer programmer from Salt Lake City, was one of the few invited to tinker with the system, which uses everything it has learned from that vast sea of digital text to generate new language on its own. Mr. Wrigley wondered if it could imitate public figures — write like them, perhaps even chat like them. 
Quote
  One of his experiments involved a pop psychologist, Scott Barry Kaufman. The system took in Mr. Kaufman’s name and a topic for discussion: creativity. Then, when asked “How do we become more creative?” GPT-3 responded instantly:

I think creative expression is a natural byproduct of growing up in a diverse world. The more diverse the world is, the more you get exposed to different people, to different opportunities, to different places and to different challenges. And the more diverse that is, the more likely you’ll be to be able to put the dots together to form something new. And in many ways, I think if you want to be creative, you have to go for it. If you want to be a writer, you have to write, if you want to be a musician, you have to create music, if you want to be a comedian, you have to create comedy, and the more you create, the more likely it is that you’ll stumble onto some interesting stuff.

Later, when Mr. Wrigley posted the paragraph on Twitter, somebody looped in the real Scott Barry Kaufman. He was stunned. “It definitely sounds like something I would say,” the real Mr. Kaufman tweeted, later adding, “Crazy accurate A.I.” 
The rapid advancement of AI, with its exponentially growing nature, seems to hint that the singularity is near.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 25/11/2020 03:27:16
Objective reality contains a lot of objects with complex relationships among them. Hence, to build a virtual universe we must use a method capable of storing data that represents this complex system. The obvious choice is graphs, which are mathematical structures used to model pairwise relations between objects. A graph in this context is made up of vertices (also called nodes or points) connected by edges (also called links or lines).
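Here is a minimal sketch of that structure: a handful of labelled edges stored as subject-predicate-object triples (the node names are just illustrative), with simple lookups over them. This triple shape is the same one knowledge graphs use.
Code:
# A tiny labelled graph stored as subject-predicate-object triples.
triples = [
    ("Earth", "orbits",     "Sun"),
    ("Moon",  "orbits",     "Earth"),
    ("Earth", "instanceOf", "Planet"),
    ("Sun",   "instanceOf", "Star"),
]

def neighbours(node):
    """Edges touching a vertex, in either direction."""
    return [(s, p, o) for (s, p, o) in triples if node in (s, o)]

def query(predicate):
    """All pairs connected by a given relation."""
    return [(s, o) for (s, p, o) in triples if p == predicate]

print(query("orbits"))        # [('Earth', 'Sun'), ('Moon', 'Earth')]
print(neighbours("Earth"))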

The virtual universe I described previously is similar to the knowledge graphs described in the article below.

https://www.zdnet.com/article/rebooting-ai-deep-learning-meet-knowledge-graphs/
Quote
Rebooting AI: Deep learning, meet knowledge graphs
Gary Marcus, a prominent figure in AI, is on a mission to instill a breath of fresh air to a discipline he sees as in danger of stagnating. Knowledge graphs, the 20-year old hype, may have something to offer there.

"This is what we need to do. It's not popular right now, but this is why the stuff that is popular isn't working." That's a gross oversimplification of what scientist, best-selling author, and entrepreneur Gary Marcus has been saying for a number of years now, but at least it's one made by himself.

The "popular stuff which is not working" part refers to deep learning, and the "what we need to do" part refers to a more holistic approach to AI. Marcus is not short of ambition; he is set on nothing else but rebooting AI. He is not short of qualifications either. He has been working on figuring out the nature of intelligence, artificial or otherwise, more or less since his childhood.

Questioning deep learning may sound controversial, considering deep learning is seen as the most successful sub-domain in AI at the moment. Marcus on his part has been consistent in his critique. He has published work that highlights how deep learning fails, exemplified by language models such as GPT-2, Meena, and GPT-3.
Quote
Deep learning, meet knowledge graphs
When asked if he thinks knowledge graphs can have a role in the hybrid approach he advocates for, Marcus was positive. One way to think about it, he said, is that there is an enormous amount of knowledge that's represented on the Internet that's available essentially for free, and is not being leveraged by current AI systems. However, much of that knowledge is problematic:

"Most of the world's knowledge is imperfect in some way or another. But there's an enormous amount of knowledge that, say, a bright 10-year-old can just pick up for free, and we should have RDF be able to do that.

Some examples are, first of all, Wikipedia, which says so much about how the world works. And if you have the kind of brain that a human does, you can read it and learn a lot from it. If you're a deep learning system, you can't get anything out of that at all, or hardly anything.

Wikipedia is the stuff that's on the front of the house. On the back of the house are things like the semantic web that label web pages for other machines to use. There's all kinds of knowledge there, too. It's also being left on the floor by current approaches.

The kinds of computers that we are dreaming of that can help us to, for example, put together medical literature or develop new technologies are going to have to be able to read that stuff.

We're going to have to get to AI systems that can use the collective human knowledge that's expressed in language form and not just as a spreadsheet in order to really advance, in order to make the most sophisticated systems."
(https://zdnet3.cbsistatic.com/hub/i/2017/05/01/ce8926a1-9a41-42b6-9bd0-92df1b4171f6/deeplearningiconsr5png-jpg.png)
There is more to AI than Machine Learning, and there is more to Machine Learning than deep learning. Gary Marcus is arguing for a hybrid approach to AI, reconnecting it with its roots. Image: Nvidia
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 04/12/2020 22:44:55
Compared to my other threads discussing the universal terminal goal, this one seems to be underdeveloped. To complement my own thoughts, I'll just drop some of the latest important research in the field of artificial intelligence, like this one.
Inductive Biases for Deep Learning of Higher-Level Cognition
Anirudh Goyal, Yoshua Bengio
Quote
A fascinating hypothesis is that human and animal intelligence could be explained by a few principles (rather than an encyclopedic list of heuristics). If that hypothesis was correct, we could more easily both understand our own intelligence and build intelligent machines. Just like in physics, the principles themselves would not be sufficient to predict the behavior of complex systems like brains, and substantial computation might be needed to simulate human-like intelligence. This hypothesis would suggest that studying the kind of inductive biases that humans and animals exploit could help both clarify these principles and provide inspiration for AI research and neuroscience theories. Deep learning already exploits several key inductive biases, and this work considers a larger list, focusing on those which concern mostly higher-level and sequential conscious processing. The objective of clarifying these particular principles is that they could potentially help us build AI systems benefiting from humans' abilities in terms of flexible out-of-distribution and systematic generalization, which is currently an area where a large gap exists between state-of-the-art machine learning and human intelligence.   
https://arxiv.org/abs/2011.15091?s=03
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 07/12/2020 09:22:25
Here is another great video covering current development of AI.
Timestamps for this video:
00:00 Introduction
00:45 Humanity's Next Chapter
03:38 Cathie Wood Discusses AlphaFold
08:40 Elon Musk's Dire Warning
10:20 Netflix Recommends Your Doom
14:09 Detecting Cats and Dogs
15:45 ARK's James Wang on Deep Learning
17:09 The Singularity is Near
Title: Re: How close are we from building a virtual universe?
Post by: syhprum on 07/12/2020 13:23:57
Just a minor quibble: why do correspondents write 1021 when they mean 10 to the power of 21?
There is a perfectly good abbreviation to indicate that you mean "to the power of" on all the keyboards that I have used ("^"), but maybe the articles are written on a pocket device that lacks this abbreviation. 
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 08/12/2020 02:45:13
Just a minor quibble: why do correspondents write 1021 when they mean 10 to the power of 21?
There is a perfectly good abbreviation to indicate that you mean "to the power of" on all the keyboards that I have used ("^"), but maybe the articles are written on a pocket device that lacks this abbreviation. 
Perhaps the simplest explanation is a typo: the key wasn't pressed hard enough to be sensed by the keyboard. There is no autocorrect for this kind of error that I know of.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 08/12/2020 03:01:27
This article just came into my mailbox, and I'd like to share it here since it's closely related to the topic.
Quote
NEWSLETTER ON LINKEDIN
Artificial Intelligence (AI)
 By Bernard Marr


Future Trends And Technology – Insights from Ericsson
 
Innovation and new thought is what makes the world go round. Behind all the ground-breaking technologies such as AI and automation are human minds that are willing to push boundaries and think differently about solving problems, in both business and society.
Investing in true innovation – how to use technology to do different things, as opposed to just doing things differently – has led to sweeping changes in how we communicate, work together, play and look after our health in recent years. In particular, it has allowed businesses and organizations to get closer to their most important asset – the people who use or consume their services – than ever before. This is thanks to the ever-smarter ways in which we are capturing data and using it to overcome challenges, from understanding customer behavior to creating vaccines.
I was fortunate enough to get the chance to talk to two people who are working on this cutting-edge – Jasmeet Sethi and Cristina Pandrea, of Ericsson's ConsumerLab. This is the division within Ericsson responsible for research into current and emerging trends – with a specific focus on how they are being used in the real world today, and what that might mean for tomorrow.
During our conversation, we touched on five key trends that have been identified by the ConsumerLab, which has been collecting and analyzing data on how people interact with technology for more than 20 years. One thing they all have in common is that every one of them has come into its own during the current global pandemic. This is usually for one of two reasons – either because necessity has driven a rapid increase in the pace of adoption, or because they provide a new approach to tackling problems society is currently facing.
Let's look at each of the five trends in turn.
1. Resilient networks
In 2020, more than ever before, we've been dependent on the stability and security of IT systems and networks to keep the world running. As well as the importance of uptime and core stability when it comes to allowing businesses to switch to work-from-home models, it's been shown that cyber attacks have increased dramatically during the pandemic, meaning security is more vital than ever before.
Many of the international efforts to trace the spread of the disease, understand people's behavior in pandemic situations, and to develop vaccines and cures are dependent on the transfer of huge volumes of digital data. Ericsson believes that the amount of data transferred has increased by 40% over mobile networks and 70% over wired broadband networks since the start of the pandemic. So ensuring that infrastructure is reliable and secure has never been so important. The fact that network operators have largely been successful at this hasn't gone unnoticed, Sethi tells me – with customers thanking them with a noticeably higher level of loyalty.
2. Tele-health
Medical consultation, check-ups, examinations, and even diagnoses were increasingly being carried out remotely, even pre-covid, particularly in remote regions or areas where there is a shortage of clinical staff. However, during 2019 they made up just 19% of US healthcare contacts. Ericsson's research has shown that this increased to around 46% during 2020. This is clearly an example of a trend where the pandemic accelerated a change that was already happening. So it's likely that providers will be keen to carry on receiving the benefits they've generated, as we eventually move into a post-covid world.
Here a key challenge comes from the fact that a number of different technologies need to be working together in harmony to ensure patient care doesn't suffer, from video streaming to cloud application platforms and network security protocols. 
3. Borderless workplaces
We saw the impossible happen in 2020 as thousands of organizations mobilized to make remote working possible for their workforces in a very short period of time. But this trend goes beyond "eternal WFH" and points to a future where we have greater flexibility and freedom over where we spend our working hours. Collaborative workplace tools like Zoom and Slack meant the switchover was often relatively hassle-free, and next-generation tools will cater for a future where employees can carry out their duties from anywhere, rather than just stuck at their kitchen tables.
But this shift in social norms brings other problems, such as the danger of isolation, the difficulty of striking a balance between home and work life, or a diminished ability to build a culture within an organization. Solutions in this field look to tackle these challenges, too, rather than simply give us more ways to be connected to the office 24/7.
4. The Experience / Immersive Economy
Touching on issues raised by the previous trend, Ericsson has experimented with providing employees with virtual reality headsets, to make collaborative working more immersive. Pandrea described the benefits of this to me – "The experience was really genuine, it took us by surprise … we'd seen virtual reality before, but this was the first time where we saw 25 people in the same virtual room, having this experience … when you see the others as avatars you get the feeling of being together, it makes a world of difference."
This trend involves creating experiences that mean as little as possible is lost when you move an interaction or event from the real world to the virtual world. Virtual and augmented reality have an important role here, but Sethi points beyond this to an idea he calls the "internet of senses," where devices can feed information to us through all of our five senses. Breakthrough technologies such as the Teslasuit use haptic feedback to greatly increase the feeling of presence in virtual spaces; the suit is used by NASA to train astronauts. Other innovators in this field are working on including our sense of smell, by dispensing fragrances from headset attachments.
Another interesting change related to this field that's been predicted is the rise in the value put on virtual commodities and status versus material goods. Children these days are just as likely to talk boastfully about a rare Fortnite skin, Rocket League car, or Roblox pet as they would about any physical product or status symbol. "If you look at young millionaires they're already driven by virtual status – who has the best status in esports, the number of followers … this trend will be accelerated as we move into the virtual experience economy", Sethi predicts.
5. Autonomous Commerce
Two massive changes to the way we live our lives due to the pandemic have been a big acceleration in the uptake of online retail, and a move away from cash towards contactless payment methods. Cashiers were already being replaced by self-checkouts at a rapid pace pre-2020. But the pickup in speed this year brings us to a point where KFC is operating fully autonomous mobile food trucks in Shanghai. The trucks pilot themselves to customers and serve up socially-distanced meals with no human involvement.
The rush to keep up with changing consumer behavior has also sped up the adoption of cash-free and contactless retail, particularly in emerging markets where cash has traditionally been king. Financial services businesses tapping into technology like 5G networking and AI-powered fraud detection tools are responding to new expectations from customers in this field and, if they are able to predict that behavior accurately, are likely to see strong growth in coming years.
Investing in innovation
Remaining on the cutting-edge of these trends means investing strategically in new ideas and innovation. So we also talked about Ericsson's Startup 5G program, which Pandrea heads up. Here the business looks to be at the head of the pack when it comes to creating the $31 trillion in revenue that it predicts will be generated by 5G platforms and services before 2030.
Pandrea tells me that it is expected that a lot of this will come from services that telcos can bundle with their 5G offerings to help make their customers' lives better. One of the star players is XR Space, which is building a social VR platform using its own hardware that could effectively allow workers to take their office (and entertainment world) with them anywhere they go.
Another is London-based Inception XR, which enables AR experiences to be created from books, helping to bring more immersion and gamification to children's education.
And a third that Pandrea recommends keeping an eye on for a glimpse of the future is PlaySight. It uses AI-powered 360-degree 8k cameras at sports or entertainment events, capable of capturing the action in greater detail than ever before. That data can then be delivered to an audience in any number of ways, including putting them inside VR experiences that let them view from any angle as well as pause and rewind what they are seeing.
Underlying technologies
Clearly, we can see the common threads of broader tech trends that run through these very relevant trends Ericsson is identifying today. AI technologies, as well as extended reality (XR), which includes VR, AR, and mixed reality (MR), are behind the tools that secure our networks, enable us to work efficiently from anywhere, receive remote healthcare, create immersive experiences and conduct autonomous commerce. High-speed networking is essential to every one of them too, and the quantum leap in upload and download speeds of 5G is necessary to make them all possible.
And it's certainly also true that much of the technological progress that is driving real change in business, commerce, society and entertainment has happened in response to the dark times we are living through. But as we start to cautiously look ahead to hopefully brighter days, these trends will go on to play a part in building a safer, smarter and more convenient future. 



Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 08/12/2020 03:22:20
These videos explain about new technology adoption.

What's the S adoption curve mean for disruptive technology like Tesla? What about a double S curve?!
Quote
Wherein Dr. Know-it-all explains what an "S" adoption curve is, how it has functioned historically for technology like automobiles/cars, the internet, cell phones, and even smart phones. And how it matters a great deal for Tesla and other EV companies who are currently disrupting internal combustion engine (ICE) car manufacturers. Also, what happens when the EV adoption curve lines up with the full self driving (FSD) adoption curve?? Watch and find out!
Quote
By the by, as folks have pointed out, and I probably should've noted in the video itself, Tony Seba has been talking about "the tipping point" for years. While I was inspired to work up this video from a Patreon patron, and I don't closely follow Seba, I should have acknowledged that a lot of this is derived from Tony's brilliant ideas over the years. One such video is here:
Tony Seba's conclusion at the end of his video, that technological disruption happens mainly for economic reasons and not necessarily due to government interference, is aligned with my idea that efficiency is a universal instrumental goal.
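For reference, the "S curve" in these videos is just a logistic function: adoption starts slowly, accelerates through the middle, then saturates. A minimal sketch with illustrative parameters:
Code:
import math

def adoption(t, k=1.0, t0=0.0):
    """Logistic 'S curve': slow start, rapid middle, saturating finish."""
    return 1.0 / (1.0 + math.exp(-k * (t - t0)))

for t in range(-6, 7, 2):
    print(t, round(adoption(t), 3))   # ~0.002 at the start, 0.5 at t0, ~0.998 near saturation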
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 12/12/2020 04:16:36
Currently, one of the most rapid adoptions of some form of virtual universe is in the field of self-driving cars. These videos explain it well.

Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 12/12/2020 04:21:23
Tesla's Dojo is clearly aligned with the goal stated in this thread. The next step is to generalize it so it can be applied to more kinds of problems.
An earlier effort that I tried was Microsoft Flight Simulator, which was perhaps used by the 9/11 perpetrators. That's why I think the morality problem of AI users needs to be solved objectively, which I discuss in another thread.
With more powerful AI and a more accurate and precise virtual universe, users' goals can be achieved more easily, including harmful ones. A universal terminal goal is then necessary to distinguish between good and bad goals or intentions.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 12/12/2020 06:03:38
From this short video we can infer that an accurate virtual universe can increase efficiency and reduce cost.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 15/12/2020 06:41:00
I'd like to share a report from a software company which converges with my idea presented here. Extrapolated further, we will eventually have to deal with a universal terminal goal, which I discussed in another thread.
 
Top 8 trends shaping digital transformation in 2021
Quote
IT’s role is more critical than ever in a world that’s increasingly dependent on digital. Organizations are under increasing pressure to stay competitive and create connected experiences. According to our Connectivity benchmark report, IT projects are projected to grow by 40%; and 82% of businesses are now holding their IT teams accountable for delivering connected customer experiences.
To meet these rising demands, organizations are accelerating their digital transformation — which can be defined as the wholesale move to thinking about how to digitize every part of the business, as every part of the business now needs technology to operate. In order to drive scale and efficiency, IT must rethink its operating model to deliver self-serve capabilities and enable innovation across the enterprise.
In this report, we will highlight some of the top trends facing CIOs, IT leaders, and organizations in their digital transformation journey, sourcing data from both MuleSoft proprietary research and third-party findings.
Quote
The future of automation: declarative programming
Uri Sarid, CTO, MuleSoft
“The mounting complexity brought on by an explosion of co-dependent systems, dynamic data, and rising expectations demands a new approach to software. More is expected for software to just work automatically, and more of us expect automation of our digital life and work. In 2021, we’ll see more and more systems be intent-based, and see a new programming model take hold: a declarative one. In this model, we declare an intent — a desired goal or end state — and the software systems connected via APIs in an application network autonomously figure out how to simply make it so.”
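A toy sketch of the intent-based, declarative style Sarid describes: we declare a desired end state, and a reconciler figures out the imperative steps to make it so. All names here are invented for illustration and are not MuleSoft's API.
Code:
# Declarative style: state the intent (desired end state), not the steps.
intent = {"replicas": 3}                 # what we want
state = {"replicas": 1}                  # what currently exists

def reconcile(state, intent):
    """A reconciler derives the imperative actions from the declared intent."""
    diff = intent["replicas"] - state["replicas"]
    actions = ["start_instance"] * diff if diff > 0 else ["stop_instance"] * -diff
    for act in actions:
        print("executing:", act)         # stand-in for real API calls
    state["replicas"] = intent["replicas"]
    return state

state = reconcile(state, intent)         # the system converges to the intent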

Quote
2021 will be the year that data separates organizations from their competitors... and customers
Lindsey Irvine, CMO, MuleSoft
“The reality is that the majority of businesses today, across all industries, aren’t able to deliver truly connected experiences for their customers, partners, and employees — and that’s because delivering connected experiences requires a lot of data, which lives in an average of 900 different systems and applications across the enterprise. Integrating and unifying data across these systems is critical to create a single view of the customer and achieve true digital transformation.
“It’s also the number one reason digital transformation initiatives fail. As the number of systems and applications continues to grow exponentially, teams realize that the key to their success — and their organization’s success — is unlocking the data, wherever it exists, in a way that helps them deliver value faster.”
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 17/12/2020 03:09:19
Building an accurate and precise virtual universe requires a sound and robust scientific method.

https://physicsworld.com/a/madness-in-the-method-why-your-notions-of-how-science-works-are-probably-wrong/

Quote
You know what the scientific method is until you try to define it: it’s a set of rules that scientists adopt to obtain a special kind of knowledge. The list is orderly, teachable and straightforward, at least in principle. But once you start spelling out the rules, you realize that they really don’t capture how scientists work, which is a lot messier. In fact, the rules exclude much of what you’d call science, and includes even more of what you don’t. You even begin to wonder why anyone thought it necessary to specify a “scientific method” at all.

In his new book The Scientific Method: an Evolution of Thinking from Darwin to Dewey, the University of Michigan historian Henry Cowles explains why some people thought it necessary to define “scientific method” in the first place. Once upon a time, he writes, science meant something like knowledge itself – the facts we discover about the world rather than the sometimes unruly way we got them. Over time, however, science came to mean a particular stepwise way that we obtain those facts independent of the humans who follow the method, and independent of the facts themselves.
Quote
Just as nature takes alternative forms of life and selects among them, Darwin argued, so scientists take hypotheses and choose the most robust. Nature has its own “method”, and humans acquire knowledge in an analogous way. Darwin’s scientific work on living creatures is indeed rigorous, as I think contemporary readers will agree, but in the lens of our notions of scientific method it was hopelessly anecdotal, psychological and disorganized. He was, after all, less focused on justifying his beliefs than on understanding nature.
Quote
Following Darwin, the American “pragmatists” – 19th-century philosophers such as Charles Peirce and William James – developed more refined accounts of the scientific method that meshed with their philosophical concerns. For Peirce and James, beliefs were not mental judgements or acts of faith, but habits that individuals develop through long experience. Beliefs are principles of action that are constantly tested against the world, reshaped and tested again, in an endless process. The scientific method is simply a careful characterization of this process.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 18/12/2020 12:52:41
Here is another video describing the latest progress in building a virtual universe. This time it's about the microscopic universe, which is extremely important for living organisms.
Quote
This is Biology's AlexNet moment! DeepMind solves a 50-year old problem in Protein Folding Prediction. AlphaFold 2 improves over DeepMind's 2018 AlphaFold system with a new architecture and massively outperforms all competition. In this Video, we take a look at how AlphaFold 1 works and what we can gather about AlphaFold 2 from the little information that's out there.

OUTLINE:
0:00 - Intro & Overview
3:10 - Proteins & Protein Folding
14:20 - AlphaFold 1 Overview
18:20 - Optimizing a differentiable geometric model at inference
25:40 - Learning the Spatial Graph Distance Matrix
31:20 - Multiple Sequence Alignment of Evolutionarily Similar Sequences
39:40 - Distance Matrix Output Results
43:45 - Guessing AlphaFold 2 (it's Transformers)
53:30 - Conclusion & Comments

AlphaFold 2 Blog: https://deepmind.com/blog/article/alp...
AlphaFold 1 Blog: https://deepmind.com/blog/article/Alp...
AlphaFold 1 Paper: https://www.nature.com/articles/s4158...
MSA Reference: https://arxiv.org/abs/1211.1281
CASP14 Challenge: https://predictioncenter.org/casp14/i...
CASP14 Result Bar Chart: https://www.predictioncenter.org/casp...

Paper Title: High Accuracy Protein Structure Prediction Using Deep Learning

Abstract:
Proteins are essential to life, supporting practically all its functions. They are large complex molecules, made up of chains of amino acids, and what a protein does largely depends on its unique 3D structure. Figuring out what shapes proteins fold into is known as the “protein folding problem”, and has stood as a grand challenge in biology for the past 50 years. In a major scientific advance, the latest version of our AI system AlphaFold has been recognised as a solution to this grand challenge by the organisers of the biennial Critical Assessment of protein Structure Prediction (CASP). This breakthrough demonstrates the impact AI can have on scientific discovery and its potential to dramatically accelerate progress in some of the most fundamental fields that explain and shape our world.

Authors: John Jumper, Richard Evans, Alexander Pritzel, Tim Green, Michael Figurnov, Kathryn Tunyasuvunakool, Olaf Ronneberger, Russ Bates, Augustin Žídek, Alex Bridgland, Clemens Meyer, Simon A A Kohl, Anna Potapenko, Andrew J Ballard, Andrew Cowie, Bernardino Romera-Paredes, Stanislav Nikolov, Rishub Jain, Jonas Adler, Trevor Back, Stig Petersen, David Reiman, Martin Steinegger, Michalina Pacholska, David Silver, Oriol Vinyals, Andrew W Senior, Koray Kavukcuoglu, Pushmeet Kohli, Demis Hassabis.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 25/12/2020 08:30:14
Here is the newest progress by DeepMind toward the generalization of artificial intelligence.
https://www.nature.com/articles/s41586-020-03051-4
Mastering Atari, Go, chess and shogi by planning with a learned model
Quote
Constructing agents with planning capabilities has long been one of the main challenges in the pursuit of artificial intelligence. Tree-based planning methods have enjoyed huge success in challenging domains, such as chess1 and Go2, where a perfect simulator is available. However, in real-world problems, the dynamics governing the environment are often complex and unknown. Here we present the MuZero algorithm, which, by combining a tree-based search with a learned model, achieves superhuman performance in a range of challenging and visually complex domains, without any knowledge of their underlying dynamics. The MuZero algorithm learns an iterable model that produces predictions relevant to planning: the action-selection policy, the value function and the reward. When evaluated on 57 different Atari games3—the canonical video game environment for testing artificial intelligence techniques, in which model-based planning approaches have historically struggled4—the MuZero algorithm achieved state-of-the-art performance. When evaluated on Go, chess and shogi—canonical environments for high-performance planning—the MuZero algorithm matched, without any knowledge of the game dynamics, the superhuman performance of the AlphaZero algorithm5 that was supplied with the rules of the game.
Quote
MuZero is trained only on data generated by MuZero itself; no external data were used to produce the results presented in the article. Data for all figures and tables presented are available in JSON format in the Supplementary Information.
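To make the abstract concrete, here is a heavily simplified sketch of the MuZero idea: three learned functions (representation, dynamics, prediction) and a planner that rolls the learned model forward instead of the real environment. The real system uses deep networks and Monte Carlo tree search; the stubs below are toys of my own invention.
Code:
# Heavily simplified sketch (illustrative only) of planning with a learned model.
# Three learned functions, stubbed here with toys:
#   h(obs)      -> hidden state             (representation)
#   g(state, a) -> next state, reward       (dynamics: the learned model, not the real env)
#   f(state)    -> policy, value            (prediction)

def h(obs):
    return float(obs)

def g(state, action):
    next_state = state + action
    return next_state, (1.0 if next_state == 3 else 0.0)

def f(state):
    return {-1: 0.5, +1: 0.5}, 0.0

def rollout_value(state, depth):
    """Best predicted return from 'state', imagined entirely inside the model g."""
    if depth == 0:
        return f(state)[1]                  # bootstrap with the value estimate
    best = float("-inf")
    for a in (-1, +1):
        s2, r = g(state, a)
        best = max(best, r + rollout_value(s2, depth - 1))
    return best

def plan(obs, depth=3):
    """Pick the action whose imagined rollout accumulates the most reward."""
    root = h(obs)
    def score(a):
        s2, r = g(root, a)
        return r + rollout_value(s2, depth - 1)
    return max((-1, +1), key=score)

print(plan(0))   # prints 1: the planner steps toward the rewarding state 3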
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 25/12/2020 09:22:45
The Great Google Crash: The World’s Dependency Revealed

We long for the day when nobody runs anything.
Todd Underwood - Google SRE
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 28/12/2020 03:56:49
Sooner or later, people will realize that we are making progress toward building an accurate virtual universe. Unless, of course, we go extinct beforehand.
https://twitter.com/elonmusk/status/1343002225916841985?s=03
Quote
Vaccines are just the start. It's also capable in theory of curing almost anything. Turns medicine into a software & simulation problem.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 28/12/2020 04:02:56
Here is the article Elon Musk was tweeting about.
https://berthub.eu/articles/posts/reverse-engineering-source-code-of-the-biontech-pfizer-vaccine/
Quote
Welcome! In this post, we’ll be taking a character-by-character look at the source code of the BioNTech/Pfizer SARS-CoV-2 mRNA vaccine.
Now, these words may be somewhat jarring - the vaccine is a liquid that gets injected in your arm. How can we talk about source code?

This is a good question, so let’s start off with a small part of the very source code of the BioNTech/Pfizer vaccine, also known as BNT162b2, also known as Tozinameran also known as Comirnaty.

(https://berthub.eu/articles/bnt162b2.png)
First 500 characters of the BNT162b2 mRNA. Source: World Health Organization

The BNT162b2 mRNA vaccine has this digital code at its heart. It is 4284 characters long, so it would fit in a bunch of tweets. At the very beginning of the vaccine production process, someone uploaded this code to a DNA printer (yes), which then converted the bytes on disk to actual DNA molecules.
(https://berthub.eu/articles/bioxp-3200.jpg)
A Codex DNA BioXp 3200 DNA printer

Out of such a machine come tiny amounts of DNA, which after a lot of biological and chemical processing end up as RNA (more about which later) in the vaccine vial. A 30 microgram dose turns out to actually contain 30 micrograms of RNA. In addition, there is a clever lipid (fatty) packaging system that gets the mRNA into our cells.

RNA is the volatile ‘working memory’ version of DNA. DNA is like the flash drive storage of biology. DNA is very durable, internally redundant and very reliable. But much like computers do not execute code directly from a flash drive, before something happens, code gets copied to a faster, more versatile yet far more fragile system.

For computers, this is RAM, for biology it is RNA. The resemblance is striking. Unlike flash memory, RAM degrades very quickly unless lovingly tended to. The reason the Pfizer/BioNTech mRNA vaccine must be stored in the deepest of deep freezers is the same: RNA is a fragile flower.

Each RNA character weighs on the order of 0.53·10⁻²¹ grams, meaning there are 6·10¹⁶ characters in a single 30 microgram vaccine dose. Expressed in bytes, this is around 25 petabytes, although it must be said this consists of around 2000 billion repetitions of the same 4284 characters. The actual informational content of the vaccine is just over a kilobyte. SARS-CoV-2 itself weighs in at around 7.5 kilobytes.
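As a quick sanity check on those numbers (assuming 2 bits per RNA character, since there are 4 possible characters):
Code:
chars = 4284                   # length of the BNT162b2 mRNA, per the article
bits_per_char = 2              # 4 possible characters = 2 bits each
print(chars * bits_per_char / 8)          # 1071.0 bytes -> "just over a kilobyte"

dose_grams = 30e-6             # one 30 microgram dose
char_weight = 0.53e-21         # grams per RNA character, per the article
print(f"{dose_grams / char_weight:.1e}")  # ~5.7e16 characters, matching the ~6e16 above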
And the summary is below.
Quote
Summarising
With this, we now know the exact mRNA contents of the BNT162b2 vaccine, and for most parts we understand why they are there:

- The CAP to make sure the RNA looks like regular mRNA
- A known successful and optimized 5’ untranslated region (UTR)
- A codon optimized signal peptide to send the Spike protein to the right place (copied 100% from the original virus)
- A codon optimized version of the original spike, with two ‘Proline’ substitutions to make sure the protein appears in the right form
- A known successful and optimized 3’ untranslated region
- A slightly mysterious poly-A tail with an unexplained ‘linker’ in there

The codon optimization adds a lot of G and C to the mRNA. Meanwhile, using Ψ (1-methyl-3’-pseudouridylyl) instead of U helps evade our immune system, so the mRNA stays around long enough so we can actually help train the immune system.
You can read the details at the link above; it's fascinating.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 02/01/2021 10:37:19
Quote
In this video Elon Musk talks about Tesla Full Self-Driving software remotely at a Chinese AI conference. Elon predicts that Tesla will achieve level 5 autonomy soon, sooner than people can imagine. Elon also indirectly criticizes Waymo, Google's self-driving software company. Waymo depends on LiDAR and HD maps. Most of the time, they train their self-driving software and car in simulation.

In this video he emphasizes that understanding reality is essentially a data compression process. I've mentioned this previously in this thread.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 12/01/2021 02:53:06
Here are some very informative videos explaining how Tesla's Autopilot was developed.



Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 12/01/2021 03:30:15
Some main points I get from the videos are:
- Autopilot builds a virtual universe in its memory space to represent its surrounding environment, based on data input from its sensors.
- Modular concepts are employed to increase efficiency, so many things don't have to start from scratch every time a new feature is added.
- Building the virtual universe is done in real time, which means a lot of new data is acquired, hence a lot of older data must be discarded. Therefore, to make the system work, it must compress the incoming data into meaningful and useful concepts, after filtering out noise and insignificant information. A toy sketch of this kind of filtering follows below.
- That data selection requires a data hierarchy, like the deep belief network I mentioned earlier. Higher-level information (beliefs) determines which data from lower-level belief nodes is kept and used, or discarded and ignored. It's similar to how the human brain works. That's why we sometimes find it hard to convince people by simply presenting facts that contradict their existing belief system, such as flat earthers, the MAGA crowd, or religious fanatics.
- The automation process itself keeps being automated, up through several levels of automation. We are building machines that build machines that build machines, and so on; Ray Kurzweil calls this indirection. And those machines are getting better at achieving the goals put into them. That's why it's getting more urgent for us to find a universal terminal goal, as I discuss in another thread.
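To make the compression point concrete, here is a toy sketch of dead-band filtering, the classic trick process historians use to discard insignificant sensor samples while keeping meaningful changes. This is my own illustration of the general idea, not Tesla's actual pipeline:
Code: [Select]
def deadband_compress(samples, tolerance):
    """Keep only samples that differ from the last kept value by more than
    tolerance; everything in between is treated as insignificant noise."""
    if not samples:
        return []
    kept = [samples[0]]
    for value in samples[1:]:
        if abs(value - kept[-1]) > tolerance:
            kept.append(value)
    return kept

# Eight raw sensor readings collapse into the three that actually matter.
stream = [20.0, 20.1, 19.9, 20.0, 25.3, 25.4, 25.2, 30.1]
print(deadband_compress(stream, tolerance=1.0))  # [20.0, 25.3, 30.1]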
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 14/01/2021 05:59:48
Quote
Wherein Dr. Know-it-all discusses the work of Dr. Arthur Choi (UCLA) and others concerning the quest to understand how deep convolutional neural networks function. This new field, XAI, or explainable AI, uses decision trees, formal logic, and even tractable boolean circuits (simulated logic gates) to explain why machine learning using deep neural nets functions so well some of the time, but so poorly other times.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 19/01/2021 22:32:59
Last year may have severed our connections with the physical world, but in the digital realm, AI thrived. Take NeurIPS, the crown jewel of AI conferences. While lacking the usual backdrop of the dazzling mountains of British Columbia or the beaches of Barcelona, the annual AI extravaganza highlighted a slew of “big picture” problems—bias, robustness, generalization—that will encompass the field for years to come.

On the nerdier side, scientists further explored the intersection between AI and our own bodies. Core concepts in deep learning, such as backpropagation, were considered a plausible means by which our brains “assign fault” in biological networks—allowing the brain to learn. Others argued it’s high time to double-team intelligence, combining the reigning AI “golden child” method—deep learning—with other methods, such as those that guide efficient search.

Here are four areas we’re keeping our eyes on in 2021. They touch upon outstanding AI problems, such as reducing energy consumption, nixing the need for exuberant learning examples, and teaching AI some good ole’ common sense.

https://singularityhub.com/2021/01/05/2021-could-be-a-banner-year-for-ai-if-we-solve-these-4-problems/
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 23/01/2021 03:27:21
https://towardsdatascience.com/introduction-to-bayesian-inference-18e55311a261

Motivation
Imagine the following scenario: you are driving an ambulance to a hospital and have to decide between route A and B. In order to save your patient, you need to arrive in less than 15 minutes. If we estimate that route A takes 12 minutes and route B takes 10 minutes, which would you choose? Route B seems faster, so why not?
The information provided so far consisted of point estimates of routes A and B. Now, let’s add information about the uncertainty of each prediction: route A takes 12 min ±1min, while route B takes 10 min ±6min.
Now it seems like the prediction for route B is significantly more uncertain, possibly risking taking longer than the 15-minute limit. Adding information about uncertainty can make us change our decision from taking route B to taking route A.

More broadly, consider the following cases:
1. We want to estimate a quantity which does not have a fixed value; instead, it can change between different values.
2. Regardless of whether the true value is fixed or not, we are interested in knowing the uncertainty of our estimation.
The ambulance example was intended to illustrate the second case. For the first case, we can have a quick look at the work of Nobel Prize winning economist Christopher Sims. I will simply cite his student Toshiaki Watanabe:
I once asked Chris why he favoured the Bayesian approach. He replied by pointing to the Lucas critique, which argues that when government and central bank policies change, so do the model parameters, so that they should be regarded not as constants but as stochastic variables.
For both cases, Bayesian inference can be used to model our variables of interest as a whole distribution, instead of a unique value or point estimate.

Judea Pearl describes it this way, in The Book of Why [2]:
(…) Bayes’s rule is formally an elementary consequence of his definition of conditional probability. But epistemologically, it is far from elementary. It acts, in fact, as a normative rule for updating beliefs in response to evidence.
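The ambulance example is easy to work through in code. A minimal sketch, assuming the ± figures are standard deviations of normally distributed travel times (my assumption; the article doesn't say):
Code: [Select]
from math import erf, sqrt

def prob_late(mean, sd, limit=15.0):
    """P(travel time > limit) for a normally distributed travel time."""
    cdf = 0.5 * (1 + erf((limit - mean) / (sd * sqrt(2))))
    return 1 - cdf

for name, mean, sd in [("A", 12.0, 1.0), ("B", 10.0, 6.0)]:
    print(f"Route {name}: P(>15 min) = {prob_late(mean, sd):.1%}")

# Route A: ~0.1% -- almost certainly on time despite the slower mean.
# Route B: ~20%  -- the faster point estimate hides a real risk of being late.
So the route with the better point estimate is the one that risks losing the patient; that is exactly the information the uncertainty adds.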
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 24/01/2021 07:45:31
“Our approach is pretty much the exact opposite of the traditional pharmaceutical approach. With our approach, there is no drug, no poison at all – just a little program written in DNA. We’ve effectively taken targeting out of the realm of chemistry and brought it into the realm of information.”
Matthew Scholz, Co-founder & CEO, Oisín Biotechnologies

https://www.longevity.technology/promising-restorative-therapy-could-potentially-be-available-within-5-years/
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 25/01/2021 06:57:51
The Coronavirus Is Mutating. Here’s What We Know | WSJ

Another example of how an accurate virtual universe can help accelerate research through trial and error, by saving the required resources, especially time.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 27/01/2021 07:17:28
The Promise (And Realities) Of AI / ML
https://energycentral.com/c/iu/promise-and-realities-ai-ml
Quote
Artificial Intelligence has been getting a bad rap of late, with numerous opinion pieces and articles describing how it has struggled to live up to the hype. Arguments have centered around computational cost, lack of high-quality data, and the difficulty in getting past the high nineties in percent accuracy, all resulting in the continued need to have humans in the loop.
Quote
AI & ML are simply tools for building complex (and sometimes non-linear) models that consider large amounts of information. They are most potent in applications where their pattern finding power significantly exceeds human capability. If we adjust our attitude and expectations, we can leverage their power to bring about all sorts of tangible outcomes for humanity.

With this type of re-calibration, our mission should be to use AI to help human decision makers, rather than replace them. Machine learning is now being used to build weather and climate impact models that help infrastructure managers respond with accuracy and allocate their resources efficiently. While these models do not perfectly match the ground truth, they are much more accurate and precise than simple heuristics, and can save millions of dollars through more efficient capital allocation.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 27/01/2021 07:26:05
https://spectrum.ieee.org/computing/software/its-too-easy-to-hide-bias-in-deeplearning-systems
Artificial intelligence makes it hard to tell when decision-making is biased
Quote
When advertisers create a Facebook ad, they target the people they want to view it by selecting from an expansive list of interests. “You can select people who are interested in football, and they live in Cote d’Azur, and they were at this college, and they also like drinking,” Goga says. But the explanations Facebook provides typically mention only one interest, and the most general one at that. Mislove assumes that’s because Facebook doesn’t want to appear creepy; the company declined to comment for this article, so it’s hard to be sure.

Google and Twitter ads include similar explanations. All three platforms are probably hoping to allay users’ suspicions about the mysterious advertising algorithms they use with this gesture toward transparency, while keeping any unsettling practices obscured. Or maybe they genuinely want to give users a modicum of control over the ads they see—the explanation pop-ups offer a chance for users to alter their list of interests. In any case, these features are probably the most widely deployed example of algorithms being used to explain other algorithms. In this case, what’s being revealed is why the algorithm chose a particular ad to show you.

The world around us is increasingly choreographed by such algorithms. They decide what advertisements, news, and movie recommendations you see. They also help to make far more weighty decisions, determining who gets loans, jobs, or parole. And in the not-too-distant future, they may decide what medical treatment you’ll receive or how your car will navigate the streets. People want explanations for those decisions. Transparency allows developers to debug their software, end users to trust it, and regulators to make sure it’s safe and fair.

The problem is that these automated systems are becoming so frighteningly complex that it’s often very difficult to figure out why they make certain decisions. So researchers have developed algorithms for understanding these decision-making automatons, forming the new subfield of explainable AI.
Quote
In 2017, the Defense Advanced Research Projects Agency launched a US $75 million XAI project. Since then, new laws have sprung up requiring such transparency, most notably Europe’s General Data Protection Regulation, which stipulates that when organizations use personal data for “automated decision-making, including profiling,” they must disclose “meaningful information about the logic involved.” One motivation for such rules is a concern that black-box systems may be hiding evidence of illegal, or perhaps just unsavory, discriminatory practices.
Quote
As a result, XAI systems are much in demand. And better policing of decision-making algorithms would certainly be a good thing. But even if explanations are widely required, some researchers worry that systems for automated decision-making may appear to be fair when they really aren’t fair at all.

For example, a system that judges loan applications might tell you that it based its decision on your income and age, when in fact it was your race that mattered most. Such bias might arise because it reflects correlations in the data that was used to train the AI, but it must be excluded from decision-making algorithms lest they act to perpetuate unfair practices of the past.

The challenge is how to root out such unfair forms of discrimination. While it’s easy to exclude information about an applicant’s race or gender or religion, that’s often not enough. Research has shown, for example, that job applicants with names that are common among African Americans receive fewer callbacks, even when they possess the same qualifications as someone else.

A computerized résumé-screening tool might well exhibit the same kind of racial bias, even if applicants were never presented with checkboxes for race. The system may still be racially biased; it just won’t “admit” to how it really works, and will instead provide an explanation that’s more palatable.

Regardless of whether the algorithm explicitly uses protected characteristics such as race, explanations can be specifically engineered to hide problematic forms of discrimination. Some AI researchers describe this kind of duplicity as a form of “fairwashing”: presenting a possibly unfair algorithm as being fair.

 Whether deceptive systems of this kind are common or rare is unclear. They could be out there already but well hidden, or maybe the incentive for using them just isn’t great enough. No one really knows. What’s apparent, though, is that the application of more and more sophisticated forms of AI is going to make it increasingly hard to identify such threats.
Quote
No company would want to be perceived as perpetuating antiquated thinking or deep-rooted societal injustices. So a company might hesitate to share exactly how its decision-making algorithm works to avoid being accused of unjust discrimination. Companies might also hesitate to provide explanations for decisions rendered because that information would make it easier for outsiders to reverse engineer their proprietary systems. Cynthia Rudin, a computer scientist at Duke University, in Durham, N.C., who studies interpretable machine learning, says that the “explanations for credit scores are ridiculously unsatisfactory.” She believes that credit-rating agencies obscure their rationales intentionally. “They’re not going to tell you exactly how they compute that thing. That’s their secret sauce, right?”

And there’s another reason to be cagey. Once people have reverse engineered your decision-making system, they can more easily game it. Indeed, a huge industry called “search engine optimization” has been built around doing just that: altering Web pages superficially so that they rise to the top of search rankings.
What I see from this trend is that information technology is converging toward the building of a virtual universe. Competition to become the first/biggest/best AI system builder for selfish motivations could be redirected into more collaborative efforts by promoting a universal terminal goal.
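The proxy problem described above is easy to reproduce. Below is a small synthetic sketch (entirely made-up data, purely for illustration): the protected attribute is never shown to the model, yet its decisions still track it through a correlated proxy feature, the situation a "fairwashed" explanation would paper over:
Code: [Select]
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, n)             # protected attribute (excluded from the model)
zip_code = group + rng.normal(0, 0.3, n)  # proxy feature strongly correlated with group
income = rng.normal(50, 10, n)            # legitimate feature, independent of group
# Historical approvals were biased against group 1:
approved = ((income > 50) & (group == 0)).astype(int)

X = np.column_stack([income, zip_code])   # note: group itself is not a feature
model = LogisticRegression(max_iter=1000).fit(X, approved)

for g in (0, 1):
    rate = model.predict(X[group == g]).mean()
    print(f"approval rate for group {g}: {rate:.0%}")
# The model never saw the protected attribute, yet its approval rates differ
# sharply between the two groups: the bias rides in on the proxy.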
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 28/01/2021 11:52:07
'Liquid' machine-learning system adapts to changing conditions

Quote
MIT researchers have developed a type of neural network that learns on the job, not just during its training phase. These flexible algorithms, dubbed "liquid" networks, change their underlying equations to continuously adapt to new data inputs. The advance could aid decision making based on data streams that change over time, including those involved in medical diagnosis and autonomous driving.   
https://techxplore.com/news/2021-01-liquid-machine-learning-conditions.amp?__twitter_impression=true

Quote
  Hasani designed a neural network that can adapt to the variability of real-world systems. Neural networks are algorithms that recognize patterns by analyzing a set of "training" examples. They're often said to mimic the processing pathways of the brain—Hasani drew inspiration directly from the microscopic nematode, C. elegans. "It only has 302 neurons in its nervous system," he says, "yet it can generate unexpectedly complex dynamics."

Hasani coded his neural network with careful attention to how C. elegans neurons activate and communicate with each other via electrical impulses. In the equations he used to structure his neural network, he allowed the parameters to change over time based on the results of a nested set of differential equations. 

In the future, we will have AI that keeps learning from real-world experience, not just in a training phase. It is getting more humanlike.
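For flavor, here is a loose sketch of the core idea: a neuron whose effective time constant is itself modulated by the input, integrated with simple Euler steps. This is my own toy rendition inspired by liquid time-constant networks, not Hasani's published equations:
Code: [Select]
import math

def liquid_neuron(inputs, dt=0.01, tau=1.0, a=1.0):
    """Euler integration of a toy 'liquid' neuron: an input-dependent gate f
    changes the effective time constant on the fly, so the same cell reacts
    quickly to strong inputs and relaxes slowly otherwise."""
    x, trace = 0.0, []
    for u in inputs:
        f = 1 / (1 + math.exp(-u))       # input-dependent gate
        dx = -(1 / tau + f) * x + f * a  # decay rate and drive both depend on f
        x += dt * dx
        trace.append(x)
    return trace

# Feed a step input and watch the state adapt at an input-dependent rate.
trace = liquid_neuron([0.0] * 100 + [5.0] * 100)
print(f"state before step: {trace[99]:.3f}, after step: {trace[-1]:.3f}")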
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 29/01/2021 12:55:51
Quote
   Risky behaviors such as smoking, alcohol and drug use, speeding, or frequently changing sexual partners result in enormous health and economic consequences and lead to associated costs of an estimated 600 billion dollars a year in the US alone. In order to define measures that could reduce these costs, a better understanding of the basis and mechanisms of risk-taking is needed.
Quote
Specific characteristics were found in several areas of the brain: In the hypothalamus, where the release of hormones (such as orexin, oxytocin and dopamine) controls the vegetative functions of the body; in the hippocampus, which is essential for storing memories; in the dorsolateral prefrontal cortex, which plays an important role in self-control and cognitive deliberation; in the amygdala, which controls, among other things, the emotional reaction to danger; and in the ventral striatum, which is activated when processing rewards.   
Quote
  The researchers were surprised by the measurable anatomical differences they discovered in the cerebellum, an area that is not usually included in studies of risk behaviors on the assumption that it is mainly involved in fine motor functions. In recent years, however, significant doubts have been raised about this hypothesis – doubts which are now backed by the current study. 
Quote
  “It appears that the cerebellum does after all play an important role in decision-making processes such as risk-taking behavior,” confirms Aydogan. “In the brains of more risk-tolerant individuals, we found less gray matter in these areas. How this gray matter affects behavior, however, still needs to be studied further.” 
https://neurosciencenews.com/brain-risky-behavior-17633/

Risk-taking is an important factor in decision making, which we need to understand deeply so that it can be simulated in a virtual universe.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 30/01/2021 15:28:03
Someone has come to a similar conclusion as I posted here. This GME 'riot' should be a wake-up call.
Quote
  Joscha Bach (@Plinz) tweeted at 11:20 AM on Fri, Jan 29, 2021:
In the long run, machine learning and a publicly accessible stock market cannot coexist
(https://twitter.com/Plinz/status/1355007909281718274?s=03) 

Quote
  Joscha Bach (@Plinz) tweeted at 7:54 PM on Fri, Jan 29, 2021:
The financial system is software executed by humans, full of holes and imperfections, and very hard to update and maintain. Using substantial computational resources to discover and exploit its imperfections will eventually nuke it into oblivion
(https://twitter.com/Plinz/status/1355137134789681158?s=03) 
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 30/01/2021 23:37:13
A financial system should be a tool to redistribute resources toward optimally achieving the common goals of society. It's akin to the circulatory system in multicellular organisms.
While the current financial system enables innovators to thrive, by convincing people to contribute to their inventions and to profit from them, it also enables other financial actors to gamble with someone else's money. They take the profits when they win, but get away, or get bailed out, when they lose.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 30/01/2021 23:44:00
A free market is supposed to be a self-organizing system. But if some parts of the system aggregate and accumulate enough power to manipulate or bypass its self-regulatory functions, they can accumulate more resources for themselves while depriving and sacrificing others, causing the entire structure to collapse. It's akin to the behavior of cancerous cells.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 31/01/2021 03:28:17
https://www.engadget.com/autox-fully-driverless-robotaxi-china-145126521.html
Quote
  Driverless robotaxis are now available for public rides in China
AutoX is the first in the country to offer rides without safety drivers. 
Quote
  After lots of tests, it’s now possible to hail a truly driverless robotaxi in China. AutoX has become the first in the country to offer public rides in autonomous vehicles without safety drivers. You’ll need to sign up for a pilot program in Shenzhen and use membership credits, but after that you can hop in a modified Chrysler Pacifica to travel across town without seeing another human being. 

Quote
  Fully driverless robotaxis are still very rare anywhere in the world, and it’ll take a combination of refined technology and updated regulation before they’re relatively commonplace. This is an important step in that direction, though. They might get a boost in the current climate, though. The COVID-19 pandemic has added risk to conventional ride hailing for both drivers and passengers, and removing drivers could make this one of the safest travel options for people without cars of their own. 
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 31/01/2021 05:56:09
https://bdtechtalks.com/2020/09/21/gpt-3-economy-business-model/

Quote
  In the blog post where it declared the GPT-3 API, OpenAI stated three key reasons for not open-sourcing the deep learning model. The first was, obviously, to cover the costs of their ongoing research. Second, but equally important, is running GPT-3 requires vast compute resources that many companies don’t have. Third (which I won’t get into in this post) is to prevent misuse and harmful applications.

Based on this information, we know that to make GPT-3 profitable, OpenAI will need to break even on the costs of research and development, and also find a business model that turns in profits on the expenses of running the model. 

Quote
  In general, machine learning algorithms can perform a single, narrowly defined task. This is especially true for natural language processing, which is much more complicated than other fields of artificial intelligence. To repurpose a machine learning model for a new task, you must retrain it from scratch or fine-tune it with new examples, a process known as transfer learning.

But contrary to other machine learning models, GPT-3 is capable of zero-shot learning, which means it can perform many new tasks without the need for new training. For many other tasks, it can perform one-shot learning: Give it one example and it will be able to expand to other similar tasks. Theoretically, this makes it ideal as a general-purpose AI technology that can support many new applications.

A significant portion of their research budget goes to the stellar salaries OpenAI has to pay the highly coveted AI talent it has hired for the task. I wonder how long it will take for an AGI to surpass the capabilities of its own creators, so that human AI talent is no longer needed. It looks like they are facing a dilemma: if they don't do it, their competitors are ready to surpass them, which would make their past and current efforts meaningless.
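The zero-shot vs one-shot distinction is easiest to see in the prompt itself, since GPT-3 consumes plain text. A minimal sketch (the task and examples are my own illustration):
Code: [Select]
def build_prompt(task, examples, query):
    """Assemble a text prompt: zero-shot if examples is empty,
    one-shot or few-shot if it contains worked examples."""
    lines = [task]
    for source, target in examples:
        lines.append(f"Input: {source}\nOutput: {target}")
    lines.append(f"Input: {query}\nOutput:")
    return "\n\n".join(lines)

# Zero-shot: the model must infer the task from the instruction alone.
print(build_prompt("Translate English to French.", [], "cheese"))

# One-shot: a single worked example, with no retraining or fine-tuning.
print(build_prompt("Translate English to French.",
                   [("sea otter", "loutre de mer")], "cheese"))
The point is that the "repurposing" happens entirely in the input text; the model's weights never change.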
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 03/02/2021 11:21:29
https://venturebeat.com/2021/01/28/ai-holds-the-key-to-even-better-ai/
Quote
  For all the talk about how artificial intelligence technology is transforming entire industries, the reality is that most businesses struggle to obtain real value from AI. 65% of organizations that have invested in AI in recent years haven’t yet seen any tangible gains from those investments, according to a 2019 survey conducted by MIT Sloan Management Review and the Boston Consulting Group. And a quarter of businesses implementing AI projects see at least 50% of those projects fail, with “lack of skilled staff” and “unrealistic expectations” among the top reasons for failure, per research from IDC. 
Quote
  Encouragingly, AI is already being leveraged to simplify other tech-related tasks, like writing and reviewing code (which itself is built by AI). The next phase of the deep learning revolution will involve similar complementary tools. Over the next five years, expect to see such capabilities slowly become available commercially to the public. 
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 04/02/2021 06:35:31
https://www.linkedin.com/pulse/fake-news-rampant-here-how-artificial-intelligence-ai-bernard-marr/
Quote
One of the latest collaborations between artificial intelligence and humans is further evidence of how machines and humans can create better results when working together. Artificial intelligence (AI) is now on the job to combat the spread of misinformation on the internet and social platforms thanks to the efforts of start-ups such as Logically. While AI is able to analyze the enormous amounts of info generated daily on a scale that's impossible for humans, ultimately, humans need to be part of the process of fact-checking to ensure credibility. As Lyric Jain, founder and CEO of Logically, said, toxic news travels faster than the truth. Our world desperately needs a way to discern truth from fiction in our news and public, political and economic discussions, and artificial intelligence will help us do that.
Quote
The Fake News “Infodemic”

People are inundated with info every single day. Each minute, there are 98,000 tweets, 160 million emails sent, and 600 videos uploaded to YouTube. Politicians. Marketers. News outlets. Plus, there are countless individuals spewing their opinions since self-publishing is so easy. People crave a way to sort through all the information to find valuable nuggets they can use in their own life. They want facts, and companies are starting to respond often by using machine learning and AI tools.
Quote
As the pursuit of fighting fake news becomes more sophisticated, technology leaders will continue to work to find even better ways to sort out fact from fiction, as well as refine the AI tools that can help fight disinformation. Deep learning can help automate some of the steps in fake news detection, according to a team of researchers at DarwinAI and Canada's University of Waterloo. They are segmenting fact-checking into various sub-tasks, including stance detection, where the system is given a claim on a news story plus other stories on the same subject to determine if those other stories support or refute the claim in the original piece.
As long as we believe that there's an objective reality, we will need reliable information sources which reflect it accurately, or which are at least consistent with each other. This trend seems to keep bringing us closer to a virtual universe.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 05/02/2021 06:40:03
This is why we need the ability to distinguish between objective reality vs alternative reality.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 09/02/2021 12:33:13
To simulate the universe, it is necessary to simulate consciousness as well, and we need to understand it first.

A new theory of brain organization takes aim at the mystery of consciousness

https://neurosciencenews.com/brain-organization-consciousness-15132/
Quote
Consciousness is one of the brain’s most enigmatic mysteries. A new theory, inspired by thermodynamics, takes a high-level perspective of how neural networks in the brain transiently organize to give rise to memories, thought and consciousness.

The key to awareness is the ebb and flow of energy: when neurons functionally tag together to support information processing, their activity patterns synchronize like ocean waves. This process is inherently guided by thermodynamic principles, which, like an invisible hand, promote neural connections that favor conscious awareness. Disruptions in this process break down communication between neural networks, giving rise to neurological disorders such as epilepsy, autism or schizophrenia.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 11/02/2021 10:59:28
https://www.quantamagazine.org/brains-background-noise-may-hold-clues-to-persistent-mysteries-20210208/

Quote

Brain’s ‘Background Noise’ May Hold Clues to Persistent Mysteries
By Elizabeth Landau, February 8, 2021

By digging out signals hidden within the brain’s electrical chatter, scientists are getting new insights into sleep, aging and more.

[Image: an illustration of a human brain against “pink noise” static. Credit: Olena Shmahalo/Quanta Magazine; noise generated by Thomas Donoghue]
At a sleep research symposium in January 2020, Janna Lendner presented findings that hint at a way to look at people’s brain activity for signs of the boundary between wakefulness and unconsciousness. For patients who are comatose or under anesthesia, it can be all-important that physicians make that distinction correctly. Doing so is trickier than it might sound, however, because when someone is in the dreaming state of rapid-eye movement (REM) sleep, their brain produces the same familiar, smoothly oscillating brain waves as when they are awake.

Lendner argued, though, that the answer isn’t in the regular brain waves, but rather in an aspect of neural activity that scientists might normally ignore: the erratic background noise.

Some researchers seemed incredulous. “They said, ‘So, you’re telling me that there’s, like, information in the noise?’” said Lendner, an anesthesiology resident at the University Medical Center in Tübingen, Germany, who recently completed a postdoc at the University of California, Berkeley. “I said, ‘Yes. Someone’s noise is another one’s signal.’”
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 15/02/2021 12:53:10
Mind Reading For Brain-To-Text Communication!
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 25/02/2021 05:08:57
Artificial Neural Nets Finally Yield Clues to How Brains Learn
https://www.quantamagazine.org/artificial-neural-nets-finally-yield-clues-to-how-brains-learn-20210218/
Quote
The learning algorithm that enables the runaway success of deep neural networks doesn’t work in biological brains, but researchers are finding alternatives that could.

Quote
Today, deep nets rule AI in part because of an algorithm called backpropagation, or backprop. The algorithm enables deep nets to learn from data, endowing them with the ability to classify images, recognize speech, translate languages, make sense of road conditions for self-driving cars, and accomplish a host of other tasks.

But real brains are highly unlikely to be relying on the same algorithm. It’s not just that “brains are able to generalize and learn better and faster than the state-of-the-art AI systems,” said Yoshua Bengio, a computer scientist at the University of Montreal, the scientific director of the Quebec Artificial Intelligence Institute and one of the organizers of the 2007 workshop. For a variety of reasons, backpropagation isn’t compatible with the brain’s anatomy and physiology, particularly in the cortex.
(https://d2r55xnwy6nx47.cloudfront.net/uploads/2021/02/Simulating-a-Neuron.svg)
Quote
However, it was obvious even in the 1960s that solving more complicated problems required one or more “hidden” layers of neurons sandwiched between the input and output layers. No one knew how to effectively train artificial neural networks with hidden layers — until 1986, when Hinton, the late David Rumelhart and Ronald Williams (now of Northeastern University) published the backpropagation algorithm.

The algorithm works in two phases. In the “forward” phase, when the network is given an input, it infers an output, which may be erroneous. The second “backward” phase updates the synaptic weights, bringing the output more in line with a target value.
To understand this process, think of a “loss function” that describes the difference between the inferred and desired outputs as a landscape of hills and valleys. When a network makes an inference with a given set of synaptic weights, it ends up at some location on the loss landscape. To learn, it needs to move down the slope, or gradient, toward some valley, where the loss is minimized to the extent possible. Backpropagation is a method for updating the synaptic weights to descend that gradient.

In essence, the algorithm’s backward phase calculates how much each neuron’s synaptic weights contribute to the error and then updates those weights to improve the network’s performance. This calculation proceeds sequentially backward from the output layer to the input layer, hence the name backpropagation. Do this over and over for sets of inputs and desired outputs, and you’ll eventually arrive at an acceptable set of weights for the entire neural network.
Quote
Impossible for the Brain
The invention of backpropagation immediately elicited an outcry from some neuroscientists, who said it could never work in real brains. The most notable naysayer was Francis Crick, the Nobel Prize-winning co-discoverer of the structure of DNA who later became a neuroscientist. In 1989 Crick wrote, “As far as the learning process is concerned, it is unlikely that the brain actually uses back propagation.”

Backprop is considered biologically implausible for several major reasons. The first is that while computers can easily implement the algorithm in two phases, doing so for biological neural networks is not trivial. The second is what computational neuroscientists call the weight transport problem: The backprop algorithm copies or “transports” information about all the synaptic weights involved in an inference and updates those weights for more accuracy. But in a biological network, neurons see only the outputs of other neurons, not the synaptic weights or internal processes that shape that output. From a neuron’s point of view, “it’s OK to know your own synaptic weights,” said Yamins. “What’s not okay is for you to know some other neuron’s set of synaptic weights.”

(https://d2r55xnwy6nx47.cloudfront.net/uploads/2021/02/Backpropagation.svg)
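For readers who want to see the two phases in actual code, below is a minimal numpy sketch of backpropagation: a one-hidden-layer network learning XOR. This is my own toy example, not from the article:
Code: [Select]
import numpy as np

rng = np.random.default_rng(42)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)  # XOR needs a hidden layer

W1, b1 = rng.normal(0, 1, (2, 8)), np.zeros(8)
W2, b2 = rng.normal(0, 1, (8, 1)), np.zeros(1)
sigmoid = lambda z: 1 / (1 + np.exp(-z))

for step in range(5000):
    # Forward phase: infer an output, which may be erroneous.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward phase: walk the error back from the output layer toward the
    # input layer, computing each weight's contribution to the loss.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    # Descend the gradient of the loss landscape.
    lr = 1.0
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(axis=0)

print(out.round(3).ravel())  # approaches [0, 1, 1, 0]
Note how the backward phase needs to know W2 in order to compute d_h; that is exactly the "weight transport" the article says biological neurons cannot do.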
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 25/02/2021 05:28:05
Artificial Neural Nets Finally Yield Clues to How Brains Learn
https://www.quantamagazine.org/artificial-neural-nets-finally-yield-clues-to-how-brains-learn-20210218/
Quote
Predicting Perceptions
The constraint that neurons can learn only by reacting to their local environment also finds expression in new theories of how the brain perceives. Beren Millidge, a doctoral student at the University of Edinburgh and a visiting fellow at the University of Sussex, and his colleagues have been reconciling this new view of perception — called predictive coding — with the requirements of backpropagation. “Predictive coding, if it’s set up in a certain way, will give you a biologically plausible learning rule,” said Millidge.

Predictive coding posits that the brain is constantly making predictions about the causes of sensory inputs. The process involves hierarchical layers of neural processing. To produce a certain output, each layer has to predict the neural activity of the layer below. If the highest layer expects to see a face, it predicts the activity of the layer below that can justify this perception. The layer below makes similar predictions about what to expect from the one beneath it, and so on. The lowest layer makes predictions about actual sensory input — say, the photons falling on the retina. In this way, predictions flow from the higher layers down to the lower layers.
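Here is a very rough sketch of that loop, using my own drastic simplification of linear predictive coding rather than Millidge's actual model: each level holds a belief, predicts the level below, and nudges its belief using only locally available prediction errors:
Code: [Select]
# Two-level toy hierarchy inferring the hidden cause of a sensory input.
x = 2.0            # sensory input (say, a pixel intensity)
w1, w2 = 1.5, 0.8  # fixed generative weights: level 1 predicts x, level 2 predicts level 1
r1, r2 = 0.0, 0.0  # beliefs held at levels 1 and 2
lr = 0.05

for _ in range(500):
    e0 = x - w1 * r1   # error between the input and level 1's prediction of it
    e1 = r1 - w2 * r2  # error between level 1's belief and level 2's prediction
    # Each belief moves to explain the error below while staying consistent
    # with the prediction from above; only local quantities are used.
    r1 += lr * (w1 * e0 - e1)
    r2 += lr * (w2 * e1)

print(f"r1 = {r1:.2f} (prediction w1*r1 = {w1*r1:.2f}, input x = {x})")
print(f"r2 = {r2:.2f}")
At convergence the prediction errors vanish and the beliefs settle on values that jointly explain the input, with no backprop-style weight transport required.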
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 25/02/2021 09:05:16
Quote
Pyramidal Neurons
Some scientists have taken on the nitty-gritty task of building backprop-like models based on the known properties of individual neurons. Standard neurons have dendrites that collect information from the axons of other neurons. The dendrites transmit signals to the neuron’s cell body, where the signals are integrated. That may or may not result in a spike, or action potential, going out on the neuron’s axon to the dendrites of post-synaptic neurons.

But not all neurons have exactly this structure. In particular, pyramidal neurons — the most abundant type of neuron in the cortex — are distinctly different. Pyramidal neurons have a treelike structure with two distinct sets of dendrites. The trunk reaches up and branches into what are called apical dendrites. The root reaches down and branches into basal dendrites.
(https://d2r55xnwy6nx47.cloudfront.net/uploads/2021/02/Neurons.svg)
Quote
Models developed independently by Kording in 2001, and more recently by Blake Richards of McGill University and the Quebec Artificial Intelligence Institute and his colleagues, have shown that pyramidal neurons could form the basic units of a deep learning network by doing both forward and backward computations simultaneously. The key is in the separation of the signals entering the neuron for forward-going inference and for backward-flowing errors, which could be handled in the model by the basal and apical dendrites, respectively. Information for both signals can be encoded in the spikes of electrical activity that the neuron sends down its axon as an output.

In the latest work from Richards’ team, “we’ve gotten to the point where we can show that, using fairly realistic simulations of neurons, you can train networks of pyramidal neurons to do various tasks,” said Richards. “And then using slightly more abstract versions of these models, we can get networks of pyramidal neurons to learn the sort of difficult tasks that people do in machine learning.”
There is so much information densely packed into this single article that I found it hard to compress it any further.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 25/02/2021 09:18:48
Quote
The Role of Attention
An implicit requirement for a deep net that uses backprop is the presence of a “teacher”: something that can calculate the error made by a network of neurons. But “there is no teacher in the brain that tells every neuron in the motor cortex, ‘You should be switched on and you should be switched off,’” said Pieter Roelfsema of the Netherlands Institute for Neuroscience in Amsterdam.
Quote
Roelfsema thinks the brain’s solution to the problem is in the process of attention. In the late 1990s, he and his colleagues showed that when monkeys fix their gaze on an object, neurons that represent that object in the cortex become more active. The monkey’s act of focusing its attention produces a feedback signal for the responsible neurons. “It is a highly selective feedback signal,” said Roelfsema. “It’s not an error signal. It is just saying to all those neurons: You’re going to be held responsible [for an action].”

Roelfsema’s insight was that this feedback signal could enable backprop-like learning when combined with processes revealed in certain other neuroscientific findings. For example, Wolfram Schultz of the University of Cambridge and others have shown that when animals perform an action that yields better results than expected, the brain’s dopamine system is activated. “It floods the whole brain with neural modulators,” said Roelfsema. The dopamine levels act like a global reinforcement signal.

In theory, the attentional feedback signal could prime only those neurons responsible for an action to respond to the global reinforcement signal by updating their synaptic weights, said Roelfsema. He and his colleagues have used this idea to build a deep neural network and study its mathematical properties. “It turns out you get error backpropagation. You get basically the same equation,” he said. “But now it became biologically plausible.”
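Here is a cartoon of that scheme in code, my own toy rendition of the idea rather than Roelfsema's published model: only the pathway tagged by attentional feedback gets updated, and the update is driven by a global dopamine-like reward prediction error:
Code: [Select]
import random

w = [0.0, 0.0]  # synaptic strengths for two candidate actions
expected = 0.0  # running expectation of reward
lr, eps = 0.1, 0.1

for trial in range(1000):
    # Occasionally explore the non-preferred action.
    if random.random() < eps:
        a = random.randrange(2)
    else:
        a = 0 if w[0] >= w[1] else 1
    reward = 1.0 if a == 0 else 0.2  # action 0 is genuinely better
    dopamine = reward - expected     # global reinforcement signal, broadcast everywhere
    attention = [0.0, 0.0]
    attention[a] = 1.0               # feedback tags only the responsible pathway
    for i in (0, 1):                 # local update: attention gate x global signal
        w[i] += lr * attention[i] * dopamine
    expected += 0.05 * (reward - expected)

print(f"w = [{w[0]:.2f}, {w[1]:.2f}]  (action 0 wins)")
No per-neuron error signal is ever delivered; the combination of a selective attention tag and a broadcast reward signal is enough to steer the weights.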
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 02/03/2021 08:05:07
Imagine how much you could gain just from the stock market if you had clear insight into what will happen in the future.
This video is from 2010.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 04/03/2021 11:49:00
Taming Transformers for High-Resolution Image Synthesis

It seems like we are getting better at building information processors comparable to human brains.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 04/03/2021 12:25:39
In the not-so-distant future, most information available online will be generated by AI.

That prediction will force us to build a virtual universe which is intended to accurately represent objective reality. Otherwise, there will be no way to distinguish fact from fiction, especially for things which are not already widely known.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 07/03/2021 07:31:14
Has Google Search changed much since 1998?
This video shows how Google has evolved, getting ever closer to building a virtual universe.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 12/03/2021 08:58:40
Tesla's Autopilot, Full Self Driving, Neural Networks & Dojo
Quote
In this video I react to a discussion from the Lex Fridman podcast with legendary chip designer Jim Keller (ex-Tesla) sharing their thoughts on computer vision, neural networks, Tesla's autopilot and full self driving software (and hardware), autonomous vehicles, deep learning and Tesla Dojo (Tesla's dojo is a training system).
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 15/03/2021 12:08:20
More reason to replace the lawmakers with AI.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 16/03/2021 02:44:55
The Most Advanced Digital Government in the World
Quote
A small European country is leading the world in establishing an “e-government” for its citizens.

Estonia's fully online, e-government system has been revolutionary for the country's citizens, making tasks like voting, filing taxes, and renewing a driver’s license quick and convenient.

In operation since 2001, “e-Estonia” is now a well-oiled, digital machine. Estonia was the first country to hold a nationwide election online, and ministers dictate decisions via an e-Cabinet.

Estonia was also the first country to declare internet access a human right. 99% of public services are available digitally 24/7, excluding only marriage, divorce, and real-estate transactions.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 18/03/2021 22:43:41
https://www.nextplatform.com/2021/03/11/its-time-to-start-paying-attention-to-vector-databases/amp/

Quote
The concepts underpinning vector databases are decades old, but it is only relatively recently that these are the underlying “secret weapon” of the largest webscale companies that provide services like search and near real-time recommendations.

Like all good clandestine competitive tools, the vector databases that support these large companies are all purpose-built in-house, optimized for the types of similarity search operations native to their business (content, physical products, etc.).

These custom-tailored vector databases are the “unsung hero of big machine learning,” says Edo Liberty, who built tools like this at Yahoo Research during its scalable machine learning platform journey. He carried some of this over to AWS, where he ran Amazon AI labs and helped cobble together standards like AWS Sagemaker, all the while learning how vector databases could integrate with other platforms and connect with the cloud.

“Vector databases are a core piece of infrastructure that fuels every big machine learning deployment in industry. There was never a way to do this directly, everyone just had to build their own in-house,” he tells The Next Platform. The funny thing is, he was working on high dimensional geometry during his PhD days; the AI/ML renaissance just happened to perfectly intersect with exactly that type of work.

“In ML, suddenly everything was being represented as these high-dimensional vectors, that quickly became a huge source of data, so if you want to search, rank or give recommendations, the object in your actual database wasn’t a document or an image—it was this mathematical representation of the machine learning model.” In short, this quickly became important for a lot of companies.
I think that the virtual universe would be built on a vector database foundation at its core. This assessment is based on my experience in some system migration projects, which pushed me to reverse engineer a system database to make a tool that accelerated the process by automating some tasks.
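The core operation such a database serves is simple to state. Here is a brute-force numpy sketch of similarity search over embedding vectors; real vector databases add approximate indexes (e.g. HNSW) so they never have to scan everything:
Code: [Select]
import numpy as np

rng = np.random.default_rng(0)
# 100,000 items represented as 128-dimensional ML-model embeddings.
embeddings = rng.normal(size=(100_000, 128)).astype(np.float32)
embeddings /= np.linalg.norm(embeddings, axis=1, keepdims=True)  # normalize once

def nearest(query, k=5):
    """Indices of the k most similar items by cosine similarity (brute force)."""
    q = query / np.linalg.norm(query)
    scores = embeddings @ q  # cosine similarity via a single dot product
    return np.argsort(-scores)[:k]

query = rng.normal(size=128).astype(np.float32)
print(nearest(query))  # ids of the 5 best matches, e.g. for recommendations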
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 19/03/2021 09:58:17
Quote
The Senate filibuster is one of the biggest things standing in the way of anti-voter suppression laws, raising the minimum wage and immigration reform. What is this loophole, and how does it affect governing today?
The lawmaking process obviously needs to become more efficient.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 19/03/2021 21:36:28
The ISO standards basically say that you've got to document everything and track it: write what you do, and do what you write.
What you write is a virtual version of what you do. In the past it was on paper; now it's in computer data storage.
This virtual version of the real world is supposed to be easier to process, aggregate, simulate, and extract from, to produce the information required in decision-making processes. To be useful, it must have adequate accuracy and precision.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 23/03/2021 02:58:52
https://www.wired.co.uk/article/marcus-du-sautoy-maths-proofs
Quote
Maths nerds, get ready: an AI is about to write its own proofs
We'll see the first truly creative proof of a mathematical theorem written by an artificial intelligence – and soon

It might come as a surprise to some people that this prediction hasn’t already come to pass. Given that mathematics is a subject of logic and precision, it would seem to be perfect territory for a computer.

However, in 2021, we will see the first truly creative proof of a mathematical theorem by an artificial intelligence (AI). As a mathematician, this fills me with excitement and anxiety in equal measure. Excitement for the new insights that AI might give the mathematical community; anxiety that we human mathematicians might soon become obsolete. But part of this belief is based on a misconception about what a mathematician does.

More recently, techniques of machine learning have been used to gain an understanding from a database of successful proofs to generate more proofs. But although the proofs are new, they do not pass the test of exciting the mathematical mind. It’s the same for powerful algorithms, which can generate convincing short-form text, but are a long way from writing a novel.

But in 2021 I think we will see – or at least be close to – an algorithm with the ability to write its first mathematical story. Storytelling through the written word is based on millions of years of human evolution, and it takes a human many years to reach the maturity to write a novel. But mathematics is a much younger evolutionary development. A person immersed in the mathematical world can reach maturity quite quickly, which is why one sees mathematical breakthroughs made by young minds.


This is why I think that it won’t take long for an AI to understand the quality of the proofs we love and celebrate, before it too will be writing proofs. Perhaps, given its internal architecture, these may be mathematical theorems about networks – a subject that deserves its place on the shelves of the mathematical libraries we humans have been filling for centuries.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 24/03/2021 09:13:53
Quote
What is love and what defines art? Humans have theorized, debated, and argued over these questions for centuries. As researchers become closer and closer to boiling these concepts down to a science, A.I. projects become closer to becoming alternatives for romantic companions and artists in their own right.

The Age of A.I. is an 8-part documentary series hosted by Robert Downey Jr. covering the ways Artificial Intelligence, Machine Learning and Neural Networks will change the world.

0:00​ Introduction
0:50​ The Model Companion
11:02​ Can A.I. Make Real Art?
23:05​ The Autonomous Supercar
36:41​ The Hard Problem
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 26/03/2021 06:01:22
5 Crazy Simulations That Were Previously Impossible
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 29/03/2021 03:20:22
https://scitechdaily.com/300-covid-19-machine-learning-models-have-been-developed-none-is-suitable-for-detecting-or-diagnosing/
Quote
Machine learning is a promising and potentially powerful technique for detection and prognosis of disease. Machine learning methods, including where imaging and other data streams are combined with large electronic health databases, could enable a personalized approach to medicine through improved diagnosis and prediction of individual responses to therapies.

“However, any machine learning algorithm is only as good as the data it’s trained on,” said first author Dr. Michael Roberts from Cambridge’s Department of Applied Mathematics and Theoretical Physics. “Especially for a brand-new disease like COVID-19, it’s vital that the training data is as diverse as possible because, as we’ve seen throughout this pandemic, there are many different factors that affect what the disease looks like and how it behaves.”

“The international machine learning community went to enormous efforts to tackle the COVID-19 pandemic using machine learning,” said joint senior author Dr James Rudd, from Cambridge’s Department of Medicine. “These early studies show promise, but they suffer from a high prevalence of deficiencies in methodology and reporting, with none of the literature we reviewed reaching the threshold of robustness and reproducibility essential to support use in clinical practice.”

Many of the studies were hampered by issues with poor quality data, poor application of machine learning methodology, poor reproducibility, and biases in study design. For example, several training datasets used images from children for their ‘non-COVID-19’ data and images from adults for their COVID-19 data. “However, since children are far less likely to get COVID-19 than adults, all the machine learning model could usefully do was to tell the difference between children and adults, since including images from children made the model highly biased,” said Roberts.

Many of the machine learning models were trained on sample datasets that were too small to be effective. “In the early days of the pandemic, there was such a hunger for information, and some publications were no doubt rushed,” said Rudd. “But if you’re basing your model on data from a single hospital, it might not work on data from a hospital in the next town over: the data needs to be diverse and ideally international, or else you’re setting your machine learning model up to fail when it’s tested more widely.”

In many cases, the studies did not specify where their data had come from, or the models were trained and tested on the same data, or they were based on publicly available ‘Frankenstein datasets’ that had evolved and merged over time, making it impossible to reproduce the initial results.
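The "trained and tested on the same data" flaw is worth seeing concretely. A minimal sketch on deliberately pure-noise data (my own illustration, not from the study):
Code: [Select]
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Pure noise: the features carry no information about the label at all.
rng = np.random.default_rng(0)
X, y = rng.normal(size=(500, 20)), rng.integers(0, 2, 500)

model = DecisionTreeClassifier().fit(X, y)
print("scored on its own training data:", model.score(X, y))  # ~1.0, looks perfect

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = DecisionTreeClassifier().fit(X_tr, y_tr)
print("scored on held-out data:", model.score(X_te, y_te))    # ~0.5, pure chance
A model evaluated on its own training data can look flawless while having learned nothing, which is one reason none of those 300 models survived scrutiny.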
Title: Re: How close are we from building a virtual universe?
Post by: Michael Sally on 29/03/2021 03:37:49

I read a paper recently that describes the core of a photon as (x0,y0,z0), which I thought was quite impressive in regards to accuracy and precision.

In regards to a virtual universe, I consider that to be the smallest possible element of information: a tuple.

I would then consider that any other elements of informational dimensions would be n-tuples (xn,yn,zn).

My reasoning for this is that any amount of information greater than the (x0,y0,z0) element is expansive information.

(x1,y1,z1......n)

In simple terms, a point of information reads true (absolute answers), whereas expansions of information read false (speculative).

For example, c reads false: c is based on our measurement system. In simultaneity, a duration of 1 s is arguable.
Title: Re: How close are we from building a virtual universe?
Post by: Kryptid on 29/03/2021 05:54:14
Quote
In simple terms, a point of information reads true (absolute answers), whereas expansions of information read false (speculative).

According to what source?
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 29/03/2021 06:40:28
Where Did Bitcoin Come From? – The True Story
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 29/03/2021 10:38:02
What a digital government looks like
Quote
What if you never had to fill out paperwork again? In Estonia, this is a reality: citizens conduct nearly all public services online, from starting a business to voting from their laptops, thanks to the nation's ambitious post-Soviet digital transformation known as "e-Estonia." One of the program's experts, Anna Piperal, explains the key design principles that power the country's "e-government" -- and shows why the rest of the world should follow suit to eradicate outdated bureaucracy and regain citizens' trust.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 29/03/2021 10:55:29
MIT 6.S191: Evidential Deep Learning and Uncertainty

MIT Introduction to Deep Learning 6.S191: Lecture 7
Evidential Deep Learning and Uncertainty Estimation
Lecturer: Alexander Amini
January 2021

For all lectures, slides, and lab materials: http://introtodeeplearning.com​​

Lecture Outline
0:00​​ - Introduction and motivation
5:00​​ - Outline for lecture
5:50​ - Probabilistic learning
8:33​ - Discrete vs continuous target learning
14:12​ - Likelihood vs confidence
17:40​ - Types of uncertainty
21:15​ - Aleatoric vs epistemic uncertainty
22:35​ - Bayesian neural networks
28:55​ - Beyond sampling for uncertainty
31:40​ - Evidential deep learning
33:29​ - Evidential learning for regression and classification
42:05​ - Evidential model and training
45:06​ - Applications of evidential learning
46:25​ - Comparison of uncertainty estimation approaches
47:47​ - Conclusion
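As a taste of what the lecture covers, here is a small numpy sketch of the sampling-based uncertainty decomposition that evidential deep learning aims to replace. The numbers are invented for illustration; the decomposition itself is the standard one for ensembles:
Code: [Select]
import numpy as np

# Five ensemble members each predict a mean and a variance for the same input.
means = np.array([2.1, 1.9, 2.0, 2.2, 1.8])       # per-model predicted means
vars_ = np.array([0.40, 0.50, 0.30, 0.45, 0.35])  # per-model predicted variances

aleatoric = vars_.mean()       # noise the models attribute to the data itself
epistemic = means.var()        # disagreement between models; shrinks with more data
total = aleatoric + epistemic  # law-of-total-variance decomposition

print(f"aleatoric={aleatoric:.3f}, epistemic={epistemic:.3f}, total={total:.3f}")
Evidential deep learning tries to obtain the same decomposition from a single forward pass of a single network, with no ensemble and no sampling.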
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 30/03/2021 12:04:30
Objective reality contains a lot of objects with complex relationships among them. Hence, to build a virtual universe, we must use a method capable of storing data that represents this complex system. The obvious choice is graphs, which are mathematical structures used to model pairwise relations between objects. A graph in this context is made up of vertices (also called nodes or points) which are connected by edges (also called links or lines).
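For instance, here is a tiny property-graph sketch in plain Python. The nodes and relationships are made up for illustration; real graph databases such as Neo4j add indexes, a query language, and persistence on top of exactly this shape of data:
Code: [Select]
# Nodes carry attributes; relationships are first-class, typed edges --
# the thing relational databases can only emulate through JOIN tables.
nodes = {
    "alice": {"kind": "person", "born": 1990},
    "bob":   {"kind": "person", "born": 1985},
    "forum": {"kind": "website"},
}
edges = [
    ("alice", "KNOWS", "bob"),
    ("alice", "POSTS_ON", "forum"),
    ("bob",   "POSTS_ON", "forum"),
]

def neighbors(node, rel=None):
    """Follow outgoing edges from node, optionally filtered by relationship type."""
    return [dst for src, r, dst in edges if src == node and (rel is None or r == rel)]

print(neighbors("alice"))           # ['bob', 'forum']
print(neighbors("alice", "KNOWS"))  # ['bob']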

Graph Databases Will Change Your Freakin' Life (Best Intro Into Graph Databases)

Quote
## WTF is a graph database
- Euler and Graph Theory
- Math -- it's hard, let's skip it
- It's about data -- lots of it
- But let's zoom in and look at the basics
## Relational model vs graph model
- How do we represent THINGS in DBs
- Relational vs Graph
- Nodes and Relationships
## Why use a graph over a relational DB or other NoSQL?
- Very simple compared to RDBMS, and much more flexible
- The real power is in relationship-focused data (most NoSQL dbs don't treat relationships as first-order)
- As related-ness and amount of data increases, so does advantage of Graph DBs
- Much closer to our whiteboard model

EVENT: Nodevember 2016

SPEAKER: Ed Finkler
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 31/03/2021 13:10:09
https://scitechdaily.com/explainable-artificial-intelligence-for-decoding-regulatory-instructions-in-dna/
Quote
Opening the black box to uncover the rules of the genome’s regulatory code.
Researchers at the Stowers Institute for Medical Research, in collaboration with colleagues at Stanford University and Technical University of Munich, have developed advanced explainable artificial intelligence (AI) in a technical tour de force to decipher regulatory instructions encoded in DNA. In a report published online on February 18, 2021, in Nature Genetics, the team found that a neural network trained on high-resolution maps of protein-DNA interactions can uncover subtle DNA sequence patterns throughout the genome and provide a deeper understanding of how these sequences are organized to regulate genes.

Neural networks are powerful AI models that can learn complex patterns from diverse types of data such as images, speech signals, or text to predict associated properties with impressive high accuracy. However, many see these models as uninterpretable since the learned predictive patterns are hard to extract from the model. This black-box nature has hindered the wide application of neural networks to biology, where interpretation of predictive patterns is paramount.

One of the big unsolved problems in biology is the genome’s second code—its regulatory code. DNA bases (commonly represented by letters A, C, G, and T) encode not only the instructions for how to build proteins, but also when and where to make these proteins in an organism. The regulatory code is read by proteins called transcription factors that bind to short stretches of DNA called motifs. However, how particular combinations and arrangements of motifs specify regulatory activity is an extremely complex problem that has been hard to pin down.

Now, an interdisciplinary team of biologists and computational researchers led by Stowers Investigator Julia Zeitlinger, PhD, and Anshul Kundaje, PhD, from Stanford University, have designed a neural network—named BPNet for Base Pair Network—that can be interpreted to reveal regulatory code by predicting transcription factor binding from DNA sequences with unprecedented accuracy. The key was to perform transcription factor-DNA binding experiments and computational modeling at the highest possible resolution, down to the level of individual DNA bases. This increased resolution allowed them to develop new interpretation tools to extract the key elemental sequence patterns such as transcription factor binding motifs and the combinatorial rules by which motifs function together as a regulatory code.

Quote
“More traditional bioinformatics approaches model data using pre-defined rigid rules that are based on existing knowledge. However, biology is extremely rich and complicated,” says Avsec. “By using neural networks, we can train much more flexible and nuanced models that learn complex patterns from scratch without previous knowledge, thereby allowing novel discoveries.”

BPNet’s network architecture is similar to that of neural networks used for facial recognition in images. For instance, the neural network first detects edges in the pixels, then learns how edges form facial elements like the eye, nose, or mouth, and finally detects how facial elements together form a face. Instead of learning from pixels, BPNet learns from the raw DNA sequence and learns to detect sequence motifs and eventually the higher-order rules by which the elements predict the base-resolution binding data.

Once the model is trained to be highly accurate, the learned patterns are extracted with interpretation tools. The output signal is traced back to the input sequences to reveal sequence motifs. The final step is to use the model as an oracle and systematically query it with specific DNA sequence designs, similar to what one would do to test hypotheses experimentally, to reveal the rules by which sequence motifs function in a combinatorial manner.

“The beauty is that the model can predict way more sequence designs that we could test experimentally,” Zeitlinger says. “Furthermore, by predicting the outcome of experimental perturbations, we can identify the experiments that are most informative to validate the model.” Indeed, with the help of CRISPR gene editing techniques, the researchers confirmed experimentally that the model’s predictions were highly accurate.

Since the approach is flexible and applicable to a variety of different data types and cell types, it promises to lead to a rapidly growing understanding of the regulatory code and how genetic variation impacts gene regulation. Both the Zeitlinger Lab and the Kundaje Lab are already using BPNet to reliably identify binding motifs for other cell types, relate motifs to biophysical parameters, and learn other structural features in the genome such as those associated with DNA packaging. To enable other scientists to use BPNet and adapt it for their own needs, the researchers have made the entire software framework available with documentation and tutorials.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 31/03/2021 13:13:23
With regard to a virtual universe, I consider that to be the smallest possible element of information: a tuple.
AFAIK, the smallest unit of information is a bit, or binary digit, which reduces uncertainty by half.
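
To make the "halving" concrete: one bit is the answer to one yes/no question over equally likely alternatives, so each bit cuts the number of remaining possibilities in two. A toy illustration in Python (the eight-option scenario is invented):

import math

n = 8                          # eight equally likely possibilities
print(math.log2(n), "bits")    # 3.0 bits of initial uncertainty

while n > 1:
    n //= 2                    # one yes/no answer rules out half the options
    print(n, "options left =", math.log2(n), "bits remaining")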
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 06/04/2021 13:55:32
Graph databases: The best kept secret for effective AI
Quote
Emil Eifrem, Neo4j Co-Founder and CEO explains why connected data is the key to more accurate, efficient and credible learning systems. Using real world use cases ranging from space engineering to investigative journalism, he will outline how a relationships-first approach adds context to data - the key to explainable, well-informed predictions.
What I had tried to do previously was basically to create a graph database on top of a standard relational database system. If only I had known this earlier, I might have saved a significant amount of time and effort. It makes me feel as if I tried to reinvent the wheel.
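
For the record, the emulation itself is not hard; the pain shows up in the queries. A minimal sketch with Python's built-in sqlite3 module (the table and data are invented): edges live in a relational table, and even a simple reachability question already needs a recursive common table expression, the kind of traversal a native graph database does directly.

import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE edges (src TEXT, dst TEXT)")
con.executemany("INSERT INTO edges VALUES (?, ?)",
                [("a", "b"), ("b", "c"), ("c", "d")])

# Everything reachable from 'a': a graph traversal forced through SQL.
rows = con.execute("""
    WITH RECURSIVE reach(node) AS (
        SELECT 'a'
        UNION
        SELECT e.dst FROM edges AS e JOIN reach AS r ON e.src = r.node
    )
    SELECT node FROM reach
""").fetchall()
print(rows)    # [('a',), ('b',), ('c',), ('d',)]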
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 06/04/2021 14:11:48
What Is Edge Computing?
Quote
Another jargon-busting video - here I explain in simple terms what edge computing (sometimes called fog computing) is. I provide practical examples of computing at the edge of the network - in phones, cameras, etc.
In the future, human brains will be part of an edge computing network, itself part of a universal consciousness running the virtual universe. No single human individual has the capability to run the kernel and core processes of the virtual universe, which would run on cloud computing servers, given the sheer data size and parallel processing power required. To make significant contributions, we would have to establish a direct communication interface with the computer to increase the data exchange rate, breaking the natural limits of the biomechanical channels currently used, such as typing, hand gestures, reading, hearing, or voice commands.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 07/04/2021 07:01:35
A Practical Guide to Graph Databases - David Bechberger
Quote
With the emergence of offerings on both AWS (Neptune) and Azure (CosmosDB) within the past year, it is fair to say that graph databases are one of the hottest trends and that they are here to stay. So what are graph databases all about, then? You can read article after article about how great they are and how they will solve all your problems better than your relational database, but it's difficult to find really practical information about them.
This talk will start with a short primer on graph databases and the ecosystem but will then quickly transition to discussing the practical aspects of how to apply them to solve real world business problems. We will dive into what makes a good use case and what does not. We will then follow this up with some real world examples of some of the common patterns and anti-patterns of using graph databases. If you haven't been scared away by this point we will end by showing you some of the powerful insights that graph databases can provide you.
I wish I had known this back then; it would have saved the time I spent trying to emulate a graph database on a traditional relational database.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 09/04/2021 06:43:26
Quote
Edge computing places workloads closer to where data is created and where actions need to be taken. It addresses the unprecedented scale and complexity of data created by connected devices. As more and more data comes in from remote IoT edge devices and servers, it’s important to act on the data quickly. Acting quickly can help companies seize new business opportunities, increase operational efficiency and improve customer experiences.

In this video, Rob High, IBM Fellow and CTO, provides insights into the basic concepts and key use cases of edge computing.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 12/04/2021 11:12:57
https://singularityhub.com/2021/04/04/openais-gpt-3-algorithm-is-now-producing-billions-of-words-a-day/

Quote
When OpenAI released its huge natural-language algorithm GPT-3 last summer, jaws dropped. Coders and developers with special access to an early API rapidly discovered new (and unexpected) things GPT-3 could do with naught but a prompt. It wrote passable poetry, produced decent code, calculated simple sums, and with some edits, penned news articles.

All this, it turns out, was just the beginning. In a recent blog post update, OpenAI said that tens of thousands of developers are now making apps on the GPT-3 platform.

Over 300 apps (and counting) use GPT-3, and the algorithm is generating 4.5 billion words a day for them.
Quote
The Coming Torrent of Algorithmic Content
Each month, users publish about 70 million posts on WordPress, which is, hands down, the dominant content management system online.

Assuming an average article is 800 words long—which is speculation on my part, but not super long or short—people are churning out some 56 billion words a month or 1.8 billion words a day on WordPress.

If our average word count assumption is in the ballpark, then GPT-3 is producing over twice the daily word count of WordPress posts. Even if you make the average more like 2,000 words per article (which seems high to me) the two are roughly equivalent.

Now, not every word GPT-3 produces is a word worth reading, and it’s not necessarily producing blog posts (more on applications below). But in either case, just nine months in, GPT-3’s output seems to foreshadow a looming torrent of algorithmic content.
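
The article's arithmetic is easy to verify. A quick check in Python, using the article's own assumptions (70 million posts a month at 800 words each):

posts_per_month = 70e6
words_per_post = 800                          # the article's guessed average
wp_words_per_day = posts_per_month * words_per_post / 30
gpt3_words_per_day = 4.5e9

print(wp_words_per_day / 1e9)                 # ~1.87 billion words/day on WordPress
print(gpt3_words_per_day / wp_words_per_day)  # GPT-3 output is ~2.4x that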
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 12/04/2021 13:06:06
https://siliconangle.com/2021/04/10/new-era-innovation-moores-law-not-dead-ai-ready-explode/
Quote
Processing goes to the edge – networks and storage become the bottlenecks
We recently reported Microsoft Corp. Chief Executive Satya Nadella’s epic quote that we’ve reached peak centralization. The graphic below paints a picture that is telling. We just shared above that processing power is accelerating at unprecedented rates. And costs are dropping like a rock. Apple’s A14 costs the company $50 per chip. Arm at its v9 announcement said that it will have chips that can go into refrigerators that will optimize energy use and save 10% annually on power consumption. They said that chip will cost $1 — a buck to shave 10% off your electricity bill from the fridge.
(https://d2axcg2cspgbkk.cloudfront.net/wp-content/uploads/Breaking-Analysis_-Moores-Law-is-Accelerating-and-AI-is-Ready-to-Explode-3.jpg)
Quote
Processing is plentiful and cheap. But look at where the expensive bottlenecks are: networks and storage. So what does this mean?

It means that processing is going to get pushed to the edge – wherever the data is born. Storage and networking will become increasingly distributed and decentralized. With custom silicon and processing power placed throughout the system with AI embedded to optimize workloads for latency, performance, bandwidth, security and other dimensions of value.

And remember, most of the data – 99% – will stay at the edge. We like to use Tesla Inc. as an example. The vast majority of data a Tesla car creates will never go back to the cloud. It doesn’t even get persisted. Tesla saves perhaps five minutes of data. But some data will connect occasionally back to the cloud to train AI models – we’ll come back to that.

Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 12/04/2021 13:08:21
Quote
Massive increases in processing power and cheap silicon will power the next wave of AI, machine intelligence, machine learning and deep learning.
Quote
We sometimes use artificial intelligence and machine intelligence interchangeably. This notion comes from our collaborations with author David Moschella. Interestingly, in his book “Seeing Digital,” Moschella says “there’s nothing artificial” about this:

There’s nothing artificial about machine intelligence just like there’s nothing artificial about the strength of a tractor.

It’s a nuance, but precise language can often bring clarity. We hear a lot about machine learning and deep learning and think of them as subsets of AI. Machine learning applies algorithms and code to data to get “smarter” – make better models, for example, that can lead to augmented intelligence and better decisions by humans, or machines. These models improve as they get more data and iterate over time.

Deep learning is a more advanced type of machine learning that uses more complex math.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 13/04/2021 12:20:28

https://pub.towardsai.net/openai-brings-introspection-to-reinforcement-learning-agents-39cbe4cf2af3

Quote
OpenAI Brings Introspection to Reinforcement Learning Agents
The research around Evolved Policy Gradients attempts to recreate introspection in reinforcement learning models.

Introspection is one of those magical cognitive abilities that differentiate humans from other species. Conceptually, introspection can be defined as the ability to examine conscious thoughts and feelings. Introspection also plays a pivotal role in how humans learn. Have you ever tried to self-learn a new skill such as learning a new language? Even without any external feedback, you can quickly assess whether you are making progress on aspects such as vocabulary or pronunciation. Wouldn’t it be great if we could apply some of the principles of introspection to artificial intelligence (AI) disciplines such as reinforcement learning (RL)?
The magic of introspection comes from the fact that humans have access to very well-shaped internal reward functions, derived from prior experience on other tasks and through the course of biological evolution. That model contrasts sharply with RL agents, which are fundamentally coded to start from scratch on any learning task, relying mainly on external feedback. Not surprisingly, most RL models take substantially more time than humans to learn similar tasks. Recently, researchers from OpenAI published a paper that proposes a method to address this challenge by creating RL models that know what it means to make progress on a new task, having experienced making progress on similar tasks in the past.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 15/04/2021 13:53:38
How Graph Technology Is Changing Artificial Intelligence and Machine Learning


Quote
Graph enhancements to Artificial Intelligence and Machine Learning are changing the landscape of intelligent applications. Beyond improving accuracy and modeling speed, graph technologies make building AI solutions more accessible. Join us to hear about 6 areas at the forefront of graph enhanced AI and ML, and find out which techniques are commonly used today and which hold the potential for disrupting industries.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 15/04/2021 14:37:41

Edge computing definitions and concepts. This non-technical video focuses on edge computing and cloud computing, as well as edge computing and the deployment of vision recognition and other AI applications. Also introduced are mesh networks, SBC (single board computer) edge hardware, and fog computing.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 17/04/2021 21:01:23
https://syncedreview.com/2021/04/07/deepmind-microsoft-allen-ai-uw-researchers-convert-pretrained-transformers-into-rnns-lowering-memory-cost-while-retaining-high-accuracy/
Quote
Powerful transformer models have been widely used in autoregressive generation, where they have advanced the state-of-the-art beyond recurrent neural networks (RNNs). However, because the output words for these models are incrementally predicted conditioned on the prefix, the generation requires quadratic time complexity with regard to sequence length.

As the performance of transformer models increasingly relies on large-scale pretrained transformers, this long sequence generation issue has become increasingly problematic. To address this, a research team from the University of Washington, Microsoft, DeepMind and Allen Institute for AI have developed a method to convert a pretrained transformer into an efficient RNN. Their Transformer-to-RNN (T2R) approach speeds up generation and reduces memory cost.
Quote
Overall, the results validated that T2R achieves efficient autoregressive generation while retaining high accuracy, proving that large-scale pretrained models can be compressed into efficient inference models that facilitate downstream applications.
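
The quadratic cost arises because, at every new token, a transformer recomputes attention over the entire prefix, while an RNN only updates a fixed-size state. Here is a minimal numpy sketch of the two per-step costs (dimensions invented; this illustrates the complexity argument, not the T2R conversion itself):

import numpy as np

d, t = 64, 1000                        # hidden size, tokens generated so far
rng = np.random.default_rng(0)
prefix_keys = rng.normal(size=(t, d))  # cached keys, one per previous token
W = rng.normal(size=(d, d)) / np.sqrt(d)
state = rng.normal(size=d)             # fixed-size recurrent state

def attention_step(query, keys):
    # One score per previous token: O(t * d) work at step t, O(n^2 * d) overall.
    scores = keys @ query
    weights = np.exp(scores - scores.max())
    return weights / weights.sum()

def rnn_step(state, W):
    # Constant work per step regardless of t: O(d^2), so O(n * d^2) overall.
    return np.tanh(W @ state)

_ = attention_step(rng.normal(size=d), prefix_keys)
state = rnn_step(state, W)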
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 18/04/2021 13:37:43
https://techxplore.com/news/2021-04-deep-learning-code-humans.html
Toward deep-learning models that can reason about code more like humans
Quote
Whatever business a company may be in, software plays an increasingly vital role, from managing inventory to interfacing with customers. Software developers, as a result, are in greater demand than ever, and that's driving the push to automate some of the easier tasks that take up their time.
Quote
A machine capable of programming itself once seemed like science fiction. But an exponential rise in computing power, advances in natural language processing, and a glut of free code on the internet have made it possible to automate at least some aspects of software design.
Trained on GitHub and other program-sharing websites, code-processing models learn to generate programs just as other language models learn to write news stories or poetry. This allows them to act as a smart assistant, predicting what software developers will do next, and offering an assist. They might suggest programs that fit the task at hand, or generate program summaries to document how the software works. Code-processing models can also be trained to find and fix bugs. But despite their potential to boost productivity and improve software quality, they pose security risks that researchers are just starting to uncover.
Quote
"Our framework for attacking the model, and retraining it on those particular exploits, could potentially help code-processing models get a better grasp of the program's intent," says Liu, co-senior author of the study. "That's an exciting direction waiting to be explored."

In the background, a larger question remains: what exactly are these black-box deep-learning models learning? "Do they reason about code the way humans do, and if not, how can we make them?" says O'Reilly. "That's the grand challenge ahead for us."
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 19/04/2021 12:10:32
Top Use Cases of Graph Databases
Quote
Jonny Cheetham, Sales Director: Graph databases are a rising tide in the world of big data insights, and the enterprises that tap into their power realize significant competitive advantages.
So how might your enterprise leverage graph databases to generate competitive insights and derive significant business value from your connected data? This webinar will show you the top five most impactful and profitable use cases of graph databases.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 20/04/2021 08:03:07
Do Neural Networks Think Like Our Brain? OpenAI Answers!
https://openai.com/blog/multimodal-neurons/
Quote
Multimodal Neurons in Artificial Neural Networks
We’ve discovered neurons in CLIP that respond to the same concept whether presented literally, symbolically, or conceptually. This may explain CLIP’s accuracy in classifying surprising visual renditions of concepts, and is also an important step toward understanding the associations and biases that CLIP and similar models learn.

Quote
Our discovery of multimodal neurons in CLIP gives us a clue as to what may be a common mechanism of both synthetic and natural vision systems—abstraction. We discover that the highest layers of CLIP organize images as a loose semantic collection of ideas, providing a simple explanation for both the model’s versatility and the representation’s compactness.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 21/04/2021 05:28:20
3D deep neural network precisely reconstructs freely-behaving animal's movements
Quote
Animals are constantly moving and behaving in response to instructions from the brain. But while there are advanced techniques for measuring these instructions in terms of neural activity, there is a paucity of techniques for quantifying the behavior itself in freely moving animals. This inability to measure the key output of the brain limits our understanding of the nervous system and how it changes in disease.

A new study by researchers at Duke University and Harvard University introduces an automated tool that can readily capture behavior of freely behaving animals and precisely reconstruct their three dimensional (3D) pose from a single video camera and without markers.

The April 19 study in Nature Methods led by Timothy W. Dunn, Assistant Professor, Duke University, and Jesse D. Marshall, postdoctoral researcher, Harvard University, describes a new 3D deep-neural network, DANNCE (3-Dimensional Aligned Neural Network for Computational Ethology). The study follows the team's 2020 study in Neuron which revealed the groundbreaking behavioral monitoring system, CAPTURE (Continuous Appendicular and Postural Tracking using Retroreflector Embedding), which uses motion capture and deep learning to continuously track the 3D movements of freely behaving animals. CAPTURE yielded an unprecedented detailed description of how animals behave. However, it required using specialized hardware and attaching markers to animals, making it a challenge to use.

"With DANNCE we relieve this requirement," said Dunn. "DANNCE can learn to track body parts even when they can't be seen, and this increases the types of environments in which the technique can be used. We need this invariance and flexibility to measure movements in naturalistic environments more likely to elicit the full and complex behavioral repertoire of these animals."

DANNCE works across a broad range of species and is reproducible across laboratories and environments, ensuring it will have a broad impact on animal—and even human—behavioral studies. It has a specialized neural network tailored to 3D pose tracking from video. A key aspect is that its 3D feature space is in physical units (meters) rather than camera pixels. This allows the tool to more readily generalize across different camera arrangements and laboratories. In contrast, previous approaches to 3D pose tracking used neural networks tailored to pose detection in two-dimensions (2D), which struggled to readily adapt to new 3D viewpoints.

https://techxplore.com/news/2021-04-3d-deep-neural-network-precisely.html

Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 23/04/2021 14:46:49
Do Neural Networks Think Like Our Brain? OpenAI Answers!
https://openai.com/blog/multimodal-neurons/
Some of the new AI models are getting closer to human intelligence. It has been shown that they make similar types of mistakes in visual classification. Previously, AI models made mistakes that no human would ever make, which meant their working principles were significantly different. So this is clear progress, and it makes Ray Kurzweil's prediction of human-level AI by 2029 look more plausible.
Previously, some AI researchers predicted that conquering Go would take another 100 years; AlphaGo proved that false. That prediction was a product of linear thinking, which grossly deviates from real technological advancement, which follows something closer to an exponential or even double-exponential curve.
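
To see how badly linear extrapolation can miss, compare thirty years of "one step per year" against thirty annual doublings (the numbers are invented for illustration):

years = 30
linear = 1 * years            # linear thinking: one unit of progress per year
exponential = 2 ** years      # capability doubling every year
print(linear, exponential)    # 30 versus 1073741824
print(exponential // linear)  # linear extrapolation is off by a factor of ~36 million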
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 24/04/2021 09:54:50
https://www.nextplatform.com/2021/04/22/vertically-unchallenged/
Quote
Components make compute and storage servers, and servers with application plane, control plane, and data plane software running atop them or alongside them make systems, and workflows across systems make platforms. The end state goal of any system architect is really creating a platform. If you don’t have an integrated platform, then what you have is an IT nightmare.

That is what four decades of distributed computing has really taught us, if you boil off all the pretty water that obscures with diffraction and bubbling and look very hard at the bottom of the pot into the substrate of bullshit left behind.

Maybe we should have something called a platform architect? And maybe they don’t have those titles at the big hyperscalers and public cloud builders, but that is, in fact, what these companies are doing. And for those of us who have been around for a while, it is with a certain amount of humor that we are seeing the rise of the most vertically integrated, proprietary platforms that the world has seen since the IBM System/360 mainframe and the DEC VAX, IBM AS/400, and HP 3000 – there was no “E” back then – minicomputers in the 1960s and the 1970s.
The vision of an integrated system has been around for decades now, and it will keep improving for decades to come.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 26/04/2021 22:40:32
Quote
We are starting to see more exascale and large supercomputing sites benchmark and project on deep learning capabilities of systems designed for HPC applications but only a few have run system-wide tests to see how their machines might stack up against standard CNN and other metrics.

In China, however, we finally have some results about the potential for leadership-class systems to tackle deep learning. That is interesting in itself, but in the case of AI benchmarks on the Tianhe-3 exascale prototype supercomputer, we also get a sense of how that system’s unique Arm-based architecture performs for math that is quite different than that required for HPC modeling/simulation.
Quote
It is hard to tell what to expect from this novel architecture in terms of AI workloads but for us, the news is that the system is operational and teams are at least exploring what might be possible in scaling deep learning using an Arm-based architecture and unique interconnect. It also shows that there is still work to be done to optimize Arm-based processors for even routine AI benchmarks to keep pace with other companies with CPUs and accelerators.
http://www.nextplatform.com/2021/04/19/chinas-exascale-prototype-supercomputer-tests-ai-workloads/
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 27/04/2021 21:09:12
Advancing AI With a Supercomputer: A Blueprint for an Optoelectronic ‘Brain’
Quote
Building a computer that can support artificial intelligence at the scale and complexity of the human brain will be a colossal engineering effort. Now researchers at the National Institute of Standards and Technology have outlined how they think we’ll get there.

How, when, and whether we’ll ever create machines that can match our cognitive capabilities is a topic of heated debate among both computer scientists and philosophers. One of the most contentious questions is the extent to which the solution needs to mirror our best example of intelligence so far: the human brain.

Rapid advances in AI powered by deep neural networks—which despite their name operate very differently than the brain—have convinced many that we may be able to achieve “artificial general intelligence” without mimicking the brain’s hardware or software.

Others think we’re still missing fundamental aspects of how intelligence works, and that the best way to fill the gaps is to borrow from nature. For many that means building “neuromorphic” hardware that more closely mimics the architecture and operation of biological brains.

The problem is that the existing computer technology we have at our disposal looks very different from biological information processing systems, and operates on completely different principles. For a start, modern computers are digital and neurons are analog. And although both rely on electrical signals, they come in very different flavors, and the brain also uses a host of chemical signals to carry out processing.

Now though, researchers at NIST think they’ve found a way to combine existing technologies in a way that could mimic the core attributes of the brain. Using their approach, they outline a blueprint for a “neuromorphic supercomputer” that could not only match, but surpass the physical limits of biological systems.

The key to their approach, outlined in Applied Physics Letters, is a combination of electronics and optical technologies. The logic is that electronics are great at computing, while optical systems can transmit information at the speed of light, so combining them is probably the best way to mimic the brain’s excellent computing and communication capabilities.

https://singularityhub.com/2021/04/26/the-next-supercomputer-a-blueprint-for-an-optoelectronic-brain/
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 29/04/2021 07:33:10
https://www.nature.com/articles/d41586-021-00530-0
Robo-writers: the rise and risks of language-generating AI
A remarkable AI can write like humans — but with no understanding of what it’s saying.
Quote
In June 2020, a new and powerful artificial intelligence (AI) began dazzling technologists in Silicon Valley. Called GPT-3 and created by the research firm OpenAI in San Francisco, California, it was the latest and most powerful in a series of ‘large language models’: AIs that generate fluent streams of text after imbibing billions of words from books, articles and websites. GPT-3 had been trained on around 200 billion words, at an estimated cost of tens of millions of dollars.

The developers who were invited to try out GPT-3 were astonished. “I have to say I’m blown away,” wrote Arram Sabeti, founder of a technology start-up who is based in Silicon Valley. “It’s far more coherent than any AI language system I’ve ever tried. All you have to do is write a prompt and it’ll add text it thinks would plausibly follow. I’ve gotten it to write songs, stories, press releases, guitar tabs, interviews, essays, technical manuals. It’s hilarious and frightening. I feel like I’ve seen the future.”

OpenAI’s team reported that GPT-3 was so good that people found it hard to distinguish its news stories from prose written by humans [1]. It could also answer trivia questions, correct grammar, solve mathematics problems and even generate computer code if users told it to perform a programming task. Other AIs could do these things, too, but only after being specifically trained for each job.

Large language models are already business propositions. Google uses them to improve its search results and language translation; Facebook, Microsoft and Nvidia are among other tech firms that make them. OpenAI keeps GPT-3’s code secret and offers access to it as a commercial service. (OpenAI is legally a non-profit company, but in 2019 it created a for-profit subentity called OpenAI LP and partnered with Microsoft, which invested a reported US$1 billion in the firm.) Developers are now testing GPT-3’s ability to summarize legal documents, suggest answers to customer-service enquiries, propose computer code, run text-based role-playing games or even identify at-risk individuals in a peer-support community by labelling posts as cries for help.

(https://media.nature.com/lw800/magazine-assets/d41586-021-00530-0/d41586-021-00530-0_18907396.png)
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 30/04/2021 14:23:58
https://www.nature.com/articles/s41586-021-03451-0
Quote
Towards complete and error-free genome assemblies of all vertebrate species

High-quality and complete reference genome assemblies are fundamental for the application of genomics to biology, disease, and biodiversity conservation. However, such assemblies are available for only a few non-microbial species [1-4]. To address this issue, the international Genome 10K (G10K) consortium [5,6] has worked over a five-year period to evaluate and develop cost-effective methods for assembling highly accurate and nearly complete reference genomes. Here we present lessons learned from generating assemblies for 16 species that represent six major vertebrate lineages. We confirm that long-read sequencing technologies are essential for maximizing genome quality, and that unresolved complex repeats and haplotype heterozygosity are major sources of assembly error when not handled correctly. Our assemblies correct substantial errors, add missing sequence in some of the best historical reference genomes, and reveal biological discoveries. These include the identification of many false gene duplications, increases in gene sizes, chromosome rearrangements that are specific to lineages, a repeated independent chromosome breakpoint in bat genomes, and a canonical GC-rich pattern in protein-coding genes and their regulatory regions. Adopting these lessons, we have embarked on the Vertebrate Genomes Project (VGP), an international effort to generate high-quality, complete reference genomes for all of the roughly 70,000 extant vertebrate species and to help to enable a new era of discovery across the life sciences.
Quote
The Vertebrate Genomes Project
Building on this initial set of assembled genomes and the lessons learned, we propose to expand the VGP to deeper taxonomic phases, beginning with phase 1: representatives of approximately 260 vertebrate orders, defined here as lineages separated by 50 million or more years of divergence from each other. Phase 2 will encompass species that represent all approximately 1,000 vertebrate families; phase 3, all roughly 10,000 genera; and phase 4, nearly all 71,657 extant named vertebrate species (Supplementary Note 5, Supplementary Fig. 3). To accomplish such a project within 10 years, we will need to scale up to completing 125 genomes per week, without sacrificing quality. This includes sample permitting, high molecular weight DNA extractions, sequencing, meta-data tracking, and computational infrastructure. We will take advantage of continuing improvements in genome sequencing technology, assembly, and annotation, including advances in PacBio HiFi reads, Oxford Nanopore reads, and replacements for 10XG reads (Supplementary Note 6), while addressing specific scientific questions at increasing levels of phylogenetic refinement. Genomic technology advances quickly, but we believe the principles of our pipeline and the lessons learned will be applicable to future efforts. Areas in which improvement is needed include more accurate and complete haplotype phasing, base-call accuracy, and resolution of long repetitive regions such as telomeres, centromeres, and sex chromosomes. The VGP is working towards these goals and making all data, protocols, and pipelines openly available (Supplementary Notes 5, 7).

Despite remaining imperfections, our reference genomes are the most complete and highest quality to date for each species sequenced, to our knowledge. When we began to generate genomes beyond the Anna’s hummingbird in 2017, only eight vertebrate species in GenBank had genomes that met our target continuity metrics, and none were haplotype phased (Supplementary Table 23). The VGP pipeline introduced here has now been used to complete assemblies of more than 130 species of similar or higher quality (Supplementary Note 5; BioProject PRJNA489243). We encourage the scientific community to use and evaluate the assemblies and associated raw data, and to provide feedback towards improving all processes for complete and error-free assembled genomes of all species.
It seems that in the future we won't need zoos filled with captive animals just to preserve biodiversity. However, genetic information alone is not enough to reproduce fully functional organisms; compatible epigenetic environments are also necessary. A tiger embryo inside a chicken egg is unlikely to grow into a baby tiger.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 30/04/2021 23:44:53
The least a human individual can contribute to society without doing anything is to provide a backup of genetic and epigenetic information, which also adds to biodiversity. This contribution is insignificant when there are billions of people, but it would become important if only a few were left.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 01/05/2021 07:50:42
Elon Musk (@elonmusk) tweeted at 5:45 AM on Fri, Apr 30, 2021:
Quote
A major part of real-world AI has to be solved to make unsupervised, generalized full self-driving work, as the entire road system is designed for biological neural nets with optical imagers
But that may no longer be the case in the future. At least two things could change it: when most vehicles are autonomous, and when VTOL flying cars become abundant, making roads irrelevant.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 02/05/2021 05:33:00
I'd like to share a recent newsfeed from my e-mail. It seems similar to how brains have evolved.

MorphNet is a Google Model to Build Faster and Smaller Neural Networks
The model makes inroads in the optimization of the architecture of neural networks.

Quote
Designing deep neural networks these days is more art than science. In the deep learning space, any given problem can be addressed with a fairly large number of neural network architectures. In that sense, designing a deep neural network from the ground up for a given problem can prove incredibly expensive in terms of time and computational resources. Additionally, given the lack of guidance in the space, we often end up producing neural network architectures that are suboptimal for the task at hand. About two years ago, artificial intelligence (AI) researchers from Google published a paper proposing a method called MorphNet to optimize the design of deep neural networks.
Quote
Automated neural network design is one of the most active areas of research in the deep learning space. The most traditional approach to neural network architecture design involves sparse regularizers using methods such as L1. While this technique has proven effective at reducing the number of connections in a neural network, it quite often ends up producing suboptimal architectures. Another approach involves using search techniques to find an optimal neural network architecture for a given problem. That method has been able to generate highly optimized neural network architectures, but it requires an exorbitant number of trial-and-error attempts and is often computationally prohibitive. As a result, neural network architecture search has only proven effective in very specialized scenarios. Factoring in the limitations of the previous methods, we can arrive at three key characteristics of effective automated neural network design techniques:
a) Scalability: the automated design approach should be scalable to large datasets and models.
b) Multi-factor optimization: an automated method should be able to optimize the structure of a deep neural network targeting specific resources.
c) Optimality: an automated neural network design should produce an architecture that improves performance while reducing the usage of the target resource.

Quote
MorphNet
Google’s MorphNet approaches the problem of automated neural network architecture design from a slightly different angle. Instead of trying numerous architectures across a large design space, MorphNet starts with an existing architecture for a similar problem and, in one shot, optimizes it for the task at hand.
MorphNet optimizes a deep neural network by iteratively shrinking and expanding its structure. In the shrinking phase, MorphNet identifies inefficient neurons and prunes them from the network by applying a sparsifying regularizer such that the total loss function of the network includes a cost for each neuron. Doing just this typically results in a neural network that consumes less of the targeted resource but achieves lower performance. However, MorphNet applies a specific shrinking model that highlights not only which layers of a neural network are over-parameterized, but also which layers are bottlenecked. Instead of applying a uniform cost per neuron, MorphNet calculates a neuron's cost with respect to the targeted resource. As training progresses, the optimizer is aware of the resource cost when calculating gradients, and thus learns which neurons are resource-efficient and which can be removed.
https://medium.com/@jrodthoughts/morphnet-is-a-google-model-to-build-faster-and-smaller-neural-networks-f890276da456
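
The core mechanism, a resource-weighted sparsifying penalty added to the training loss, fits in a few lines. A minimal numpy sketch of the shrinking idea (the layer sizes, costs, and threshold are invented; this is the general principle, not Google's actual MorphNet code):

import numpy as np

# Per-channel scale factors (e.g. batch-norm gammas), one array per layer.
gammas = [np.array([0.9, 0.02, 0.5]), np.array([0.01, 0.7])]

# Resource cost of keeping each neuron, e.g. the FLOPs it adds downstream.
costs = [np.array([4.0, 4.0, 4.0]), np.array([16.0, 16.0])]

lam = 0.01
# Resource-aware L1 penalty: expensive neurons are pushed toward zero harder.
penalty = lam * sum((c * np.abs(g)).sum() for g, c in zip(gammas, costs))
# total_loss = task_loss + penalty    # what the optimizer would minimize

# After training, neurons whose scale fell below a threshold get pruned.
keep = [np.abs(g) > 0.1 for g in gammas]
print(penalty, [k.tolist() for k in keep])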
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 03/05/2021 09:49:59
DINO: Emerging Properties in Self-Supervised Vision Transformers (Facebook AI Research Explained)
Quote
Self-Supervised Learning is the final frontier in Representation Learning: Getting useful features without any labels. Facebook AI's new system, DINO, combines advances in Self-Supervised Learning for Computer Vision with the new Vision Transformer (ViT) architecture and achieves impressive results without any labels. Attention maps can be directly interpreted as segmentation maps, and the obtained representations can be used for image retrieval and zero-shot k-nearest neighbor classifiers (KNNs).

OUTLINE:
0:00​ - Intro & Overview
6:20​ - Vision Transformers
9:20​ - Self-Supervised Learning for Images
13:30​ - Self-Distillation
15:20​ - Building the teacher from the student by moving average
16:45​ - DINO Pseudocode
23:10​ - Why Cross-Entropy Loss?
28:20​ - Experimental Results
33:40​ - My Hypothesis why this works
38:45​ - Conclusion & Comments
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 03/05/2021 13:08:47
https://theconversation.com/engineered-viruses-can-fight-the-rise-of-antibiotic-resistant-bacteria-154337
Quote
As the world fights the SARS-CoV-2 virus causing the COVID-19 pandemic, another group of dangerous pathogens looms in the background. The threat of antibiotic-resistant bacteria has been growing for years and appears to be getting worse. If COVID-19 taught us one thing, it’s that governments should be prepared for more global public health crises, and that includes finding new ways to combat rogue bacteria that are becoming resistant to commonly used drugs.

In contrast to the current pandemic, viruses may be the heroes of the next epidemic rather than the villains. Scientists have shown that viruses could be great weapons against bacteria that are resistant to antibiotics.
Quote
Since the discovery of penicillin in 1928, antibiotics have changed modern medicine. These small molecules fight off bacterial infections by killing or inhibiting the growth of bacteria. The mid-20th century was called the Golden Age for antibiotics, a time when scientists were discovering dozens of new molecules for many diseases.

This high was soon followed by a devastating low. Researchers saw that many bacteria were evolving resistance to antibiotics. Bacteria in our bodies were learning to evade medicine by evolving and mutating to the point that antibiotics no longer worked.

As an alternative to antibiotics, some researchers are turning to a natural enemy of bacteria: bacteriophages. Bacteriophages are viruses that infect bacteria. They outnumber bacteria 10 to 1 and are considered the most abundant organisms on the planet.

Bacteriophages, also known as phages, survive by infecting bacteria, replicating and bursting out from their host, which destroys the bacterium.

Harnessing the power of phages to fight bacteria isn’t a new idea. In fact, the first recorded use of so-called phage therapy was over a century ago. In 1919, French microbiologist Félix d'Hérelle used a cocktail of phages to treat children suffering from severe dysentery.

D'Hérelle’s actions weren’t an accident. In fact, he is credited with co-discovering phages, and he pioneered the idea of using bacteria’s natural enemies in medicine. He would go on to stop cholera outbreaks in India and plague in Egypt.

Phage therapy is not a standard treatment you can find in your local hospital today. But excitement about phages has grown over the past few years. In particular, scientists are using new knowledge about the complex relationship between phages and bacteria to improve phage therapy. By engineering phages to better target and destroy bacteria, scientists hope to overcome antibiotic resistance.
Quote
Now scientists are hoping to use the knowledge about CRISPR systems to engineer phages to destroy dangerous bacteria.

When the engineered phage locates specific bacteria, the phage injects CRISPR proteins inside the bacteria, cutting and destroying the microbes’ DNA. Scientists have found a way to turn defense into offense. The proteins normally involved in protecting against viruses are repurposed to target and destroy the bacteria’s own DNA. The scientists can specifically target the DNA that makes the bacteria resistant to antibiotics, making this type of phage therapy extremely effective.
Quote
Science is only half of the solution when it comes to fighting these microbes. Commercialization and regulation are important to ensure that this technology is in society’s toolkit for fending off a worldwide spread of antibiotic-resistant bacteria.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 08/05/2021 14:58:08
https://neurosciencenews.com/3d-neuroimaging-18378/
New Imaging Technique Captures How Brain Moves in Stunning Detail
Quote
Summary: A new neuroimaging technique captures the brain in motion in real-time, generating a 3D view and with improved detail. The new technology could help clinicians to spot hard-to-detect neurological conditions.

Source: Stevens Institute of Technology
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 10/05/2021 05:53:14
How Close Are We to Harnessing Synthetic Life?

Quote
Scientists are exploring how to edit genomes and even create brand new ones that never existed before, but how close are we to harnessing synthetic life?

Scientists have made major strides when it comes to understanding the base code that underlies all living things—but what if we could program living cells like software?

The principle behind synthetic biology, the emerging study of building living systems, lies in this ability to synthesize life. An ability to create animal products, individualized medical therapies, and even transplantable organs, all starting with synthetic DNA and cells in a lab.

There are two main schools of thought when it comes to synthesizing life: building artificial cells from the bottom-up or engineering microorganisms so significantly that it resynthesizes and redesigns the genome.

With genetic engineering tools becoming more and more accessible, researchers want to use these synthesized genomes to enhance human health with regards to things like detecting infections or environmental pollutants. Bacterial cells can be engineered that will detect toxic chemicals.

And these synthesized bacteria could potentially protect us from, for example, consuming toxins in contaminated water.

The world of synthetic biology goes beyond human health though, it can be used in a variety of industries, including fashion. Researchers hope to come up with lab-made versions of materials like leather or silk.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 11/05/2021 05:58:24
It's Alive, But Is It Life: Synthetic Biology and the Future of Creation
Quote
For decades, biologists have read and edited DNA, the code of life. Revolutionary developments are giving scientists the power to write it. Instead of tinkering with existing life forms, synthetic biologists may be on the verge of writing the DNA of a living organism from scratch. In the next decade, according to some, we may even see the first synthetic human genome. Join a distinguished group of synthetic biologists, geneticists and bioengineers who are edging closer to breathing life into matter.

This program is part of the Big Ideas Series, made possible with support from the John Templeton Foundation.

Original Program Date: June 4, 2016
MODERATOR: Robert Krulwich
PARTICIPANTS: George Church, Drew Endy, Tom Knight, Pamela Silver
Quote
Synthetic Biology and the Future of Creation 00:00​

Participant Intros 3:25​

Ordering DNA from the internet 8:10​
 
How much does it cost to make a synthetic human? 13:04​

Why is yeast the best catalyst 20:10​

How George Church printed 90 billion copies of his book 26:05​

Creating synthetic rose oil 28:35​

Safety engineering and synthetic biology 37:15​

Do we want to be invaded by bad bacteria? 45:26​

Do you need human genes to create human cells? 55:09

The standard of DNA sequencing in utero 1:02:27​

The science community is divided by closed press meetings 1:11:30​

The Human Genome Project. What is it? 1:21:45​
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 14/05/2021 05:39:05
DeepMind Wants to Reimagine One of the Most Important Algorithms in Machine Learning.

In one of the most important papers this year, DeepMind proposed a multi-agent structure to redefine PCA.
Quote
Principal component analysis (PCA) is one of the key algorithms in any machine learning curriculum. Initially created in the early 1900s, PCA is a fundamental algorithm for understanding data in high-dimensional spaces, which are common in deep learning problems. More than a century after its invention, PCA is such a key part of modern deep learning frameworks that very few question whether there could be a better approach. Just a few days ago, DeepMind published a fascinating paper that looks to redefine PCA as a competitive multi-agent game called EigenGame.

Titled “EigenGame: PCA as a Nash Equilibrium”, the DeepMind work is one of those papers that you can’t resist reading just based on the title. Redefining PCA sounds ludicrous, and yet DeepMind’s thesis makes perfect sense the minute you dive into it.

In recent years, PCA techniques have hit a bottleneck in large-scale deep learning scenarios. Originally designed for mechanical devices, traditional PCA is formulated as an optimization problem which is hard to scale across large computational clusters. A multi-agent approach to PCA might be able to leverage vast computational resources and produce better optimizations in modern deep learning problems.
https://medium.com/@jrodthoughts/deepmind-wants-to-reimagine-one-of-the-most-important-algorithms-in-machine-learning-381884d42de
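
For reference, here is the classical object being reframed: standard PCA extracts the top eigenvector of the data covariance, e.g. by power iteration. A minimal numpy sketch (the data is invented; in EigenGame each eigenvector instead becomes a "player" maximizing its own utility, with the Nash equilibrium recovering the PCA solution):

import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 5)) @ rng.normal(size=(5, 5))  # correlated sample data
X = X - X.mean(axis=0)                                   # center the data
C = X.T @ X / len(X)                                     # covariance matrix

def top_eigenvector(C, iters=200):
    v = np.ones(C.shape[0])
    for _ in range(iters):
        v = C @ v                    # repeatedly apply C...
        v /= np.linalg.norm(v)       # ...and renormalize
    return v

v1 = top_eigenvector(C)              # first principal direction
print(v1, v1 @ C @ v1)               # direction and the variance it explains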
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 17/05/2021 12:55:16
Quote
The advent of Transformers in 2017 completely changed the world of neural networks. Ever since, the core concept of Transformers has been remixed, repackaged, and rebundled in several models. The results have surpassed the state of the art in several machine learning benchmarks. In fact, currently all top benchmarks in the field of natural language processing are dominated by Transformer-based models. Some of the Transformer-family models are BERT, ALBERT, and the GPT series of models.

In any machine learning model, the most important components of the training process are:
The code of the model — the components of the model and its configuration
The data to be used for training
The available compute power
With the Transformer family of models, researchers finally arrived at a way to increase the performance of a model infinitely: You just increase the amount of training data and compute power.

This is exactly what OpenAI did, first with GPT-2 and then with GPT-3. Being a well funded ($1 billion+) company, it could afford to train some of the biggest models in the world. A private corpus of 500 billion tokens was used for training the model, and approximately $50 million was spent in compute costs.

While the code for most of the GPT language models is open source, the model is impossible to replicate without the massive amounts of data and compute power. And OpenAI has chosen to withhold public access to its trained models, making them available via API to only a select few companies and individuals. Further, its access policy is undocumented, arbitrary, and opaque.

https://venturebeat.com/2021/05/15/gpt-3s-free-alternative-gpt-neo-is-something-to-be-excited-about/
Quote
The bottom line here is: GPT-Neo is a great open source alternative to GPT-3, especially given OpenAI’s closed access policy.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 19/05/2021 12:03:57
https://bdtechtalks.com/2021/05/17/ibms-codenet-machine-learning-programming/
IBM’s Project CodeNet will test how far you can push AI to write software
Quote
IBM’s AI research division has released a 14-million-sample dataset to develop machine learning models that can help in programming tasks. Called Project CodeNet, the dataset takes its name from ImageNet, the famous repository of labeled photos that triggered a revolution in computer vision and deep learning.

While there’s a scant chance that machine learning models built on the CodeNet dataset will make human programmers redundant, there’s reason to be hopeful that they will make developers more productive.
Quote
With Project CodeNet, the researchers at IBM have tried to create a multi-purpose dataset that can be used to train machine learning models for various tasks. CodeNet’s creators describe it as a “very large scale, diverse, and high-quality dataset to accelerate the algorithmic advances in AI for Code.”

The dataset contains 14 million code samples with 500 million lines of code written in 55 different programming languages. The code samples have been obtained from submissions to nearly 4,000 challenges posted on online coding platforms AIZU and AtCoder. The code samples include both correct and incorrect answers to the challenges.

One of the key features of CodeNet is the amount of annotation that has been added to the examples. Every one of the coding challenges included in the dataset has a textual description along with CPU time and memory limits. Every code submission has a dozen pieces of information, including the language, the date of submission, size, execution time, acceptance, and error types.

The researchers at IBM have also gone through great effort to make sure the dataset is balanced along different dimensions, including programming language, acceptance, and error types.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 21/05/2021 07:27:49
https://bdtechtalks.com/2021/05/13/machine-learning-dimensionality-reduction/
Machine learning: What is dimensionality reduction?
Quote
Machine learning algorithms have gained fame for being able to ferret out relevant information from datasets with many features, such as tables with dozens of columns and images with millions of pixels. Thanks to advances in cloud computing, you can often run very large machine learning models without noticing how much computational power works behind the scenes.

But every new feature that you add to your problem adds to its complexity, making it harder to solve it with machine learning algorithms. Data scientists use dimensionality reduction, a set of techniques that remove excessive and irrelevant features from their machine learning models.

Dimensionality reduction slashes the costs of machine learning and sometimes makes it possible to solve complicated problems with simpler models.

Measures of general intelligence and general consciousness are examples of dimensionality reduction: they compress multiple parameters into a single number.
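Principal component analysis (PCA) is a standard example of such a technique. Here is a minimal Python sketch (my own, NumPy only) that compresses ten features per sample into a single number:
Code:
import numpy as np

# 200 samples with 10 features each; make two features correlated
rng = np.random.default_rng(1)
data = rng.normal(size=(200, 10))
data[:, 1] = 0.9 * data[:, 0]

# PCA via the singular value decomposition of the centered data
centered = data - data.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)

# project onto the strongest direction: one number per sample
score = centered @ vt[0]
print(score.shape)  # (200,)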
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 28/05/2021 04:37:15
DeepMind’s Demis Hassabis on its breakthrough scientific discoveries | WIRED Live
Quote
DeepMind co-founder and CEO Demis Hassabis discusses how we can avoid bias being built into AI systems, and what's next for DeepMind, including the future of protein folding, at WIRED Live 2020.

"If we build it right, AI systems could be less biased than we are."
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 28/05/2021 04:42:53
https://www.newscientist.com/article/2268496-people-can-answer-questions-about-their-dreams-without-waking-up
Quote
Talking to people while they are asleep can influence their dreams – and in some cases, the dreamer can respond without waking up.

Ken Paller at Northwestern University in Evanston, Illinois, and his colleagues found that people could answer questions and even solve maths problems while lucid dreaming – a state that typically occurs during rapid eye-movement (REM) sleep when the dreamer is aware of being in a dream, and is sometimes able to control it.

“We asked questions where we knew the answer because what we wanted to do is determine whether we were having good communication. We had to know if they were answering correctly,” says Paller.

The team asked dreamers yes-no questions relating to their backgrounds and experiences, along with simple maths problems involving addition and subtraction. The dreamers weren’t aware of what questions they would be asked before they went to sleep.

The dreamers, who had a range of experience with lucid dreaming, answered the questions correctly 29 times, incorrectly five times, and ambiguously 28 times by twitching their face muscles or moving their eyes. They didn’t respond on 96 occasions.

Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 28/05/2021 04:45:15
Inside Google’s DeepMind Project: How AI Is Learning On Its Own
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 28/05/2021 04:53:56
The AI Hardware Problem
Quote
The millennia-old idea of expressing signals and data as a series of discrete states had ignited a revolution in the semiconductor industry during the second half of the 20th century. This new information age thrived on the robust and rapidly evolving field of digital electronics. The abundance of automation and tooling made it relatively manageable to scale designs in complexity and performance as demand grew. However, the power being consumed by AI and machine learning applications cannot feasibly grow as is on existing processing architectures.

THE MAC
In a digital neural network implementation, the weights and input data are stored in system memory and must be fetched and stored continuously through the sea of multiply-accumulate operations within the network. This approach results in most of the power being dissipated in fetching and storing model parameters and input data to the arithmetic logic unit of the CPU, where the actual multiply-accumulate operation takes place. Moving the data for a typical multiply-accumulate operation in a general-purpose CPU consumes more than two orders of magnitude more energy than the computation itself.

GPUs
Their ability to process 3D graphics requires a large number of arithmetic logic units coupled to high-speed memory interfaces. This characteristic inherently made them far more efficient and faster for machine learning by allowing hundreds of multiply-accumulate operations to proceed simultaneously. GPUs tend to utilize floating-point arithmetic, using 32 bits to represent a number by its mantissa, exponent, and sign. Because of this, GPU-targeted machine learning applications have been forced to use floating-point numbers.

ASICS
These dedicated AI chips offer dramatically larger amounts of data movement per joule when compared to GPUs and general-purpose CPUs. This came as a result of the discovery that with certain types of neural networks, a dramatic reduction in computational precision only reduces network accuracy by a small amount. However, it will soon become infeasible to keep increasing the number of multiply-accumulate units integrated onto a chip, or to reduce bit-precision further.

LOW POWER AI

Outside the digital realm, it's known definitively that extraordinarily dense neural networks can operate efficiently on small amounts of power.

Much of the industry believes that the digital aspect of current systems will need to be augmented with a more analog approach in order to take machine learning efficiency further. With analog, computation does not occur in clocked stages of moving data, but rather exploits the inherent properties of a signal and how it interacts with a circuit, combining memory, logic, and computation into a single entity that can operate efficiently in a massively parallel manner. Some companies are beginning to examine returning to the long-outdated technology of analog computing to tackle the challenge. Analog computing attempts to manipulate small electrical currents via common analog circuit building blocks to do math.

These signals can be mixed and compared, replicating the behavior of their digital counterparts. However, while large-scale analog computing has been explored for decades for various potential applications, it has never been successfully executed as a commercial solution. Currently, the most promising approach to the problem is to integrate programmable analog computing elements into large arrays that are similar in principle to digital memory. Once the cells in an array are configured, an analog signal, synthesized by a digital-to-analog converter, is fed through the network.

As this signal flows through a network of pre-programmed resistors, the currents are added to produce a resultant analog signal, which can be converted back to a digital value via an analog-to-digital converter. Using an analog system for machine learning does, however, introduce several issues. Analog systems are inherently limited in precision by the noise floor. Much like using lower bit-width digital systems, though, this becomes less of an issue for certain types of networks.

If analog circuitry is used for inferencing, the result may not be deterministic and is more likely to be affected by heat, noise, or other external factors than a digital system. Another problem with analog machine learning is explainability. Unlike digital systems, analog systems offer no easy method to probe or debug the flow of information within them. Some in the industry propose that a solution may lie in using low-precision, high-speed analog processors for most situations, while funneling results that require higher confidence to slower, high-precision, easily interrogated digital systems.
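To make the reduced-precision point concrete, here is a rough Python sketch (mine, not from the video) comparing the same multiply-accumulate computed in float32 and in 8-bit integers:
Code:
import numpy as np

rng = np.random.default_rng(2)
w = rng.normal(size=256).astype(np.float32)  # weights
x = rng.normal(size=256).astype(np.float32)  # inputs

full = np.dot(w, x)  # full-precision MAC

# quantize both operands to int8, accumulate in int32, then rescale
scale = 127 / max(np.abs(w).max(), np.abs(x).max())
wq = np.round(w * scale).astype(np.int8)
xq = np.round(x * scale).astype(np.int8)
approx = np.dot(wq.astype(np.int32), xq.astype(np.int32)) / scale**2

print(full, approx)  # close results at a fraction of the bits moved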
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 30/05/2021 07:23:41
Microsoft's ZeRO-Infinity Library Trains 32 Trillion Parameter AI Model
https://www.infoq.com/news/2021/05/microsoft-zero-infinity/
Quote
Microsoft recently announced ZeRO-Infinity, an addition to their open-source DeepSpeed AI training library that optimizes memory use for training very large deep-learning models. Using ZeRO-Infinity, Microsoft trained a model with 32 trillion parameters on a cluster of 32 GPUs, and demonstrated fine-tuning of a 1 trillion parameter model on a single GPU.

The DeepSpeed team described the new features in a recent blog post. ZeRO-Infinity is the latest iteration of the Zero Redundancy Optimizer (ZeRO) family of memory optimization techniques. ZeRO-Infinity introduces several new strategies for addressing memory and bandwidth constraints when training large deep-learning models, including: a new offload engine for exploiting CPU and Non-Volatile Memory express (NVMe) memory, memory-centric tiling to handle large operators without model-parallelism, bandwidth-centric partitioning for reducing bandwidth costs, and an overlap-centric design for scheduling data communication. According to the DeepSpeed team:
Quote
The improved ZeRO-Infinity offers the system capability to go beyond the GPU memory wall and train models with tens of trillions of parameters, an order of magnitude bigger than state-of-the-art systems can support. It also offers a promising path toward training 100-trillion-parameter models.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 30/05/2021 11:58:59
https://www.cnbc.com/2021/05/27/europeans-want-to-replace-lawmakers-with-ai.html
More than half of Europeans want to replace lawmakers with AI, study says

Quote
Researchers at IE University’s Center for the Governance of Change asked 2,769 people from 11 countries worldwide how they would feel about reducing the number of national parliamentarians in their country and giving those seats to an AI that would have access to their data.

The results, published Thursday, showed that despite AI’s clear and obvious limitations, 51% of Europeans said they were in favor of such a move.

Outside Europe, some 75% of people surveyed in China supported the idea of replacing parliamentarians with AI, while 60% of American respondents opposed it.


IMO, politicians are more likely to sacrifice the best interests of their constituents in pursuit of their own, while an AI's decisions would depend on the terminal goal assigned to it and the data fed into it. That makes alignment with the universal terminal goal a critical step in building an AI with such huge power and responsibility.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 30/05/2021 22:26:52
https://syncedreview.com/2021/05/14/deepmind-podracer-tpu-based-rl-frameworks-deliver-exceptional-performance-at-low-cost-19

Google Replaces BERT Self-Attention with Fourier Transform: 92% Accuracy, 7 Times Faster on GPUs

Quote
Transformer architectures have come to dominate the natural language processing (NLP) field since their 2017 introduction. One of the only limitations to transformer application is the huge computational overhead of its key component — a self-attention mechanism that scales with quadratic complexity with regard to sequence length.

New research from a Google team proposes replacing the self-attention sublayers with simple linear transformations that “mix” input tokens to significantly speed up the transformer encoder with limited accuracy cost. Even more surprisingly, the team discovers that replacing the self-attention sublayer with a standard, unparameterized Fourier Transform achieves 92 percent of the accuracy of BERT on the GLUE benchmark, with training times that are seven times faster on GPUs and twice as fast on TPUs.

Transformers’ self-attention mechanism enables inputs to be represented with higher-order units to flexibly capture diverse syntactic and semantic relationships in natural language. Researchers have long regarded the associated high complexity and memory footprint as an unavoidable trade-off on transformers’ impressive performance. But in the paper FNet: Mixing Tokens with Fourier Transforms, the Google team challenges this thinking with FNet, a novel model that strikes an excellent balance between speed, memory footprint and accuracy.
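The token-mixing idea itself is remarkably simple. Here is a minimal Python sketch (my own reading of the FNet paper, not the authors' code) of the unparameterized Fourier mixing that replaces self-attention:
Code:
import numpy as np

def fourier_mixing(x):
    # FNet-style mixing: a 2D Fourier transform over the sequence and
    # hidden dimensions, keeping only the real part; no learned weights.
    return np.real(np.fft.fft2(x))

tokens = np.random.randn(128, 64)  # (sequence length, embedding size)
mixed = fourier_mixing(tokens)     # same shape, tokens now "mixed"
print(mixed.shape)                 # (128, 64)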
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 01/06/2021 12:21:21

https://bdtechtalks.com/2021/05/27/artificial-intelligence-neurons-assemblies/

A simple model of the brain provides new directions for AI research
Quote
Last week, Google Research held an online workshop on the conceptual understanding of deep learning. The workshop, which featured presentations by award-winning computer scientists and neuroscientists, discussed how new findings in deep learning and neuroscience can help create better artificial intelligence systems.
Quote
The cognitive and neuroscience communities are trying to make sense of how neural activity in the brain translates to language, mathematics, logic, reasoning, planning, and other functions. If scientists succeed at formulating the workings of the brain in terms of mathematical models, then they will open a new door to creating artificial intelligence systems that can emulate the human mind.

A lot of studies focus on activities at the level of single neurons. Until a few decades ago, scientists thought that single neurons corresponded to single thoughts. The most popular example is the “grandmother cell” theory, which claims there’s a single neuron in the brain that spikes every time you see your grandmother. More recent discoveries have refuted this claim and have proven that large groups of neurons are associated with each concept, and there might be overlaps between neurons that link to different concepts.

These groups of brain cells are called “assemblies,” which Papadimitriou describes as “a highly connected, stable set of neurons which represent something: a word, an idea, an object, etc.”

Award-winning neuroscientist György Buzsáki describes assemblies as “the alphabet of the brain.”
(https://i2.wp.com/bdtechtalks.com/wp-content/uploads/2021/05/brain-assemblies.jpg)
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 02/06/2021 23:26:03
I bring the discussion here from my main thread to explore the details further.
Trial and error would be much cheaper, hence more efficient, if we could do it in a virtual environment such as a computer simulation, provided we can make it adequately accurate and precise in representing objective reality.

An adequately accurate and precise virtual representation of objective reality is what we commonly call knowledge. It's a form of data compression.
At the most fundamental level, knowledge consists of two types of data: nodes and edges. They are the data points and the relationships among them, respectively.

In information theory, one bit of information reduces the uncertainty by a half. To eliminate uncertainty entirely, we need infinite bits of information.
In practice, we may think that we can make a precise statement, without leaving any uncertainty, using finite bits of information. For example, x-x=0 and x/x=1, with seemingly zero uncertainty.
On the other hand, to write the ratio between the circumference and diameter of a circle as a decimal number accurately, without uncertainty, infinite digits are required. What makes the difference here?
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 03/06/2021 02:49:39
https://en.wikipedia.org/wiki/Single_source_of_truth
Quote
In information systems design and theory, single source of truth (SSOT) is the practice of structuring information models and associated data schema such that every data element is mastered (or edited) in only one place. Any possible linkages to this data element (possibly in other areas of the relational schema or even in distant federated databases) are by reference only. Because all other locations of the data just refer back to the primary "source of truth" location, updates to the data element in the primary location propagate to the entire system without the possibility of a duplicate value somewhere being forgotten.

Deployment of an SSOT architecture is becoming increasingly important in enterprise settings where incorrectly linked duplicate or de-normalized data elements (a direct consequence of intentional or unintentional denormalization of any explicit data model) pose a risk for retrieval of outdated, and therefore incorrect, information. A common example would be the electronic health record, where it is imperative to accurately validate patient identity against a single referential repository, which serves as the SSOT. Duplicate representations of data within the enterprise would be implemented by the use of pointers rather than duplicate database tables, rows, or cells. This ensures that data updates to elements in the authoritative location are comprehensively distributed to all federated database constituencies in the larger overall enterprise architecture.
SSOT systems provide data that are authentic, relevant, and referable.

https://www.talend.com/resources/single-source-truth/
Quote
What is a single source of truth (SSOT)?
Single source of truth (SSOT) is a concept used to ensure that everyone in an organization bases business decisions on the same data. Creating a single source of truth is straightforward. To put an SSOT in place, an organization must provide relevant personnel with one source that stores the data points they need.

Data-driven decision making has placed never-before-seen levels of importance on collecting and analyzing data. While acting on data-derived business intelligence is essential for competitive brands today, companies often spend far too much time debating which numbers, invariably from different sources, are the right numbers to use. Metrics from social platforms may paint one picture of a company’s target demographics while vendor feedback or online questionnaires may say something entirely different. How are corporate leaders to decide whose data points to use in such a scenario?

Establishing a single source of truth eliminates this issue. Instead of debating which of many competing data sources should be used for making company decisions, everyone can use the same, unified source for all their data needs. It provides data that can be used by anyone, in any way, across the entire organization.
Currently, efforts to establish a single source of truth are becoming common in business organizations as well as governments. But they are still limited to internal usage, and seemingly independent from each other, although they share the same objective reality. When there are discrepancies, it feels as if there were alternative truths.
A common example I often see is road closures imposed by the government that are not accurately represented in Google Maps.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 03/06/2021 11:58:31
In practice, we may think that we can make a precise statement, without leaving any uncertainty, using finite bits of information. For example, x-x=0 and x/x=1, with seemingly zero uncertainty.
On the other hand, to write the ratio between the circumference and diameter of a circle as a decimal number accurately, without uncertainty, infinite digits are required. What makes the difference here?
Here is another example. We can say that the smallest prime number is 2, without leaving any uncertainty.
The square root of -1 is i.
The speed of light through vacuum is 299792458 metres per second.
We can also say that the ratio between the circumference and diameter of a circle is π, with no uncertainty.
If someone says that a value equals e, we need more information as context: whether it refers to Euler's number, the charge of the electron, or something else.
At this point it should be clear that any new information must be related to preexisting common knowledge for it to be meaningful.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 03/06/2021 16:34:24
Calculating pi efficiently.
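One classical route, shown here as a Python sketch of my own (Machin's formula; the video may use a different method), yields as many correct decimal digits as requested:
Code:
from decimal import Decimal, getcontext

def arctan_inv(x, digits):
    # arctan(1/x) via the Taylor series: sum of (-1)^k / ((2k+1) * x^(2k+1))
    power = Decimal(1) / x
    total = power
    k, x2 = 1, x * x
    while abs(power) > Decimal(10) ** -(digits + 5):
        power /= -x2
        total += power / (2 * k + 1)
        k += 1
    return total

def pi_machin(digits):
    # Machin's formula: pi = 16*arctan(1/5) - 4*arctan(1/239)
    getcontext().prec = digits + 10  # working precision with guard digits
    return 16 * arctan_inv(5, digits) - 4 * arctan_inv(239, digits)

print(pi_machin(50))  # 3.14159265358979323846...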
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 04/06/2021 06:29:45
Quote
Sporting 1.75 trillion parameters, Wu Dao 2.0 is roughly ten times the size of Open AI's GPT-3.
https://www.engadget.com/chinas-gigantic-multi-modal-ai-is-no-one-trick-pony-211414388.html

Quote
When Open AI's GPT-3 model made its debut in May of 2020, its performance was widely considered to be the literal state of the art. Capable of generating text indiscernible from human-crafted prose, GPT-3 set a new standard in deep learning. But oh what a difference a year makes. Researchers from the Beijing Academy of Artificial Intelligence announced on Tuesday the release of their own generative deep learning model, Wu Dao, a mammoth AI seemingly capable of doing everything GPT-3 can do, and more.

First off, Wu Dao is flat out enormous. It's been trained on 1.75 trillion parameters (essentially, the model's self-selected coefficients) which is a full ten times larger than the 175 billion GPT-3 was trained on and 150 billion parameters larger than Google's Switch Transformers.

With all that computing power comes a whole bunch of capabilities. Unlike most deep learning models which perform a single task — write copy, generate deep fakes, recognize faces, win at Go — Wu Dao is multi-modal, similar in theory to Facebook's anti-hatespeech AI or Google's recently released MUM. BAAI researchers demonstrated Wu Dao's abilities to perform natural language processing, text generation, image recognition, and image generation tasks during the lab's annual conference on Tuesday. The model can not only write essays, poems and couplets in traditional Chinese, it can both generate alt text based off of a static image and generate nearly photorealistic images based on natural language descriptions. Wu Dao also showed off its ability to power virtual idols (with a little help from Microsoft-spinoff XiaoIce) and predict the 3D structures of proteins like AlphaFold.

“The way to artificial general intelligence is big models and big computer,” Dr. Zhang Hongjiang, chairman of BAAI, said during the conference Tuesday. “What we are building is a power plant for the future of AI, with mega data, mega computing power, and mega models, we can transform data to fuel the AI applications of the future.”
The article shows how close we are to building AGI.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 04/06/2021 10:12:53
At this point it should be clear that any new information must be related to preexisting common knowledge for it to be meaningful.
Here is an example from our daily life. If I say to someone face to face that I found his ID card and I'm keeping it in the pocket of the shirt I'm wearing, he can quickly find it. But if I speak to someone over the phone, it won't be clear to him until he knows my location. The location can be stated as the name of the building, the address, or geographic coordinates as latitude and longitude. If I'm inside a tall building, the vertical position, such as the floor number or altitude, is also necessary.
If I spoke to an alien in another solar system, I would need to give the position of planet Earth and the Sun. If the alien were from another galaxy, then I would need to give the position of the Milky Way too.
If I tell you that X=2Y, you get no new information until you can relate it to your preexisting knowledge.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 06/06/2021 00:05:12
A common example I often see is road closures imposed by the government that are not accurately represented in Google Maps.
Yesterday I went to a wedding party. The invitation contained a QR code showing the location, which could be traced in Google Maps. Due to a traffic jam, it recommended an alternative route. I didn't expect it to take us across a flooded road.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 06/06/2021 00:17:37
Here is another picture from the front.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 06/06/2021 07:13:29
Interpretable deep learning uncovers cellular properties in label-free live cell images that are predictive of highly metastatic melanoma.
Quote
Summary
Deep learning has emerged as the technique of choice for identifying hidden patterns in cell imaging data but is often criticized as “black box.” Here, we employ a generative neural network in combination with supervised machine learning to classify patient-derived melanoma xenografts as “efficient” or “inefficient” metastatic, validate predictions regarding melanoma cell lines with unknown metastatic efficiency in mouse xenografts, and use the network to generate in silico cell images that amplify the critical predictive cell properties. These exaggerated images unveiled pseudopodial extensions and increased light scattering as hallmark properties of metastatic cells. We validated this interpretation using live cells spontaneously transitioning between states indicative of low and high metastatic efficiency. This study illustrates how the application of artificial intelligence can support the identification of cellular properties that are predictive of complex phenotypes and integrated cell functions but are too subtle to be identified in the raw imagery by a human expert. A record of this paper’s transparent peer review process is included in the supplemental information.
https://www.sciencedirect.com/science/article/pii/S2405471221001587
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 06/06/2021 07:16:02
https://scitechdaily.com/whos-to-die-and-whos-to-live-mechanical-cue-is-at-the-origin-of-cell-death-decision/

Quote
In past studies, researchers have found that C. elegans gonads generate more germ cells than needed and that only half of them grow to become oocytes, while the rest shrinks and die by physiological apoptosis, a programmed cell death that occurs in multicellular organisms. Now, scientists from the Biotechnology Center of the TU Dresden (BIOTEC), the Max Planck Institute of Molecular Cell Biology and Genetics (MPI-CBG), the Cluster of Excellence Physics of Life (PoL) at the TU Dresden, the Max Planck Institute for the Physics of Complex Systems (MPI-PKS), the Flatiron Institute, NY, and the University of California, Berkeley, found evidence to answer the question of what triggers this cell fate decision between life and death in the germline.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 08/06/2021 12:24:47
At this point it should be clear that any new information must be related to preexisting common knowledge for it to be meaningful.
Here is another example. Someone gives us a message, 11001010.
There are many ways to interpret this. It could be a decimal number, or a number in another base such as hexadecimal or binary. Even in binary, we can treat it as signed or unsigned. Some of the bits could be a start bit, stop bit, or parity bit.
It could be treated as binary coded decimal.
It could also be a Morse code.
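A quick Python sketch (my own) of a few of those readings of the same eight characters:
Code:
s = "11001010"

print(int(s))      # as a decimal number: 11001010
print(int(s, 2))   # as unsigned binary: 202
print(int(s, 16))  # as hexadecimal: 285216784

v = int(s, 2)      # as signed 8-bit two's complement: -54
print(v - 256 if v >= 128 else v)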
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 08/06/2021 14:21:42
Due to a traffic jam, it recommended an alternative route.
A common way to reduce traffic jams is to apply an odd-even rule: on odd dates, only vehicles with odd plate numbers are allowed to pass, and vice versa. Assuming that plate numbers are generally assigned consecutively, the least significant bit suddenly becomes the most important bit.
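A tiny Python sketch (mine, with a hypothetical function name) of the rule:
Code:
def allowed_to_pass(plate_number, day_of_month):
    # Odd-even rule: the plate's parity must match the date's parity,
    # so everything hinges on the plate's least significant bit.
    return plate_number % 2 == day_of_month % 2

print(allowed_to_pass(1234, 17))  # False: even plate on an odd date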
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 09/06/2021 17:10:28
In information theory, one bit of information reduces the uncertainty by a half. To eliminate uncertainty entirely, we need infinite bits of information.
The number of bits specifies the quantity of information. Its conformity with objective reality as the ground truth specifies the quality of the information. Those concepts are similar to precision and accuracy, respectively.
Previously, I created a thread specifically discussing accuracy and precision from a practical perspective. I tried to quantify the data quality and quantity to be used in a database system that virtualizes plant operations to make them more manageable. I wanted to use the most general forms possible so they could be used flexibly for a wide range of applications. Perhaps my approach was considered so unconventional that it had to be put in the New Theories section.
In measurement problems, our results are compared to a unit of measurement and expressed as a number. The value may be accompanied by a tolerance or quantification of uncertainty, due to the measurement method or some unpredictable external factors. We may be so familiar with the concept of numbers, especially decimal-based ones, since an early age that we often take them for granted.
Title: Re: How close are we from building a virtual universe?
Post by: Zer0 on 09/06/2021 21:17:38

Hello Yusuf!
🙏

I am quite interested & enthusiastic about this particular Subject.
👍
But surely Not as much as You are.

Just wanted to say, this OP is quite a Good Read for anyone who's interested on the Topic.
👌


P.S. - Rather than googling for similar articles, I'd just visit in here n read it back to back.
👍
You ' Quote ' information, also provide Official Links for further details & post Images too.
😇
Very Nice & Good Work!
✌️
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 10/06/2021 00:19:11
Hi Zer0. Thank you for your kind words. I really appreciate it. It gives me positive feedback that I'm heading in the right direction.
I also appreciate negative feedback to let me know if I've made mistakes or misunderstood some concepts. It helps me avoid further mistakes.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 10/06/2021 11:09:03
Autonomous flying vehicles in 'smart cities' - NASA working on infrastructure


Quote
Data and Reasoning Fabric (DRF) could one day "assemble and provide useful information to autonomous vehicles in real time." The information system is being developed by NASA.

Credit: NASA
Here is the latest development of a shared virtual universe among autonomous vehicles. It's a step closer toward the unified virtual universe that is the idea behind this thread, although its usage is still limited to autonomous vehicles. The next step would be integration between this system and other established virtualization systems, such as those of governments and corporations.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 10/06/2021 22:40:39
https://venturebeat.com/2021/06/09/deepmind-says-reinforcement-learning-is-enough-to-reach-general-ai/

Quote
In their decades-long chase to create artificial intelligence, computer scientists have designed and developed all kinds of complicated mechanisms and technologies to replicate vision, language, reasoning, motor skills, and other abilities associated with intelligent life. While these efforts have resulted in AI systems that can efficiently solve specific problems in limited environments, they fall short of developing the kind of general intelligence seen in humans and animals.

In a new paper submitted to the peer-reviewed Artificial Intelligence journal, scientists at U.K.-based AI lab DeepMind argue that intelligence and its associated abilities will emerge not from formulating and solving complicated problems but by sticking to a simple but powerful principle: reward maximization.

Titled “Reward is Enough,” the paper, which is still in pre-proof as of this writing, draws inspiration from studying the evolution of natural intelligence as well as drawing lessons from recent achievements in artificial intelligence. The authors suggest that reward maximization and trial-and-error experience are enough to develop behavior that exhibits the kind of abilities associated with intelligence. And from this, they conclude that reinforcement learning, a branch of AI that is based on reward maximization, can lead to the development of artificial general intelligence.

...

In the race to develop AI, besides hardware capacity, the results depend on the choice of reward function. It's like choosing instrumental goals that are aligned with the terminal goals. The natural long-term reward is survival. Nature also provides short-term reward functions through pleasure and pain.
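Reward maximization can be demonstrated in miniature. Below is a tabular Q-learning sketch of my own (a toy corridor world, not DeepMind's setup), where goal-seeking behavior emerges purely from a reward signal:
Code:
import random

N_STATES, ACTIONS = 6, (-1, +1)  # positions 0..5, move left or right
alpha, gamma, eps = 0.1, 0.9, 0.1
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

for episode in range(500):
    s = 0
    while s != N_STATES - 1:
        # epsilon-greedy: mostly exploit learned values, sometimes explore
        if random.random() < eps:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s2 = min(max(s + a, 0), N_STATES - 1)
        r = 1.0 if s2 == N_STATES - 1 else 0.0  # reward only at the goal
        # Q-learning update toward reward plus discounted future value
        Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, b)] for b in ACTIONS) - Q[(s, a)])
        s = s2

print(max(ACTIONS, key=lambda act: Q[(0, act)]))  # 1: learned to move right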
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 12/06/2021 08:08:45
"Why Is Quantum Computing So Hard to Explain | Quanta Magazine" https://www.quantamagazine.org/why-is-quantum-computing-so-hard-to-explain-20210608/
Quote
Quantum computers, you might have heard, are magical uber-machines that will soon cure cancer and global warming by trying all possible answers in different parallel universes. For 15 years, on my blog and elsewhere, I’ve railed against this cartoonish vision, trying to explain what I see as the subtler but ironically even more fascinating truth. I approach this as a public service and almost my moral duty as a quantum computing researcher. Alas, the work feels Sisyphean: The cringeworthy hype about quantum computers has only increased over the years, as corporations and governments have invested billions, and as the technology has progressed to programmable 50-qubit devices that (on certain contrived benchmarks) really can give the world’s biggest supercomputers a run for their money. And just as in cryptocurrency, machine learning and other trendy fields, with money have come hucksters.

In reflective moments, though, I get it. The reality is that even if you removed all the bad incentives and the greed, quantum computing would still be hard to explain briefly and honestly without math. As the quantum computing pioneer Richard Feynman once said about the quantum electrodynamics work that won him the Nobel Prize, if it were possible to describe it in a few sentences, it wouldn’t have been worth a Nobel Prize.

Not that that’s stopped people from trying. Ever since Peter Shor discovered in 1994 that a quantum computer could break most of the encryption that protects transactions on the internet, excitement about the technology has been driven by more than just intellectual curiosity. Indeed, developments in the field typically get covered as business or technology stories rather than as science ones.
Quote
Once someone understands these concepts, I’d say they’re ready to start reading — or possibly even writing — an article on the latest claimed advance in quantum computing. They’ll know which questions to ask in the constant struggle to distinguish reality from hype. Understanding this stuff really is possible — after all, it isn’t rocket science; it’s just quantum computing!
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 14/06/2021 07:17:19
There's Plenty Moore Room: IBM's New 2nm CPU
Quote
People talk about the death of semiconductors being able to shrink. IBM is laughing in your face - there's plenty of room, and plenty of density, and they've developed a proof of concept to showcase where the technology can go. Here's a look at IBM's new 2nm silicon.

Intro

0:00 The Future in 2024
0:26 What Nanometers Really Mean
3:05 Transistor Density
4:02 IBM on 2nm
5:38 Comparing against current nodes
7:00 What's on the chip
7:40 Gate-All-Around Nanosheets
8:45 Albany, NY
9:16 Performance of 2nm
9:42 Coming to Market and Pathfinding
11:06 EUV and Future of EUV (Jim Keller)
14:12 Minimum Specification: Bite a Wafer
14:39 Cat Tax
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 14/06/2021 16:18:10
We may be so familiar with the concept of numbers, especially decimal-based ones, since an early age that we often take them for granted.



The smallest base for a positional number system is 2. That's why most computers are binary systems. For human-machine interfaces such as programming languages, some extensions of binary notation are often useful, such as octal, hexadecimal, or BCD.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 14/06/2021 22:06:06
Quote
If you use such social media websites as Facebook and Twitter, you may have come across posts flagged with warnings about misinformation. So far, most misinformation – flagged and unflagged – has been aimed at the general public. Imagine the possibility of misinformation – information that is false or misleading – in scientific and technical fields like cybersecurity, public safety and medicine.

"Cybersecurity experts face a new challenge: AI capable of tricking them" https://www.inputmag.com/culture/cybersecurity-experts-face-a-new-challenge-ai-capable-of-tricking-them/amp

Quote
General misinformation often aims to tarnish the reputation of companies or public figures. Misinformation within communities of expertise has the potential for scary outcomes such as delivering incorrect medical advice to doctors and patients. This could put lives at risk.

To test this threat, we studied the impacts of spreading misinformation in the cybersecurity and medical communities. We used artificial intelligence models dubbed transformers to generate false cybersecurity news and COVID-19 medical studies and presented the cybersecurity misinformation to cybersecurity experts for testing. We found that transformer-generated misinformation was able to fool cybersecurity experts.
This result emphasizes the urgency of reliable sources of information that accurately and precisely represent objective reality as the ground truth.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 14/06/2021 23:26:49
This result emphasizes the urgency of reliable sources of information that accurately and precisely represent objective reality as the ground truth.
This brings us back to the question about the accuracy and precision of our information sources. Here are the dictionary definitions of precision.
Quote
the quality, condition, or fact of being exact and accurate.
"the deal was planned and executed with military precision"

TECHNICAL
refinement in a measurement, calculation, or specification, especially as represented by the number of digits given.
"a precision of six decimal figures"
And here are the definitions of accuracy.
Quote
the quality or state of being correct or precise.
"we have confidence in the accuracy of the statistics"

TECHNICAL
the degree to which the result of a measurement, calculation, or specification conforms to the correct value or a standard.
"the accuracy of radiocarbon dating"

We can see that in the general definitions, the meanings of precision and accuracy are mixed, while the technical definitions are restricted to numeric writing, especially decimal-based numbers. We can quickly realize that those definitions can't cover all usages of the words.

Under the technical definition, non-numeric information can't be described. For example: Alice is going to Japan. It would be more precise to say that she's going to Tokyo, and more precise still if the district or even the complete address were given. But if it turns out that she's going to Kyoto instead of Tokyo, then the information about the destination city is inaccurate, although still more precise than the destination country alone.

Expressing the same numeric value in different number bases gives us different precision.

In general usage, it should be possible to express the precision of information independently of its accuracy. There is information that is accurate but imprecise; there is also information that is precise but inaccurate.

This video tries to distinguish them.

Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 15/06/2021 08:59:01
Voluntarist Epistemology

This video also contains an example of balancing between accuracy and precision, especially from 26:40 to 31:26

Quote
According to Bas van Fraassen's voluntarist epistemology, the only constraint on rational belief is consistency. Beyond this, our beliefs must be guided not by rules of reason, but by the passions: emotions, values, and intuitions. This video examines the grounds for voluntarism in the failure of traditional epistemology, and in the need for an epistemology that can properly accommodate conceptual revolutions. Then I turn to the objections to voluntarism.

Outline of voluntarism:
0:00 - Introduction
4:02 - Why consistency?
8:13 - Failure of traditional epistemology
18:37 - Voluntarism against skepticism
31:26 - Conceptual revolution and objectifying epistemology
Objections to voluntarism:
48:38 - Arbitrariness
53:00 - Too permissive?
1:01:34 - Too conservative?
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 16/06/2021 12:46:36
Expressing the same numeric value in different number bases gives us different precision.
Since binary is the smallest base, it is the natural choice for expressing precision. So, the precision of a piece of information depends on how many bits of content it carries.
In some programming languages, we can define a floating-point variable using a single- or double-precision data type. So my assertion that the precision of a piece of information represents its data quantity is not an entirely new concept, although many forum members here didn't seem to agree.

https://en.wikipedia.org/wiki/Single-precision_floating-point_format
(https://upload.wikimedia.org/wikipedia/commons/thumb/d/d2/Float_example.svg/590px-Float_example.svg.png)
(https://wikimedia.org/api/rest_v1/media/math/render/svg/5858d28deea4237a7c1320f7e649fb104aecb0e5)
(https://wikimedia.org/api/rest_v1/media/math/render/svg/908c155d6002beadf2df5a7c05e954ec2373ca16)

https://en.wikipedia.org/wiki/Double-precision_floating-point_format
(https://upload.wikimedia.org/wikipedia/commons/thumb/a/a9/IEEE_754_Double_Floating_Point_Format.svg/618px-IEEE_754_Double_Floating_Point_Format.svg.png)
(https://wikimedia.org/api/rest_v1/media/math/render/svg/61345d47f069d645947b9c0ab676c75551f1b188)
(https://wikimedia.org/api/rest_v1/media/math/render/svg/5f677b27f52fcd521355049a560d53b5c01800e1)
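The two formats can be inspected directly. Here is a small Python sketch (mine) showing π packed into 32 and 64 bits:
Code:
import struct

pi = 3.141592653589793

# pack as IEEE 754 single (32-bit) and double (64-bit), then view the bits
single_bits = struct.unpack('>I', struct.pack('>f', pi))[0]
double_bits = struct.unpack('>Q', struct.pack('>d', pi))[0]
print(f"{single_bits:032b}")  # 1 sign + 8 exponent + 23 fraction bits
print(f"{double_bits:064b}")  # 1 sign + 11 exponent + 52 fraction bits

# round-tripping through single precision loses the lower decimal digits
print(struct.unpack('>f', struct.pack('>f', pi))[0])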
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 17/06/2021 05:17:42
The Longest DNA in the Animal Kingdom Found - Not What I Expected

DNA is the largest information storage medium provided by nature. Studying how it works is highly important.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 17/06/2021 09:43:31
Expressing the same numeric value in different number bases gives us different precision.
Since binary is the smallest base, it is the natural choice for expressing precision. So, the precision of a piece of information depends on how many bits of content it carries.
In some programming languages, we can define a floating-point variable using a single- or double-precision data type. So my assertion that the precision of a piece of information represents its data quantity is not an entirely new concept, although many forum members here didn't seem to agree.

It's clear that bits in different positions of the floating-point representation have different significance in determining the numeric value of the data. The significance of a bit can be defined as the difference in the data value caused by flipping it between 0 and 1. In general, bits are sorted from highest to lowest significance (from left to right in writing), except for the sign bit, whose significance depends on the value determined by the other bits: if that value is small, the sign bit has low significance; if it is big, the sign bit has high significance.
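A small Python sketch (my own) makes this concrete by flipping individual bits of a float32 and watching how much the value moves:
Code:
import struct

def flip_bit(value, i):
    # flip bit i (0 = least significant) of the float32 representation
    bits = struct.unpack('>I', struct.pack('>f', value))[0]
    return struct.unpack('>f', struct.pack('>I', bits ^ (1 << i)))[0]

x = 3.14
for i in (0, 10, 22, 30, 31):  # low/mid/high fraction, exponent, sign
    print(i, flip_bit(x, i) - x)
# low fraction bits barely move the value, higher fraction bits move it
# more, the exponent bit changes it enormously, and the sign bit's
# effect equals twice the magnitude (here about 6.28)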
 
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 17/06/2021 21:55:45
In real-life experience, we often get and use numerical information with even lower precision than what single-precision floating point expresses. In many applications, it's enough to write π as 3.14.
In floating-point representation, three decimal digits can be written using 10 bits of the fraction part. The rest of the bits are rounded to 0; we don't care about their actual values.
By defining precision as the quantity of information, we can use it for numeric as well as non-numeric data.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 17/06/2021 23:25:53
As I mentioned earlier, the actual or practical precision of a piece of information also depends on the assumptions attached to it. For example, if I say that your car key is in Waldo's pocket, you would be able to find it quickly, as long as you can find Waldo first. In this case, my explicit statement only contains a few bits of information, but it becomes highly precise when combined with correct assumptions not expressed in the statement, like which Waldo I'm talking about.
Another example: if I say that the value of x equals 2π, modern people would recognize it with very high precision, because the symbols carry almost unambiguous meanings in the modern world. It would have been different in ancient times.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 18/06/2021 05:37:56
The next problem is the accuracy of the information. Let's start with a non-numeric case, such as finding Waldo in a picture.
(https://fiverr-res.cloudinary.com/images/q_auto,f_auto/gigs/140639081/original/7e7a04151cd0f368c6d56e4fd7abf5d02897b4e4/find-wally-or-waldo-for-you.jpg)

Saying that Waldo is in the picture is accurate, but not precise.
Saying that Waldo is at the bottom right corner of the picture is more precise, but not accurate.
Saying that Waldo is around the center of the picture, not far away from the red tent is more accurate and precise.

The first and third statements are accurate because they include the true value of Waldo's position.
The second statement becomes inaccurate because it excludes the true value of Waldo's position.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 18/06/2021 05:43:49
The Trillion-Transistor Chip That Just Left a Supercomputer in the Dust
Quote
The Cerebras Wafer-Scale Engine is 8.5 inches wide and contains 1.2 trillion transistors. The next biggest chip, the NVIDIA A100 GPU, measures about one inch across and has only 54 billion transistors. The WSE has made its way into a handful of supercomputing labs, including the National Energy Technology Laboratory. Researchers pitted the chip against a supercomputer in a fluid dynamics simulation and found it to be faster: the team said the chip completed a combustion simulation for a power plant approximately 200 times faster.

Joule is the 81st fastest supercomputer in the world, with a price tag of $1.2 billion. The WSE outpacing it is all about design: a traditional cluster is like an old-fashioned company doing all its business on paper, using couriers to send and collect documents from branches and archives across the city, whereas in the CS-1, the computer built around the WSE, the whole process takes place within a single silicon wafer.

Cerebras has developed a chip suited to problems small enough to fit on a single wafer. The megachip is far more efficient than a traditional supercomputer that needs a ton of conventional chips networked together. The next-generation chip will have 2.6 trillion transistors, 850,000 cores, and more than double the memory. It still remains to be seen whether wafer-scale computing really does take off, but Cerebras is the first to seriously pursue it.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 18/06/2021 06:16:37
The next problem is the accuracy of the information. Let's start with a non-numeric case, such as finding Waldo in a picture.
Unlike precision, which can be determined without knowing the true value of the information, accuracy cannot be determined without knowing it.
Saying that π is more than 0 is accurate, because it doesn't contain false information. But saying that it's less than 3.141 is not accurate, because it contains false information.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 19/06/2021 10:11:48
The precision of a piece of information should be considered as the amount of uncertainty it can remove. The number of bits alone is not adequate.
Here is an example.
  • 2.99999999... ≤ π ≤ 3.9999999...
  • 3 ≤ π ≤ 4
The many bits in the first statement don't remove more uncertainty than the fewer bits in the second statement. So we can't say that the first statement has higher precision than the second, although it contains many more bits.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 21/06/2021 11:02:48
The World’s Most Powerful Supercomputer Is Almost Here

Quote
The next generation of computing is on the horizon, and several new machines may just smash all the records...with two nations neck and neck in a race to get there first.

The ENIAC was capable of about 400 FLOPS. FLOPS stands for floating-point operations per second, which basically tells us how many calculations the computer can do per second. This makes measuring FLOPS a way of calculating computing power.

So, the ENIAC was sitting at 400 FLOPS in 1945, and in the ten years it was operational, it may have performed more calculations than all of humanity had up until that point in time—that was the kind of leap digital computing gave us. From that 400 FLOPS we upgraded to 10,000 FLOPS, and then a million, a billion, a trillion, a quadrillion FLOPS. That’s petascale computing, and that’s the level of today’s most powerful supercomputers.

But what's coming next is exascale computing. That's 18 zeroes: 1 quintillion operations per second. Exascale computers will be a thousand times better performing than the petascale machines we have now. Or, to put it another way, if you wanted to do the same number of calculations that an exascale computer can do in ONE second, you'd be doing math for over 31 billion years.
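That last figure checks out with one line of arithmetic (a quick sketch of mine):
Code:
seconds_per_year = 365.25 * 24 * 3600  # about 3.16e7 seconds
# 1e18 calculations at one per second, expressed in billions of years:
print(1e18 / seconds_per_year / 1e9)   # about 31.7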
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 22/06/2021 03:16:37
The virtual universe is useless unless it can be translated into actions in objective reality. 3D printing improves the interface between those two universes.


Quote
Three-dimensional printing promises new opportunities for more sustainable and local production. But does 3D printing make everything better? This film shows how innovation can change the world of goods.

Is the way we make things about to become the next revolution? Traditional manufacturing techniques like milling, casting and gluing could soon be replaced by 3D printing -saving enormous amounts of material and energy. Aircraft maker Airbus is already benefiting from the new manufacturing method. Beginning this year, the A350 airliner will fly with printed door locking shafts. Where previously ten parts had to be installed, today that’s down to just one. It saves a lot of manufacturing steps. And 3D printing can imitate nature's efficient construction processes, something barely possible in conventional manufacturing. Another benefit of the new technology is that components can become significantly lighter and more robust, and material can be saved during production. But the Airbus development team is not yet satisfied. The printed cabin partition in the A350 has become 45 percent lighter thanks to the new structure, but it is complex and expensive to manufacture. It takes 900 hours to print just one partition, a problem that print manufacturers have not yet been able to solve. The technology is already being used in Adidas shoes: The sportswear company says it is currently the world’s largest manufacturer of 3D-printed components. The next step is sustainable materials, such as biological synthetic resins that do not use petroleum and can be liquefied again without loss of quality and are therefore completely recyclable. This documentary sheds light on the diverse uses of 3D printing.
Title: Re: How close are we from building a virtual universe?
Post by: Just thinking on 22/06/2021 06:02:32
About as far away as when we first started. As one diode said to the other we have been together for so long and I still don't know you.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 24/06/2021 10:50:05
The precision of a piece of information should be considered as the amount of uncertainty it can remove. The number of bits alone is not adequate.
Here is an example.
  • 2.99999999... ≤ π ≤ 3.9999999...
  • 3 ≤ π ≤ 4
The many bits in the first statement don't remove more uncertainty than the fewer bits in the second statement. So we can't say that the first statement has higher precision than the second, although it contains many more bits.
It looks like the equals sign implicitly puts two limits at once: the low and high limits of the value. When we say that two values are identical, or exactly the same by definition, we can use the ≡ symbol. When they are approximately equal, we use the ≈ symbol, acknowledging that there are cases where the difference can't be neglected.

The usage of the = symbol then leaves some ambiguity: the values involved are not necessarily identical, but the difference between them must be negligible in almost all cases.
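Programming languages face the same ambiguity. A two-line Python sketch (mine):
Code:
import math

# exact equality vs negligible difference (the = vs ≈ distinction)
print(0.1 + 0.2 == 0.3)              # False: representations differ slightly
print(math.isclose(0.1 + 0.2, 0.3))  # True: the difference is negligible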
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 26/06/2021 10:14:10
https://towardsdatascience.com/cant-access-gpt-3-here-s-gpt-j-its-open-source-cousin-8af86a638b11
Similar to GPT-3, and everyone can use it.

Quote

Can’t Access GPT-3? Here’s GPT-J — Its Open-Source Cousin
Similar to GPT-3, and everyone can use it.

The AI world was thrilled when OpenAI released the beta API for GPT-3. It gave developers the chance to play with the amazing system and look for new exciting use cases. Yet, OpenAI decided not to open (pun intended) the API to everyone, but only to a selected group of people through a waitlist. If they were worried about the misuse and harmful outcomes, they’d have done the same as with GPT-2: not releasing it to the public at all.
It’s surprising that a company that claims its mission is “to ensure that artificial general intelligence benefits all of humanity” wouldn’t allow people to thoroughly investigate the system. That’s why we should appreciate the work of people like the team behind EleutherAI, a “collective of researchers working to open source AI research.” Because GPT-3 is so popular, they’ve been trying to replicate the versions of the model for everyone to use, aiming at building a system comparable to GPT-3-175B, the AI king. In this article, I’ll talk about EleutherAI and GPT-J, the open-source cousin of GPT-3. Enjoy!
Quote
GPT-J is 30 times smaller than GPT-3-175B. Despite the large difference, GPT-J produces better code, just because it was slightly more optimized to do the task. This implies that optimization towards improving specific abilities could give rise to systems that are way better than GPT-3. And this isn't limited to coding: we could create, for every task, a system that would top GPT-3 with ease. GPT-3 would become a jack of all trades, whereas the specialized systems would be the true masters.
This hypothesis goes in line with the results OpenAI researchers Irene Solaiman and Christy Dennison got from PALMS. They fine-tuned GPT-3 with a small curated dataset to prevent the system from producing biased outputs and got amazing results. In a way, it was an optimization; they specialized GPT-3 to be unbiased — as understood by ethical institutions in the U.S. It seems that GPT-3 isn’t only very powerful, but that a notable amount of power is still latent within, waiting to be exploited by specialization.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 29/06/2021 13:11:13
GPT-J is 30 times smaller than GPT-3-175B. Despite the large difference, GPT-J produces better code, just because it was slightly more optimized to do the task. This implies that optimization towards improving specific abilities could give rise to systems that are way better than GPT-3.
It looks like the way to general intelligence is to combine several neural networks trained separately for specific tasks. A dedicated network would then be needed to determine which part is suitable for the problem at hand.
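A minimal sketch of that idea, in the spirit of a mixture-of-experts architecture (the names, shapes and random weights below are purely illustrative, not any real system's):

Code:
import numpy as np

rng = np.random.default_rng(0)

class Expert:
    """A stand-in for a network trained separately for one specific task."""
    def __init__(self, in_dim, out_dim):
        self.W = rng.normal(size=(in_dim, out_dim))
    def __call__(self, x):
        return np.tanh(x @ self.W)

class Gate:
    """The dedicated network that scores which expert suits the input."""
    def __init__(self, in_dim, n_experts):
        self.W = rng.normal(size=(in_dim, n_experts))
    def __call__(self, x):
        logits = x @ self.W
        e = np.exp(logits - logits.max())
        return e / e.sum()                  # softmax over the experts

experts = [Expert(8, 4) for _ in range(3)]  # e.g. code, prose, math
gate = Gate(8, len(experts))

x = rng.normal(size=8)
weights = gate(x)
chosen = int(np.argmax(weights))            # hard routing: pick one expert
y = experts[chosen](x)
print("routed to expert", chosen, "with weight", round(float(weights[chosen]), 2))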
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 30/06/2021 21:44:21
What he's trying to build is basically similar to a virtual universe. Note that this video was uploaded 7 years ago.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 01/07/2021 11:22:31
https://www.theverge.com/2021/6/29/22555777/github-openai-ai-tool-autocomplete-code

Quote
GitHub and OpenAI have launched a technical preview of a new AI tool called Copilot, which lives inside the Visual Studio Code editor and autocompletes code snippets.

Copilot does more than just parrot back code it’s seen before, according to GitHub. It instead analyzes the code you’ve already written and generates new matching code, including specific functions that were previously called. Examples on the project’s website include automatically writing the code to import tweets, draw a scatterplot, or grab a Goodreads rating.

Quote
GitHub sees this as an evolution of pair programming, where two coders will work on the same project to catch each other's mistakes and speed up the development process. With Copilot, one of those coders is virtual.

Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 02/07/2021 03:45:23
https://jrodthoughts.medium.com/objects-that-sound-deepminds-research-show-how-to-combine-vision-and-audio-in-a-single-model-c4051ea21495
Quote

Since we are babies, we intuitively develop the ability to correlate the input from different cognitive sensors such as vision, audio and text. While listening to a symphony we immediately visualize an orchestra, and when admiring a landscape painting our brain associates the visual with specific sounds. The relationships between images, sounds and texts are dictated by connections between different sections of the brain responsible for analyzing specific cognitive input. In that sense, you can say that we are hardwired to learn simultaneously from multiple cognitive signals. Despite the advancements in different deep learning areas such as image, language and sound analysis, most neural networks remain specialized on a single input data type. A few years ago, researchers from Alphabet's subsidiary DeepMind published a research paper proposing a method that can simultaneously analyze audio and visual inputs and learn the relationships between objects and sounds in a common environment.
(https://miro.medium.com/max/2100/1*hFzT9BNIL6FopN9tkch29w.png)
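For a rough picture of how two input types can share one model (a toy two-tower sketch with random weights; DeepMind's actual training setup is not reproduced here):

Code:
import numpy as np

rng = np.random.default_rng(1)

# Two specialized sub-networks project image and audio features into a
# shared embedding space, where matching pairs should score high.
W_vision = rng.normal(size=(1024, 64))   # image features -> shared space
W_audio = rng.normal(size=(128, 64))     # audio features -> shared space

def embed(x, W):
    v = x @ W
    return v / np.linalg.norm(v)         # unit-length embedding

image_vec = embed(rng.normal(size=1024), W_vision)
sound_vec = embed(rng.normal(size=128), W_audio)

# Cosine similarity: training would push this up for a matching
# image/sound pair and down for a mismatched one.
print(float(image_vec @ sound_vec))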
Title: Re: How close are we from building a virtual universe?
Post by: Just thinking on 04/07/2021 10:25:20
Does PlayStation count?
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 04/07/2021 13:00:11
Does PlayStation count?
It does help improve the technology and accumulate financial resources for that. Although their main purpose may not be directly related.
Title: Re: How close are we from building a virtual universe?
Post by: Just thinking on 04/07/2021 13:03:48
Although their main purpose may not be directly related.
What about Xbox?
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 04/07/2021 13:12:26
Although their main purpose may not be directly related.
What about Xbox?
Same story.
Title: Re: How close are we from building a virtual universe?
Post by: Just thinking on 04/07/2021 14:09:06
Although their main purpose may not be directly related.
I just thought of something. What if we are already in a virtual universe? Then we will have to try and build a real universe.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 04/07/2021 15:12:21
Although their main purpose may not be directly related.
I just thought of something. What if we are already in a virtual universe? Then we will have to try and build a real universe.
As long as we have no reliable way to prove otherwise, it's better for us to assume that we're living in reality. Descartes' Cogito tells us that our own consciousness is the only self-evident proof of our existence.
Title: Re: How close are we from building a virtual universe?
Post by: Just thinking on 04/07/2021 15:42:30
As long as we have no reliable way to prove otherwise, it's better for us to assume that we're living in reality. Descartes' Cogito tells us that our own consciousness is the only self-evident proof of our existence.
I think it is safe to assume that our consciousness is merely a circuit board plugged into the motherboard, programmed to make some decisions inside the virtual-reality life that we only virtually think we have. I could be wrong, but if I am, then that would be a fault in the electronics of the virtual reality machine, e.g. when I get a headache, this can be due to computer overload.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 05/07/2021 05:40:06
I think it is safe to assume that our consciousness is merely a circuit board plugged into the motherboard, programmed to make some decisions inside the virtual-reality life that we only virtually think we have. I could be wrong, but if I am, then that would be a fault in the electronics of the virtual reality machine, e.g. when I get a headache, this can be due to computer overload.
I don't think that you are safe thinking that way. Imagine you are a bit drunk on your bed, staring out of your window. You see an asteroid flying right in your direction. You're not sure if it's real, or you're just dreaming, or you're living in a simulation. There's apparently not enough time to determine which one is true.

The best bet is to assume that it's real, and get out as fast as you can. Even if you're wrong, the result would be less detrimental than assuming otherwise.
Title: Re: How close are we from building a virtual universe?
Post by: Just thinking on 05/07/2021 06:08:08
I don't think that you are safe thinking that way.
I think that if an asteroid were to collide with the Earth, that would be proof of a very evil computer programmer in our virtual universe. This would be like the devil in a real universe.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 05/07/2021 07:46:29
I don't think that you are safe thinking that way.
I think that if an asteroid were to collide with the Earth, that would be proof of a very evil computer programmer in our virtual universe. This would be like the devil in a real universe.
In my previous example I was thinking about a small asteroid capable of destroying a house.
A virtual universe, or even a nested virtual universe, must eventually be built upon a real universe. It's impossible for a virtual universe to exist when no real universe is there.
Whatever is done in a virtual universe can't be said to be evil or good until it has some effect in the real universe.
Title: Re: How close are we from building a virtual universe?
Post by: Just thinking on 05/07/2021 08:08:21
Whatever is done in a virtual universe can't be said to be evil or good until it has some effect in the real universe.
So what you're saying is that an incoming asteroid can leave the virtual universe and collide with the real universe, or at least with a house in the real universe. This is the 'some effect' that you say may happen. This would be a very dangerous computer simulator; we had better warn the pilots who are using flight simulators, as it could turn out to be a real crash as they train in their simulators.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 05/07/2021 11:15:44
Whatever is done in a virtual universe can't be said to be evil or good until it has some effect in the real universe.
So what you're saying is that an incoming asteroid can leave the virtual universe and collide with the real universe, or at least with a house in the real universe. This is the 'some effect' that you say may happen. This would be a very dangerous computer simulator; we had better warn the pilots who are using flight simulators, as it could turn out to be a real crash as they train in their simulators.
If your flight simulator contains bugs that make trainee pilots react differently from how they should in real life, then those bugs in the virtual universe are indeed dangerous.
Title: Re: How close are we from building a virtual universe?
Post by: Just thinking on 05/07/2021 11:32:26
If your flight simulator contains bugs that make trainee pilots react differently from how they should in real life, then those bugs in the virtual universe are indeed dangerous.
I see what you're saying, but the flight simulator could be dangerous anyway, as the captain could spill hot coffee on his lap, or even worse. He will learn not to do that in the real universe.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 05/07/2021 12:22:48
You can kill thousands of people in GTA or Total War without being evil in real life.
Title: Re: How close are we from building a virtual universe?
Post by: Just thinking on 05/07/2021 12:37:34
You can kill thousands of people in GTA or Total War without being evil in real life.
I don't like violent games; they incite violence in the real universe. But I do get your point, thank you.
Title: Re: How close are we from building a virtual universe?
Post by: Just thinking on 05/07/2021 12:48:24
The level of detail can vary, depending on the significance of the object. In Google Earth, big cities might be zoomed to less than 1 meter per pixel, while deserts or oceans have much coarser detail.
We need better detail in the virtual world, let's say 20 megapixels for each and every atom.
Title: Re: How close are we from building a virtual universe?
Post by: Just thinking on 05/07/2021 13:30:05
Is it possible to build a virtual universe?
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 05/07/2021 14:56:36
The level of detail can vary, depending on the significance of the object. In Google Earth, big cities might be zoomed to less than 1 meter per pixel, while deserts or oceans have much coarser detail.
We need better detail in the virtual world, let's say 20 megapixels for each and every atom.
Any scalable virtual universe must be built as vectors or tensors instead of pixels, especially when it's multidimensional.
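A toy illustration of the difference (my own sketch; no real engine implied): a shape stored as coordinates can be zoomed exactly, while a pixel raster fixes its resolution up front.

Code:
import numpy as np

# Vector form: a triangle stored as coordinates (a 3x2 array).
triangle = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, 1.0]])

def zoom(vertices, factor):
    """Zooming a vector shape is exact: just scale the coordinates."""
    return vertices * factor

print(zoom(triangle, 1000.0))            # still exact at any magnification

# Raster form: the same triangle sampled onto an 8x8 pixel grid.
grid = np.zeros((8, 8), dtype=bool)
for y in range(8):
    for x in range(8):
        px, py = (x + 0.5) / 8, (y + 0.5) / 8
        grid[y, x] = py <= 2 * min(px, 1 - px)   # crude inside-triangle test

# Zooming the raster can only interpolate these 64 stored samples;
# any detail below one pixel is lost for good.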
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 05/07/2021 15:05:06
Is it possible to build a virtual universe?
We know there are some efforts already in progress in that direction, but they are all still partial and mostly independent of one another.
Title: Re: How close are we from building a virtual universe?
Post by: Just thinking on 05/07/2021 15:29:03
We know there are some efforts already in progress in that direction, but they are all still partial and mostly independent of one another.
I hope it's not too expensive to jump in once they get it up and running. They used to charge 20 cents for a go on Space Invaders at the arcade centre.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 06/07/2021 19:47:02
We know there are some efforts already in progress in that direction, but they are all still partial and mostly independent of one another.
I hope it's not too expensive to jump in once they get it up and running. They used to charge 20 cents for a go on Space Invaders at the arcade centre.
What I meant was not a world simulation like in the movie The Matrix. These efforts are more mundane and narrow in purpose, such as Google Earth, climate simulations, AlphaFold, Tesla's Dojo and vertical integration, Microsoft Flight Simulator, SAP ERP, the Chinese government's surveillance system, Estonia's digital governance, financial/banking systems, cryptocurrency, virtual machines to manage workstations, etc. They try to represent some aspects of objective reality for easier access, so information can be extracted, aggregated and managed, and to help with the decision-making process.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 06/07/2021 19:49:47
"Exclusive Q&A: Neuralink’s Quest to Beat the Speed of Type - IEEE Spectrum" https://spectrum.ieee.org/tech-talk/biomedical/bionics/exclusive-neuralinks-goal-of-bestinworld-bmi
Quote
Elon Musk’s brain tech company, Neuralink, is subject to rampant speculation and misunderstanding. Just start a Google search with the phrase “can Neuralink...” and you’ll see the questions that are commonly asked, which include “can Neuralink cure depression?” and “can Neuralink control you?” Musk hasn’t helped ground the company’s reputation in reality with his public statements, including his claim that the Neuralink device will one day enable “AI symbiosis” in which human brains will merge with artificial intelligence.

It’s all somewhat absurd, because the Neuralink brain implant is still an experimental device that hasn’t yet gotten approval for even the most basic clinical safety trial.

But behind the showmanship and hyperbole, the fact remains that Neuralink is staffed by serious scientists and engineers doing interesting research. The fully implantable brain-machine interface (BMI) they’ve been developing is advancing the field with its super-thin neural “threads” that can snake through brain tissue to pick up signals and its custom chips and electronics that can process data from more than 1000 electrodes.
Quote
IEEE Spectrum: Elon Musk often talks about the far-future possibilities of Neuralink; a future in which everyday people could get voluntary brain surgery and have Links implanted to augment their capabilities. But whom is the product for in the near term?

Joseph O’Doherty: We’re working on a communication prosthesis that would give back keyboard and mouse control to individuals with paralysis. We’re pushing towards an able-bodied typing rate, which is obviously a tall order. But that’s the goal.

We have a very capable device and we’re aware of the various algorithmic techniques that have been used by others. So we can apply best practices engineering to tighten up all the aspects. What it takes to make the BMI is a good recording device, but also real attention to detail in the decoder, because it’s a closed-loop system. You need to have attention to that closed-loop aspect of it for it to be really high performance.

We have an internal goal of trying to beat the world record in terms of information rate from the BMI. We’re extremely close to exceeding what, as far as we know, is the best performance. And then there’s an open question: How much further beyond that can we go?

My team and I are trying to meet that goal and beat the world record. We’ll either nail down what we can, or, if we can’t, figure out why not, and how to make the device better.
Title: Re: How close are we from building a virtual universe?
Post by: Just thinking on 07/07/2021 00:10:09
Thank you, my friend, that is very interesting information. I think medical science and IT are making great progress; we will have to see what the future holds.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 07/07/2021 05:20:27
https://venturebeat.com/2021/07/05/the-future-of-deep-learning-according-to-its-pioneers/
Quote
In their paper, Yoshua Bengio, Geoffrey Hinton, and Yann LeCun, recipients of the 2018 Turing Award, explain the current challenges of deep learning and how it differs from learning in humans and animals. They also explore recent advances in the field that might provide blueprints for the future directions for research in deep learning.

Titled “Deep Learning for AI,” the paper envisions a future in which deep learning models can learn with little or no help from humans, are flexible to changes in their environment, and can solve a wide range of reflexive and cognitive problems.

Quote
In their paper, Bengio, Hinton, and LeCun acknowledge these shortcomings. “Supervised learning, while successful in a wide variety of tasks, typically requires a large amount of human-labeled data. Similarly, when reinforcement learning is based only on rewards, it requires a very large number of interactions,” they write.
Title: Re: How close are we from building a virtual universe?
Post by: Just thinking on 07/07/2021 14:21:48
If a virtual universe is ever up and running, how will people be able to interact with this technology? Will it be through an electrically operated head-worn attachment and eyewear that allow us to navigate and communicate throughout the virtual universe?
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 08/07/2021 10:51:06

Moore's Law is dead, right? Not if we can get working photonic computers.

Lightmatter is building a photonic computer for the biggest growth area in computing right now, and according to CEO Nick Harris, it can be ordered now and will ship at the end of this year. It's already much faster than traditional electronic computers at neural nets, machine learning for language processing, and AI for self-driving cars.

It's the world's first general purpose photonic AI accelerator, and with light multiplexing, using up to 64 different colors of light simultaneously, there's a long path of speed improvements ahead.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 08/07/2021 10:59:41
If a virtual universe is ever up and running, how will people be able to interact with this technology? Will it be through an electrically operated head-worn attachment and eyewear that allow us to navigate and communicate throughout the virtual universe?
At first the interface would likely be similar to currently existing human-machine interfaces, such as monitor, camera, keyboard, mouse, touchscreen, speaker, microphone, VR and AR. But eventually, as direct brain interfaces get better and more reliable, those devices will slowly be replaced, because their speed limitation will become a communication bottleneck.
Title: Re: How close are we from building a virtual universe?
Post by: Just thinking on 08/07/2021 11:11:13
At first the interface would likely be similar to currently existing human-machine interfaces,
Thank you for the info, hamdani. Looks like good things are on the way. We will be like kids again.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 09/07/2021 05:56:58
At first the interface would likely be similar to currently existing human-machine interfaces,
Thank you for the info, hamdani. Looks like good things are on the way. We will be like kids again.
Some parts of the virtual universe would be intended to represent objective reality as it is, as accurately and precisely as possible. The other parts would try to simulate, as much as possible, the consequences of our decisions, to try to achieve the best-case and avoid the worst-case scenario. It's similar to the mind of chess players, who memorize the current position while figuring out their possible next moves and their opponents' replies.
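The chess analogy can be made concrete with a few lines of lookahead search (a toy minimax sketch of my own; the 'game', moves and scores are made up):

Code:
def minimax(state, depth, maximizing, moves, apply, score):
    """Search `depth` plies ahead in a simulated copy of the state."""
    options = moves(state)
    if depth == 0 or not options:
        return score(state), None
    best_move = None
    best = float("-inf") if maximizing else float("inf")
    for m in options:
        value, _ = minimax(apply(state, m), depth - 1, not maximizing,
                           moves, apply, score)
        if (maximizing and value > best) or (not maximizing and value < best):
            best, best_move = value, m
    return best, best_move

# Toy game: the state is a number; each player in turn adds 1, 2 or 3;
# a high final number is good for us, low is good for the opponent.
value, move = minimax(
    0, depth=4, maximizing=True,
    moves=lambda s: [1, 2, 3] if s < 10 else [],
    apply=lambda s, m: s + m,
    score=lambda s: s,
)
print(value, move)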
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 09/07/2021 09:44:01
Quote
Scientists have made great progress in decoding thoughts with artificial intelligence. In this video I summarize the most exciting recent developments.

The first paper about inferring the meaning of nouns is:

Mitchell et al.
"Predicting Human Brain Activity Associated with the Meanings of Nouns"
Science 320, 1191-1195 (2008)
https://science.sciencemag.org/content/320/5880/1191

The paper about extracting speech from brain readings is:

Anumanchipalli, Chartier, & Chang
"Speech synthesis from neural decoding of spoken sentences"
Nature 568, 493–498 (2019)
https://www.nature.com/articles/s41586-019-1119-1?fbclid=IwAR0yFax5f_drEkQwOImIWKwCE-xdglWzL8NJv2UN22vjGGh4cMxNqewWVSo

There are more examples of the reconstructed sentences here:

https://www.ucsf.edu/news/2019/04/414296/synthetic-speech-generated-brain-recordings

The paper about extracting images from brain readings is:
Shen et al.
PLoS Comput Biol 15(1): e1006633 (2019)
https://journals.plos.org/ploscompbiol/article?id=10.1371%2Fjournal.pcbi.1006633

And the brain to text paper using handwriting is:

Willett et al.
High-performance brain-to-text communication via handwriting
Nature 593, 249–254 (2021)
https://www.nature.com/articles/s41586-021-03506-2

0:00 Intro
0:33 How to measure brain activity
2:44 Brain to Word
5:42 Brain to Image
6:30 Brain to Speech
7:25 Brain to Text
8:29 Better ways to measure brain activity
10:20 Sponsor Message
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 12/07/2021 06:47:42
And this video shows how our model of reality can affect our decisions, with consequences that we will face in the future.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 13/07/2021 06:51:03

Is artificial intelligence replacing lawyers and judges? Throwback to Ronny Chieng’s report on how robots are taking over the legal system.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 14/07/2021 10:03:12
Quote
The government is introducing what it terms the 'social credit score scheme' in Hangzhou, China. The system will monitor everything from traffic offenses to how people treat their parents. It is currently being piloted in the eastern provincial capital of Hangzhou but has not yet been fully implemented. The government uses blacklists to limit people's actions or to refuse them access to such programs. The structure could create all sorts of rifts between neighbors, employers, and even mates.

Social feedback results would come in part from 'residential committees' responsible for tracking and documenting people's behavior. Social credit ratings were already rolled out in 2020, and the events of the recent year have only accelerated their widespread adoption. It remains to be seen if the fear of a low score would be enough to alter people's actions outside of limiting travel, regardless of government databases.
For this scenario to be successful and sustainable, the government as well as the people need to understand the universal terminal goal.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 15/07/2021 07:27:43
https://tech.fb.com/bci-milestone-new-research-from-ucsf-with-support-from-facebook-shows-the-potential-of-brain-computer-interfaces-for-restoring-speech-communication/

Quote
TL;DR: Today, we’re excited to celebrate milestone new results published by our UCSF research collaborators in The New England Journal of Medicine, demonstrating the first time someone with severe speech loss has been able to type out what they wanted to say almost instantly, simply by attempting to speak. In other words, UCSF has restored a person’s ability to communicate by decoding brain signals sent from the motor cortex to the vocal tract. This study marks an important milestone for the field of neuroscience, and it concludes Facebook’s years-long collaboration with UCSF’s Chang Lab.

These groundbreaking results show what’s possible — both in clinical settings like Chang Lab, and potentially for non-invasive consumer applications such as the optical BCI we’ve been exploring over the past four years.

To continue fostering optical BCI explorations across the field, we want to take this opportunity to open source our BCI software and share our head-mounted hardware prototypes with key researchers and other peers to help advance this important work. In the meantime, Facebook Reality Labs will focus on applying BCI concepts to our electromyography (EMG) research to dramatically accelerate wrist-based neural interfaces for intuitive AR/VR input.

The room was full of UCSF scientists and equipment — monitors and cables everywhere. But his eyes were fixed on a single screen displaying two simple words: “Good morning!”

Though unable to speak, he attempted to respond, and the word “Hello” appeared.

The screen went black, replaced by another conversational prompt: “How are you today?”

This time, he attempted to say, “I am very good,” and once again, the words appeared on the screen.

A simple conversation, yet it amounted to a significant milestone in the field of neuroscience. More importantly, it was the first time in over 16 years that he’d been able to communicate without having to use a cumbersome head-mounted apparatus to type out what he wanted to say, after experiencing near full paralysis of his limbs and vocal tract following a series of strokes. Now he simply had to attempt speaking, and a computer could share those words in real time — no typing required.
Title: Re: How close are we from building a virtual universe?
Post by: Just thinking on 15/07/2021 12:29:22
This time, he attempted to say, “I am very good,” and once again, the words appeared on the screen.
That is amazing, outright mind-reading in action.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 16/07/2021 01:21:35
That is amazing, outright mind-reading in action.
When the technology is refined, it could revolutionize our communication. The entire conversation in this thread could be finished in just a few seconds.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 18/07/2021 01:28:39
https://scitechdaily.com/the-virus-trap-hollow-nano-objects-made-of-dna-could-trap-viruses-and-render-them-harmless/
Quote
To date, there are no effective antidotes against most virus infections. An interdisciplinary research team at the Technical University of Munich (TUM) has now developed a new approach: they engulf and neutralize viruses with nano-capsules tailored from genetic material using the DNA origami method. The strategy has already been tested against hepatitis and adeno-associated viruses in cell cultures. It may also prove successful against coronaviruses.

There are antibiotics against dangerous bacteria, but few antidotes to treat acute viral infections. Some infections can be prevented by vaccination but developing new vaccines is a long and laborious process.

Now an interdisciplinary research team from the Technical University of Munich, the Helmholtz Zentrum München, and the Brandeis University (USA) is proposing a novel strategy for the treatment of acute viral infections: The team has developed nanostructures made of DNA, the substance that makes up our genetic material, that can trap viruses and render them harmless.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 19/07/2021 07:59:16
Why “probability of 0” does not mean “impossible” | Probabilities of probabilities, part 2

Quote
Curious about measure theory?  This does require some background in real analysis, but if you want to dig in, here is a textbook by the always great Terence Tao.
https://terrytao.files.wordpress.com/...

Also, for the real analysis buffs among you, there was one statement I made in this video that is a rather nice puzzle. Namely, if the probabilities for each value in a given range (of the real number line) are all non-zero, no matter how small, their sum will be infinite. This isn't immediately obvious, given that you can have convergent sums of countably infinitely many values, but if you're up for it, see if you can prove that the sum of any uncountably infinite collection of positive values must blow up to infinity.
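Since the quote poses the puzzle without the answer, here is the standard argument (my sketch, not from the video). Suppose a_x > 0 for every x in an uncountable set S, and let S_n = {x in S : a_x > 1/n}. Then S is the union of S_1, S_2, S_3, ... If every S_n were finite, S would be a countable union of finite sets and hence countable, a contradiction. So some S_N is infinite, and the sum of the a_x over S is at least the sum over S_N, which exceeds 1/N added infinitely many times; hence it diverges to infinity.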
Title: Re: How close are we from building a virtual universe?
Post by: Just thinking on 20/07/2021 18:54:58
Also, for the real analysis buffs among you, there was one statement I made in this video that is a rather nice puzzle. Namely, if the probabilities for each value in a given range (of the real number line) are all non-zero, no matter how small, their sum will be infinite. This isn't immediately obvious, given that you can have convergent sums of countably infinitely many values, but if you're up for it, see if you can prove that the sum of any uncountably infinite collection of positive values must blow up to infinity.
I found the video very difficult to understand, as my brain is not wired for this logic. I can understand simple statistics and likelihoods, as with the coin flip. My way of seeing this is that the likelihood of the coin landing on the same side 10 times is 1 in 1,024, or the likelihood of the coin landing 5 up and 5 down is 50/50. The likelihood is a simple statistical chance and is by no means a constant.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 21/07/2021 08:35:59
Also, for the real analysis buffs among you, there was one statement I made in this video that is a rather nice puzzle. Namely, if the probabilities for each value in a given range (of the real number line) are all non-zero, no matter how small, their sum will be infinite. This isn't immediately obvious, given that you can have convergent sums of countably infinitely many values, but if you're up for it, see if you can prove that the sum of any uncountably infinite collection of positive values must blow up to infinity.
I found the video very difficult to understand, as my brain is not wired for this logic. I can understand simple statistics and likelihoods, as with the coin flip. My way of seeing this is that the likelihood of the coin landing on the same side 10 times is 1 in 1,024, or the likelihood of the coin landing 5 up and 5 down is 50/50. The likelihood is a simple statistical chance and is by no means a constant.
Try this.
https://www.omnicalculator.com/statistics/coin-flip-probability
(https://www.thenakedscientists.com/forum/index.php?action=dlattach;topic=77747.0;attach=32208)
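For reference, the arithmetic behind such a calculator fits in a few lines (a quick sketch of standard binomial counting, not the site's own code). Note that exactly 5 up and 5 down comes out near 24.6%, not 50/50:

Code:
from math import comb

n = 10
total = 2 ** n                     # 1024 equally likely sequences of flips

p_specific_side = 1 / total       # e.g. heads all 10 times: 1/1024
p_same_side = 2 / total           # all heads OR all tails: 1/512
p_five_five = comb(n, 5) / total  # exactly 5 up and 5 down: 252/1024

print(p_specific_side, p_same_side, round(p_five_five, 3))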
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 23/07/2021 05:56:12
https://scitechdaily.com/deepmind-releases-accurate-picture-of-the-human-proteome-the-most-significant-contribution-ai-has-made-to-advancing-scientific-knowledge-to-date/
Quote
DeepMind and EMBL release the most complete database of predicted 3D structures of human proteins.

Partners use AlphaFold, the AI system recognized last year as a solution to the protein structure prediction problem, to release more than 350,000 protein structure predictions including the entire human proteome to the scientific community.

DeepMind today announced its partnership with the European Molecular Biology Laboratory (EMBL), Europe’s flagship laboratory for the life sciences, to make the most complete and accurate database yet of predicted protein structure models for the human proteome. This will cover all ~20,000 proteins expressed by the human genome, and the data will be freely and openly available to the scientific community. The database and artificial intelligence system provide structural biologists with powerful new tools for examining a protein’s three-dimensional structure, and offer a treasure trove of data that could unlock future advances and herald a new era for AI-enabled biology.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 24/07/2021 06:16:59
https://www.bbc.co.uk/news/technology-57942909
Quote
Mark Zuckerberg has laid out his vision to transform Facebook from a social media network into a “metaverse company” in the next five years.

A metaverse is an online world where people can game, work and communicate in a virtual environment, often using VR headsets.

The Facebook CEO described it as “an embodied internet where instead of just viewing content - you are in it”.
It looks like it's closer than many of us think.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 26/07/2021 12:59:23
https://neurosciencenews.com/aging-junk-dna-18975/
Potential Role of ‘Junk DNA’ Sequence in Aging and Cancer Identified
Quote
Summary: VNTR2-1, a recently identified region of DNA, appears to drive the activity of the telomerase gene. The telomerase gene has previously been found to prevent aging in specific cells.

Source: Washington State University
Quote
The telomerase gene controls the activity of the telomerase enzyme, which helps produce telomeres, the caps at the end of each strand of DNA that protect the chromosomes within our cells. In normal cells, the length of telomeres gets a little bit shorter every time cells duplicate their DNA before they divide. When telomeres get too short, cells can no longer reproduce, causing them to age and die.

However, in certain cell types–including reproductive cells and cancer cells–the activity of the telomerase gene ensures that telomeres are reset to the same length when DNA is copied. This is essentially what restarts the aging clock in new offspring but is also the reason why cancer cells can continue to multiply and form tumors.

Knowing how the telomerase gene is regulated and activated and why it is only active in certain types of cells could someday be the key to understanding how humans age, as well as how to stop the spread of cancer. That is why Zhu has focused the past 20 years of his career as a scientist solely on the study of this gene.

Zhu said that his team’s latest finding that VNTR2-1 helps to drive the activity of the telomerase gene is especially notable because of the type of DNA sequence it represents.

“Almost 50% of our genome consists of repetitive DNA that does not code for protein,” Zhu said. “These DNA sequences tend to be considered as ‘junk DNA’ or dark matters in our genome, and they are difficult to study. Our study describes that one of those units actually has a function in that it enhances the activity of the telomerase gene.”

Their finding is based on a series of experiments that found that deleting the DNA sequence from cancer cells–both in a human cell line and in mice–caused telomeres to shorten, cells to age, and tumors to stop growing.