Naked Science Forum

On the Lighter Side => New Theories => Topic started by: hamdani yusuf on 21/09/2019 09:50:36

Title: How close are we from building a virtual universe?
Post by: hamdani yusuf on 21/09/2019 09:50:36
This thread is another spinoff from my earlier thread called universal utopia. This time I try to attack the problem from another angle, namely the information-theoretic point of view.
I started another thread related to this subject asking about the quantification of accuracy and precision. We need that in order to compare the available methods for describing some aspect of objective reality, and to choose the best option based on cost-benefit considerations. I thought this was already common knowledge, but the course of the discussion showed it wasn't. I guess I'll have to build a new theory for that. It's unfortunate that the thread has been removed, so new forum members can't explore how the discussion developed.
In my professional job, I deal with process control and automation, and with the engineering and maintenance of electrical and instrumentation systems. It's important for us to explore leading technologies and use them to our advantage to survive the fierce industrial competition of Industry 4.0. One of the technologies most closely related to this thread is the digital twin.
https://en.m.wikipedia.org/wiki/Digital_twin

Just like my other spinoff discussing universal morality, which can be reached by expanding the groups that develop their own subjective morality to the maximum extent permitted by logic, here I also try to expand the scope of the virtualization of real-world objects, like the digital twin in the industrial sector, to cover other fields as well. Hopefully it will lead us to global governance, because all conscious beings known today share the same planet. In the future the scope will need to expand even further, because the exploration of other planets and solar systems is already under way.

Title: Re: How close are we from building a virtual universe?
Post by: jeffreyH on 21/09/2019 11:06:32
How detailed should the virtual universe be? Does it only include the observable universe? Depending upon the detail and scale it could require more information to describe it than the universe actually contains.

A better model would study a well defined region of the universe such as a galaxy cluster. However, this would still depend upon the level of detail.
Title: Re: How close are we from building a virtual universe?
Post by: evan_au on 21/09/2019 22:37:58
There is a definite tradeoff between level of detail, computer power and memory storage.

If you have a goal of studying the general shape of the universe, it is important to have dark matter and normal matter (which clumps into galaxies). But modelling individual stars is not needed.

If you are studying the shape of the galaxy, you don't need to model the lifecycle of the individual stars.

If you are studying the orbits of the planets around the Sun, you don't need to model whether or not Earth hosts life.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 22/09/2019 04:12:47
Building a virtual universe is just an instrumental goal, meant to increase the chance of achieving the terminal goal by improving the effectiveness and efficiency of our actions. I discussed these goals in another thread about universal morality. Here I want to focus on the technical issues.
Efforts to virtualize objective reality have already started, but currently they are mostly partial, either by location or by function. Some examples are Google Maps, ERP software, online banking, e-commerce, Wikipedia, cryptocurrency, CAD software, SCADA, and social media.
Their lack of integration may lead to data duplication when different systems represent the same object from different points of view. When they are not updated simultaneously, they will produce inconsistencies, which may lead to incorrect decision making.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 22/09/2019 04:23:45
The level of detail can vary, depending on the significance of the object. In Google Earth, big cities might be zoomable to less than 1 meter per pixel, while deserts or oceans have much coarser detail.
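As a rough illustration of this idea (not Google Earth's actual scheme; the significance score and resolution bounds are hypothetical), a mapping system could assign tile resolution from an estimated significance of the region:

```python
# Toy sketch: map a "significance" score in [0, 1] to a tile resolution,
# so significant regions (cities) get fine detail and insignificant ones
# (open ocean) get coarse detail. The 0.5 m and 500 m bounds are
# illustrative assumptions, not any real system's parameters.

def metres_per_pixel(significance: float) -> float:
    finest, coarsest = 0.5, 500.0
    # Interpolate geometrically, so each equal step in significance
    # scales the resolution by a constant factor.
    return coarsest * (finest / coarsest) ** significance

print(metres_per_pixel(1.0))   # city centre: 0.5 m/pixel
print(metres_per_pixel(0.0))   # open ocean: 500.0 m/pixel
```

The geometric interpolation mirrors how map zoom levels typically halve the metres-per-pixel at each step rather than subtracting a fixed amount.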
Title: Re: How close are we from building a virtual universe?
Post by: jeffreyH on 22/09/2019 13:56:17
Building a virtual universe is just an instrumental goal, meant to increase the chance of achieving the terminal goal by improving the effectiveness and efficiency of our actions. I discussed these goals in another thread about universal morality. Here I want to focus on the technical issues.
Efforts to virtualize objective reality have already started, but currently they are mostly partial, either by location or by function. Some examples are Google Maps, ERP software, online banking, e-commerce, Wikipedia, cryptocurrency, CAD software, SCADA.
Their lack of integration may lead to data duplication when different systems represent the same object from different points of view. When they are not updated simultaneously, they will produce inconsistencies, which may lead to incorrect decision making.

You are talking about disparate systems. They are also human-centric and not universe-centric.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 23/09/2019 04:34:54
Building a virtual universe is just an instrumental goal, meant to increase the chance of achieving the terminal goal by improving the effectiveness and efficiency of our actions. I discussed these goals in another thread about universal morality. Here I want to focus on the technical issues.
Efforts to virtualize objective reality have already started, but currently they are mostly partial, either by location or by function. Some examples are Google Maps, ERP software, online banking, e-commerce, Wikipedia, cryptocurrency, CAD software, SCADA.
Their lack of integration may lead to data duplication when different systems represent the same object from different points of view. When they are not updated simultaneously, they will produce inconsistencies, which may lead to incorrect decision making.

You are talking about disparate systems. They are also human-centric and not universe-centric.
They are disparate now, but there are already efforts to integrate them. Some ERP systems have been connected to plant information management systems, which in turn can be connected to SCADA, DCS, PLCs, and even smart field devices such as transmitters, control valve positioners, and variable speed drives.
What we need is a common platform that stores this information in the same or a compatible format, so that an update in one subsystem is automatically propagated to related subsystems to guarantee data integrity. The common platform must also take care of user accountability and data accessibility.
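A minimal sketch of that common-platform idea (all names here are hypothetical, not any vendor's API): one canonical record per physical object, with every subscribed subsystem notified on each change, and the changing user recorded for accountability:

```python
# Sketch of a "common platform": a single registry holds the canonical
# record for each object; subsystems (ERP, SCADA, ...) subscribe and are
# notified on every update, so their views never diverge.

from typing import Any, Callable, Dict, List

class CommonPlatform:
    def __init__(self) -> None:
        self._records: Dict[str, Dict[str, Any]] = {}
        self._subscribers: Dict[str, List[Callable]] = {}

    def subscribe(self, object_id: str, callback: Callable) -> None:
        self._subscribers.setdefault(object_id, []).append(callback)

    def update(self, object_id: str, field: str, value: Any, user: str) -> None:
        # Single point of update: guarantees consistency across views
        # and records who made the change (user accountability).
        record = self._records.setdefault(object_id, {})
        record[field] = value
        record["_last_changed_by"] = user
        for cb in self._subscribers.get(object_id, []):
            cb(object_id, field, value)

# Two "subsystems" mirroring the same valve record:
erp_view: Dict[str, Any] = {}
scada_view: Dict[str, Any] = {}
platform = CommonPlatform()
platform.subscribe("valve-101", lambda o, f, v: erp_view.update({f: v}))
platform.subscribe("valve-101", lambda o, f, v: scada_view.update({f: v}))
platform.update("valve-101", "setpoint", 42.0, user="engineer1")
assert erp_view["setpoint"] == scada_view["setpoint"] == 42.0
```

In a real deployment the callbacks would be message-bus publications (e.g. OPC UA or MQTT), but the invariant is the same: one write path, many consistent views.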
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 24/09/2019 11:20:57
Building a virtualization of objective reality at high precision takes a lot of resources in the form of data storage and communication bandwidth. Hence the system needs to maximize information density.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 10/10/2019 14:20:30
My basic idea for building a virtual universe is to represent physical objects as nodes, which are then organized in a hierarchical structure. It is like the Unix principle where everything is a file; here, everything is a node.
To address the is-ought problem, another hierarchical structure is created to represent desired/designed conditions.
A relationship table is created to record the assignment of physical objects to designed objects. It also stores additional relationship types between them where necessary. Further relationship tables are added to capture relationships among nodes outside the main hierarchical structures.
Another hierarchical structure is created to represent activities/events, which are basically any changes to nodes in the hierarchical structures of physical and desired objects. The activity nodes carry timestamps for start and finish.
I have built a prototype of this system based on a DCS configuration database, which was then expanded to accommodate other things beyond I/O assignments, the physical network, and control strategies.
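The structure described above can be sketched as follows (a toy illustration, not the actual prototype; all entity names are invented):

```python
# "Everything is a node": two hierarchies (physical and designed objects)
# plus a relationship table assigning one to the other.

from dataclasses import dataclass, field
from typing import List, Optional, Tuple

@dataclass
class Node:
    name: str
    parent: Optional["Node"] = None
    children: List["Node"] = field(default_factory=list)

    def add_child(self, name: str) -> "Node":
        child = Node(name, parent=self)
        self.children.append(child)
        return child

    def path(self) -> str:
        # Unix-like path through the hierarchy, e.g. /plant/unit1/pump-A
        prefix = "" if self.parent is None else self.parent.path()
        return prefix + "/" + self.name

# Physical hierarchy: what actually exists ("is").
plant = Node("plant")
pump = plant.add_child("unit1").add_child("pump-A")

# Designed hierarchy: what should exist ("ought").
design = Node("design")
pump_spec = design.add_child("P&ID-1").add_child("pump-spec-A")

# Relationship table: (physical node, designed node, relationship type).
relationships: List[Tuple[Node, Node, str]] = [(pump, pump_spec, "implements")]

print(pump.path())   # /plant/unit1/pump-A
```

Activity/event nodes with start and finish timestamps would form a third hierarchy in the same style, each event pointing at the nodes it changed.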
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 23/01/2020 07:10:31
The universe as we know it is a dynamic system, which means it changes over time. So, for a virtual universe to be useful, it also needs to be dynamic. Static representations such as paper maps or ancient cave paintings can only serve narrow purposes.
Title: Re: How close are we from building a virtual universe?
Post by: Bored chemist on 23/01/2020 07:16:04
There is a virtual universe in your head.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 23/01/2020 07:21:49
Building a virtualization of objective reality at high precision takes a lot of resources in the form of data storage and communication bandwidth. Hence the system needs to maximize information density.
Here is an interesting excerpt from Ray Kurzweil's book "The Singularity Is Near" regarding order and complexity, which are closely related to information density.
Quote
Not surprisingly, the concept of complexity is complex. One concept of complexity is the minimum amount of information required to represent a process. Let's say you have a design for a system (for example, a computer program or a computer-assisted design file for a computer), which can be described by a data file containing one million bits. We could say your design has a complexity of one million bits. But suppose we notice that the one million bits actually consist of a pattern of one thousand bits that is repeated one thousand times. We could note the repetitions, remove the repeated patterns, and express the entire design in just over one thousand bits, thereby reducing the size of the file by a factor of about one thousand.

The most popular data-compression techniques use similar methods of finding redundancy within information. But after you've compressed a data file in this way, can you be absolutely certain that there are no other rules or methods that might be discovered that would enable you to express the file in even more compact terms? For example, suppose my file was simply "pi" (3.1415...) expressed to one million bits of precision. Most data-compression programs would fail to recognize this sequence and would not compress the million bits at all, since the bits in a binary expression of pi are effectively random and thus have no repeated pattern according to all tests of randomness.

But if we can determine that the file (or a portion of the file) in fact represents pi, we can easily express it (or that portion of it) very compactly as "pi to one million bits of accuracy." Since we can never be sure that we have not overlooked some even more compact representation of an information sequence, any amount of compression sets only an upper bound for the complexity of the information. Murray Gell-Mann provides one definition of complexity along these lines. He defines the "algorithmic information content" (AIC) of a set of information as "the length of the shortest program that will cause a standard universal computer to print out the string of bits and then halt."

However, Gell-Mann's concept is not fully adequate. If we have a file with random information, it cannot be compressed. That observation is, in fact, a key criterion for determining if a sequence of numbers is truly random. However, if any random sequence will do for a particular design, then this information can be characterized by a simple instruction, such as "put random sequence of numbers here." So the random sequence, whether it's ten bits or one billion bits, does not represent a significant amount of complexity, because it is characterized by a simple instruction. This is the difference between a random sequence and an unpredictable sequence of information that has purpose.

To gain some further insight into the nature of complexity, consider the complexity of a rock. If we were to characterize all of the properties (precise location, angular momentum, spin, velocity, and so on) of every atom in the rock, we would have a vast amount of information. A one-kilogram (2.2-pound) rock has 10^25 atoms which, as I will discuss in the next chapter, can hold up to 10^27 bits of information. That's one hundred million billion times more information than the genetic code of a human (even without compressing the genetic code). But for most common purposes, the bulk of this information is largely random and of little consequence. So we can characterize the rock for most purposes with far less information just by specifying its shape and the type of material of which it is made. Thus, it is reasonable to consider the complexity of an ordinary rock to be far less than that of a human even though the rock theoretically contains vast amounts of information.

One concept of complexity is the minimum amount of meaningful, non-random, but unpredictable information needed to characterize a system or process.

In Gell-Mann's concept, the AIC of a million-bit random string would be about a million bits long. So I am adding to Gell-Mann's AIC concept the idea of replacing each random string with a simple instruction to "put random bits here."

However, even this is not sufficient. Another issue is raised by strings of arbitrary data, such as names and phone numbers in a phone book, or periodic measurements of radiation levels or temperature. Such data is not random, and data-compression methods will only succeed in reducing it to a small degree. Yet it does not represent complexity as that term is generally understood. It is just data. So we need another simple instruction to "put arbitrary data sequence here."

To summarize my proposed measure of the complexity of a set of information, we first consider its AIC as Gell-Mann has defined it. We then replace each random string with a simple instruction to insert a random string. We then do the same for arbitrary data strings. Now we have a measure of complexity that reasonably matches our intuition.
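Kurzweil's point that compression only sets an upper bound on complexity is easy to demonstrate with a general-purpose compressor: a repetitive file shrinks by orders of magnitude, while a random one barely shrinks at all, even though "emit a million random bytes" is itself a short description.

```python
# Demonstrate compression as an upper bound on complexity:
# a repeated pattern compresses dramatically, random data does not.

import os
import zlib

repetitive = b"0123456789" * 100_000   # 1,000,000 bytes, 10-byte pattern
random_data = os.urandom(1_000_000)    # 1,000,000 effectively incompressible bytes

small = len(zlib.compress(repetitive, 9))
large = len(zlib.compress(random_data, 9))

# The compressor finds the redundancy in the pattern but cannot exploit
# anything in the random bytes; it would also fail on the digits of pi,
# exactly as the excerpt above describes.
print(small, large)
```

The same experiment on a binary expansion of pi would show `large`-like behaviour, even though a tiny program can generate pi, which is why the true algorithmic information content can be far below any compressor's result.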
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 23/01/2020 07:25:54
There is a virtual universe in your head.
Indeed, but it only covers a small portion of even the currently observable universe, and a lot of the information I once knew has already been lost. To be useful for predicting events in the far future, we need a much larger and more complex system.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 24/01/2020 08:22:35
Regarding the original question, it turns out that Ray Kurzweil has already predicted the answer: around the middle of this century.

Quote
Kurzweil claims that technological progress follows a pattern of exponential growth, following what he calls the "law of accelerating returns". Whenever technology approaches a barrier, Kurzweil writes, new technologies will surmount it. He predicts paradigm shifts will become increasingly common, leading to "technological change so rapid and profound it represents a rupture in the fabric of human history".[39] Kurzweil believes that the singularity will occur by approximately 2045.[40] His predictions differ from Vinge's in that he predicts a gradual ascent to the singularity, rather than Vinge's rapidly self-improving superhuman intelligence.
https://en.wikipedia.org/wiki/Technological_singularity#Accelerating_change
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 03/02/2020 11:02:21
Objective reality contains a lot of objects with complex relationships among them. Hence, to build a virtual universe we must use a method capable of storing data that represents this complex system. The obvious choice is a graph: a mathematical structure used to model pairwise relations between objects. A graph in this context is made up of vertices (also called nodes or points) connected by edges (also called links or lines).

https://en.wikipedia.org/wiki/Graph_theory
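A minimal sketch of that graph idea, with objects as vertices and typed relations as edges stored in an adjacency list (the entities are invented examples, not data from any real system):

```python
# Objects as vertices, typed relations as edges, in an adjacency list.

from collections import defaultdict
from typing import List, Tuple

class ObjectGraph:
    def __init__(self) -> None:
        # vertex -> list of (neighbour, relation type)
        self.edges: defaultdict = defaultdict(list)

    def relate(self, a: str, b: str, relation: str) -> None:
        # Store the edge in both directions so either endpoint
        # can be queried; mark the reverse direction explicitly.
        self.edges[a].append((b, relation))
        self.edges[b].append((a, "inverse:" + relation))

    def neighbours(self, vertex: str) -> List[str]:
        return [n for n, _ in self.edges[vertex]]

g = ObjectGraph()
g.relate("Earth", "Sun", "orbits")
g.relate("Moon", "Earth", "orbits")
print(g.neighbours("Earth"))   # ['Sun', 'Moon']
```

At scale this role is played by graph databases or RDF triple stores, but the underlying model, typed edges between identified nodes, is the same.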
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 28/02/2020 10:02:07
https://www.technologyreview.com/s/615189/what-ai-still-cant-do/
Quote
Artificial intelligence won’t be very smart if computers don’t grasp cause and effect. That’s something even humans have trouble with.

In less than a decade, computers have become extremely good at diagnosing diseases, translating languages, and transcribing speech. They can outplay humans at complicated strategy games, create photorealistic images, and suggest useful replies to your emails.
Yet despite these impressive achievements, artificial intelligence has glaring weaknesses.

Machine-learning systems can be duped or confounded by situations they haven’t seen before. A self-driving car gets flummoxed by a scenario that a human driver could handle easily. An AI system laboriously trained to carry out one task (identifying cats, say) has to be taught all over again to do something else (identifying dogs). In the process, it’s liable to lose some of the expertise it had in the original task. Computer scientists call this problem “catastrophic forgetting.”

These shortcomings have something in common: they exist because AI systems don’t understand causation. They see that some events are associated with other events, but they don’t ascertain which things directly make other things happen. It’s as if you knew that the presence of clouds made rain likelier, but you didn’t know clouds caused rain.
Quote
AI can’t be truly intelligent until it has a rich understanding of cause and effect, which would enable the introspection that is at the core of cognition.
Judea Pearl
A virtual universe can map commonly known cause-and-effect relationships to be used as a library by AI agents, which would save a lot of the time otherwise spent training each new AI agent from scratch.
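As a toy illustration of such a shared cause-and-effect library (the relations are invented common-sense examples, not a real knowledge base), an agent could query transitive effects instead of rediscovering them:

```python
# A shared cause -> effect map that any agent can query, including the
# clouds/rain example from the quoted article. Note the asymmetry:
# wet ground does NOT imply clouds (correlation is not causation).

from typing import Dict, List, Set

causes: Dict[str, List[str]] = {
    "clouds": ["rain"],
    "rain": ["wet ground"],
    "sprinkler": ["wet ground"],
}

def effects_of(cause: str, depth: int = 3) -> Set[str]:
    """Transitively follow cause -> effect links up to `depth` steps."""
    found: Set[str] = set()
    frontier = {cause}
    for _ in range(depth):
        frontier = {e for c in frontier for e in causes.get(c, [])}
        found |= frontier
    return found

print(effects_of("clouds"))        # {'rain', 'wet ground'}
print(effects_of("wet ground"))    # set(): effects don't run backwards
```

A directed structure like this is essentially a stripped-down causal graph in the sense Judea Pearl advocates; a full treatment would add probabilities and intervention semantics.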
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 09/03/2020 07:02:22
To achieve generality, an AI must adapt to a wide range of situations. It would be better to have a modular structure for frequently used basic functions, similar to the organization of naturally occurring brains. It must also have some flexibility over its own hyperparameters, which might need to change for different tasks.
To maintain its own integrity and fight off data corruption or cyber attacks, the AI needs to spend some of its data storage and processing capacity on representing its own structure. This creates a kind of self-awareness, which is a step toward artificial consciousness.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 14/04/2020 09:54:24
The article below has reminded me once again of the importance of having a universal modelling system/platform.
Quote
COBOL, a 60-year-old computer language, is in the COVID-19 spotlight
As state governments seek to fix overwhelmed unemployment benefit systems, they need programmers skilled in a language that was passé by the early 1980s.

Some states have found themselves in need of people who know a 60-year-old programming language called COBOL to retrofit the antiquated government systems now struggling to process the deluge of unemployment claims brought by the coronavirus crisis.

The states of Kansas, New Jersey, and Connecticut all experienced technical meltdowns after a stunning 6.6 million Americans filed for unemployment benefits last week.

They might not have an easy time finding the programmers they need. There just aren’t that many people around these days who know COBOL, or Common Business-Oriented Language. Most universities stopped teaching the language back in the 1980s. COBOL is considered a relic by younger coders.

“There’s really no good reason to learn COBOL today, and there was really no good reason to learn it 20 years ago,” says UCLA computer science professor Peter Reiher. “Most students today wouldn’t have ever even heard of COBOL.”

Meanwhile, because many banks, large companies, and government agencies still use the language in their legacy systems, there’s plenty of demand for COBOL programmers. A search for “COBOL Developer” returned 568 jobs on Indeed.com. COBOL developers make anywhere from $40 to more than $100 per hour.

Kansas governor Laura Kelley said the Kansas Department of Labor was in the process of migrating systems from COBOL to a newer language, but that the effort was postponed by the virus. New Jersey governor Phil Murphy wondered why such an old language was being used on vital state government systems, and classed it with the many weaknesses in government systems the virus has revealed.

The truth is, organizations often hesitate to change those old systems because they still work, and migrating to new systems is expensive. Massive upgrades also involve writing new code, which may contain bugs, Reiher says. In the worst-case scenario, bugs might cause the loss of customer financial data being moved from the old system to the new.
IT STILL WORKS (MOSTLY)
COBOL, though ancient, is still considered stable and reliable—at least under normal conditions.

The current glitches with state unemployment systems are "probably not a specific flaw in the COBOL language or in the underlying implementation," Reiher says. The problem is more likely that some states are asking their computer systems to work with data on a far higher scale, he said, making the systems do things they've never been asked to do.

COBOL was developed in the early 1960s by computer scientists from universities, mainframe manufacturers, the defense and banking industries, and government. Based on ideas developed by programming pioneer Grace Hopper, it was driven by the need for a language that could run on a variety of different kinds of mainframes.

“It was developed to do specific kinds of things like inventory and payroll and accounts receivable,” Reiher told me. “It was widely used in 1960s by a lot of banks and government agencies when they first started automating their systems.”

Here in the 21st century, COBOL is still quietly doing those kinds of things. Millions of lines of COBOL code still run on mainframes used in banks and a number of government agencies, including the Department of Veterans Affairs, Department of Justice, and Social Security Administration. A 2017 Reuters report said 43% of banking systems still use COBOL.

But the move to newer languages such as Java, C, and Python is making its way through industries of all sorts, and will eventually be used in new systems used by banks and government. One key reason for the migration is that mobile platforms use newer languages, and they rely on tight integration with underlying systems to work the way users expect.

The coronavirus will be a catalyst for a lot of changes in the coming years, some good, some bad. The migration away from the programming languages of another era may be one of the good ones.

https://www.fastcompany.com/90488862/what-is-cobol

My previous job as a system integrator gave me first-hand experience with this issue. Most of the projects I handled were migrations from an old or obsolete system to a newer one (mostly DCS). The most obvious advantage of such projects is that there is a system that still works. The challenge was translating the source code of the old system into the new one. When there was no one-to-one correspondence, we needed to use a process control narrative as an intermediary. Oftentimes we couldn't get access to the source code at all, because of the system's age, missing documentation (hardcopies of ladder diagrams, function block diagrams, sequential function charts, proprietary scripts), or corrupted floppy disks. So we had to rely on additional information from the process operators and supervisors about how the system was supposed to work.
On the other hand, for brand-new systems there is no old source code at all, so we have to work from the control narratives provided by the process engineers. There is no guarantee that the system will work as intended; oftentimes we had to make tweaks, adjustments, and even major modifications during commissioning.
If only we had a universal modelling system/platform, we could save a lot of the time and effort needed to finish these projects. System migrations could then be done automatically.
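The universal-platform idea above amounts to a vendor-neutral intermediate representation: describe each interlock once, then generate code for whichever target system is being commissioned. A toy sketch (the interlock, tag names, and both target syntaxes are simplified illustrations, not any vendor's real language):

```python
# One interlock described in a neutral form, with two code generators.

from typing import Dict

interlock: Dict = {
    "name": "pump_trip",
    "condition": ("LT-101", "<", 20.0),   # low-level transmitter
    "action": ("P-101", "stop"),          # stop the pump
}

def to_structured_text(rule: Dict) -> str:
    """Emit an IEC 61131-3 Structured-Text-style rule (simplified)."""
    tag, op, limit = rule["condition"]
    target, action = rule["action"]
    return f"IF {tag} {op} {limit} THEN {target}.{action.upper()}(); END_IF;"

def to_function_block_summary(rule: Dict) -> str:
    """Emit a one-line function-block-diagram summary (illustrative)."""
    tag, op, limit = rule["condition"]
    target, action = rule["action"]
    return f"CMP({tag}, {limit}) -> {action}({target})"

print(to_structured_text(interlock))
print(to_function_block_summary(interlock))
```

With such a representation, migrating a plant would mean re-running the generators against a new target instead of hand-translating ladder logic from faded hardcopies.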
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 28/04/2020 05:29:32
Progress toward better AI, and eventually AGI, will bring us closer to realizing Laplace's demon, which has already been predicted in the form of the technological singularity.
Quote
The better we can predict, the better we can prevent and pre-empt. As you can see, with neural networks, we’re moving towards a world of fewer surprises. Not zero surprises, just marginally fewer. We’re also moving toward a world of smarter agents that combine neural networks with other algorithms like reinforcement learning to attain goals.
https://pathmind.com/wiki/neural-network
Quote
In some circles, neural networks are thought of as “brute force” AI, because they start with a blank slate and hammer their way through to an accurate model. They are effective, but to some eyes inefficient in their approach to modeling, which can’t make assumptions about functional dependencies between output and input.

That said, gradient descent is not recombining every weight with every other to find the best match – its method of pathfinding shrinks the relevant weight space, and therefore the number of updates and required computation, by many orders of magnitude. Moreover, algorithms such as Hinton’s capsule networks require far fewer instances of data to converge on an accurate model; that is, present research has the potential to resolve the brute force nature of deep learning.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 28/07/2020 05:52:53
This Is What Tesla's Autopilot Sees On The Road.

Essentially, it builds a virtual environment in its computer based on input from visual cameras and radar. With more Autopilot cars on the road, much of the data being processed becomes redundant. Sharing that data could be the next step to increase the efficiency of the whole system. It would require an agreed protocol, data structure, and algorithm to interpret the data properly. This brings us one step closer to a virtual universe.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 05/08/2020 11:10:14
We increasingly rely on artificial intelligence to make decisions, but we must be aware of the risks it poses, like those described in the article below.
https://thegradient.pub/shortcuts-neural-networks-love-to-cheat/
Quote
Shortcuts are decision rules that perform well on standard benchmarks but fail to transfer to more challenging testing conditions. Shortcut opportunities come in many flavors and are ubiquitous across datasets and application domains. A few examples are visualized here:
(https://thegradient.pub/content/images/2020/07/image5-5.png)
At a principal level, shortcut learning is not a novel phenomenon: variants are known under different terms such as “learning under covariate shift”, “anti-causal learning”, “dataset bias”, the “tank legend” and the “Clever Hans effect”. We here discuss how shortcut learning unifies many of deep learning’s problems and what we can do to better understand and mitigate shortcut learning.

What is a shortcut?

In machine learning, the solutions that a model can learn are constrained by data, model architecture, optimizer and objective function. However, these constraints often don’t just allow for one single solution: there are typically many different ways to solve a problem. Shortcuts are solutions that perform well on a typical test set but fail under different circumstances, revealing a mismatch with our intentions.
Quote
Shortcut learning beyond deep learning

Often such failures serve as examples for why machine learning algorithms are untrustworthy. However, biological learners suffer from very similar failure modes as well. In an experiment in a lab at the University of Oxford, researchers observed that rats learned to navigate a complex maze apparently based on subtle colour differences - very surprising given that the rat retina has only rudimentary machinery to support at best somewhat crude colour vision. Intensive investigation into this curious finding revealed that the rats had tricked the researchers: They did not use their visual system at all in the experiment and instead simply discriminated the colours by the odour of the colour paint used on the walls of the maze. Once smell was controlled for, the remarkable colour discrimination ability disappeared.

Animals often trick experimenters by solving an experimental paradigm (i.e., dataset) in an unintended way without using the underlying ability one is actually interested in. This highlights how incredibly difficult it can be for humans to imagine solving a tough challenge in any other way than the human way: Surely, at Marr’s implementational level there may be differences between rat and human colour discrimination. But at the algorithmic level there is often a tacit assumption that human-like performance implies human-like strategy (or algorithm). This “same strategy assumption” is paralleled by deep learning: even if DNN units are different from biological neurons, if DNNs successfully recognise objects it seems natural to assume that they are using object shape like humans do. As a consequence, we need to distinguish between performance on a dataset and acquiring an ability, and exercise great care before attributing high-level abilities like “object recognition” or “language understanding” to machines, since there is often a much simpler explanation:

Never attribute to high-level abilities that which can be adequately explained by shortcut learning.
Quote
The consequences of this behaviour are striking failures in generalization. Have a look at the figure below. On the left side there are a few directions in which humans would expect a model to generalize. A five is a five whether it is hand-drawn and black and white or a house number photographed in color. Similarly slight distortions or changes in pose, texture or background don’t influence our prediction about the main object in the image. In contrast a DNN can easily be fooled by all of them. Interestingly this does not mean that DNNs can’t generalize at all: In fact, they generalize perfectly well albeit in directions that hardly make sense to humans. The right side of the figure below shows some examples that range from the somewhat comprehensible - scrambling the image to keep only its texture - to the completely incomprehensible.
(https://thegradient.pub/content/images/2020/07/image1.png)

Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 25/08/2020 10:42:06
This patent by Tesla is a clue that in the future, a virtual universe will be built mostly autonomously by AI.
https://www.tesmanian.com/blogs/tesmanian-blog/tesla-published-a-patent-generating-ground-truth-for-machine-learning-from-time-series-elements
Quote
Deep learning systems used for applications such as autonomous driving are developed by training a machine learning model. Typically, the performance of the deep learning system is limited at least in part by the quality of the training set used to train the model.

In many instances, significant resources are invested in collecting, curating, and annotating the training data. Traditionally, much of the effort to curate a training data set is done manually by reviewing potential training data and properly labeling the features associated with the data.

The effort required to create a training set with accurate labels can be significant and is often tedious. Moreover, it is often difficult to collect and accurately label data that a machine learning model needs improvement on. Therefore, there exists a need to improve the process for generating training data with accurate labeled features.

Tesla published patent 'Generating ground truth for machine learning from time series elements'

Patent filing date: February 1, 2019
Patent Publication Date: August 6, 2020

(https://cdn.shopify.com/s/files/1/0173/8204/7844/files/1_660faf20-c36a-4f67-8e63-11c5d4078119_1024x1024.jpg?v=1596750740)

The patent disclosed a machine learning training technique for generating highly accurate machine learning results. Using data captured by sensors on a vehicle a training data set is created. The sensor data may capture vehicle lane lines, vehicle lanes, other vehicle traffic, obstacles, traffic control signs, etc.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 25/08/2020 10:59:59
Here is a research article about The information catastrophe.
https://aip.scitation.org/doi/10.1063/5.0019941
Quote
Currently, we produce ∼10^21 digital bits of information annually on Earth. Assuming a 20% annual growth rate, we estimate that after ∼350 years from now, the number of bits produced will exceed the number of all atoms on Earth, ∼10^50. After ∼300 years, the power required to sustain this digital production will exceed 18.5 × 10^15 W, i.e., the total planetary power consumption today, and after ∼500 years from now, the digital content will account for more than half Earth’s mass, according to the mass–energy–information equivalence principle. Besides the existing global challenges such as climate, environment, population, food, health, energy, and security, our estimates point to another singular event for our planet, called information catastrophe.

(https://aip.scitation.org/na101/home/literatum/publisher/aip/journals/content/adv/2020/adv.2020.10.issue-8/5.0019941/20200810/images/small/5.0019941.figures.online.f3.gif)
Quote
In conclusion, we established that the incredible growth of digital information production would reach a singularity point when there are more digital bits created than atoms on the planet. At the same time, the digital information production alone will consume most of the planetary power capacity, leading to ethical and environmental concerns already recognized by Floridi who introduced the concept of “infosphere” and considered challenges posed by our digital information society.27 These issues are valid, regardless of the future developments in data storage technologies. In terms of digital data, the mass–energy–information equivalence principle formulated in 2019 has not yet been verified experimentally, but assuming this is correct, then in not the very distant future, most of the planet’s mass will be made up of bits of information. Applying the law of conservation in conjunction with the mass–energy–information equivalence principle, it means that the mass of the planet is unchanged over time. However, our technological progress inverts radically the distribution of the Earth’s matter from predominantly ordinary matter to the fifth form of digital information matter. In this context, assuming the planetary power limitations are solved, one could envisage a future world mostly computer simulated and dominated by digital bits and computer code.
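The ~350-year figure in the quote above can be checked with a quick compound-growth calculation (using the paper's own round numbers: 10^21 bits per year today, 20% annual growth, ~10^50 atoms on Earth):

```python
# Quick check of the growth estimate quoted above (assumptions: 10^21 bits/year
# produced today, 20% annual growth, ~10^50 atoms on Earth).
import math

bits_per_year = 1e21
growth = 1.20
atoms_on_earth = 1e50

# Years until annual bit production exceeds the number of atoms:
# 1e21 * 1.2^n >= 1e50  ->  n >= log10(1e29) / log10(1.2)
years = math.log10(atoms_on_earth / bits_per_year) / math.log10(growth)
print(round(years))  # ~366 years, consistent with the paper's "~350 years"
```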
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 27/08/2020 10:48:14
In this article we can see that data compression and decompression play a central role in learning and modelling, whether they are performed by machines or by biological entities.
https://www.zdnet.com/google-amp/article/what-is-gpt-3-everything-business-needs-to-know-about-openais-breakthrough-ai-language-program/
Quote
When the neural network is being developed, called the training phase, GPT-3 is fed millions and millions of samples of text and it converts words into what are called vectors, numeric representations. That is a form of data compression. The program then tries to unpack this compressed text back into a valid sentence. The task of compressing and decompressing develops the program's accuracy in calculating the conditional probability of words.
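The "conditional probability of words" idea in the quote can be illustrated with a toy bigram model that counts which word follows which (GPT-3 itself uses learned vector representations and a transformer, not counts; this corpus is made up):

```python
# Toy illustration of "the conditional probability of words" (a bigram count
# model; GPT-3 uses learned vectors and a transformer, not raw counts).
from collections import defaultdict, Counter

corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def p_next(prev, word):
    """P(next word = word | previous word = prev) from the counts."""
    counts = following[prev]
    return counts[word] / sum(counts.values())

print(p_next("the", "cat"))  # "cat" follows 2 of the 4 "the"s -> 0.5
```

The count table is a (lossy) compressed summary of the corpus; generating text from it is the decompression step.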
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 14/09/2020 08:49:38
https://www.businessinsider.com/developer-sharif-shameem-openai-gpt-3-debuild-2020-9
Quote
In July, Debuild cofounder and CEO Sharif Shameem tweeted about a project he created that allowed him to build a website simply by describing its design. 

In the text box, he typed, "the google logo, a search box, and 2 lightgrey buttons that say 'Search Google' and 'I'm Feeling Lucky'." The program then generated a virtual copy of the Google homepage.


This program uses GPT-3, a "natural language generation" tool from research lab OpenAI, which was cofounded by Elon Musk. GPT-3 was trained on massive swathes of data and can spit out results that mimic human writing. Developers have used it for creative writing, designing websites, writing business memos, and more. Now, Shameem is using GPT-3 for Debuild, a no-code tool for building web apps just by describing what they look like and how they work.

With this program, the user just needs to type in and describe what the application will look like and how it will work, and the tool will create a website based on those descriptions.

https://syncedreview.com/2020/09/10/openai-gpt-f-delivers-sota-performance-in-automated-mathematical-theorem-proving/
Quote
San Francisco-based AI research laboratory OpenAI has added another member to its popular GPT (Generative Pre-trained Transformer) family. In a new paper, OpenAI researchers introduce GPT-f, an automated prover and proof assistant for the Metamath formalization language.

While artificial neural networks have made considerable advances in computer vision, natural language processing, robotics and so on, OpenAI believes they also have potential in the relatively underexplored area of reasoning tasks. The new research explores this potential by applying a transformer language model to automated theorem proving.

It seems that in the future we will become less dependent on biological computational resources (i.e., brains).
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 05/10/2020 02:49:58
With a virtual universe, we will get fewer surprises, so we can make better plans to achieve our goals. It can help improve our chance of survival, which is a prerequisite (i.e. an instrumental goal) for achieving the universal terminal goal.
Here is some of the latest progress we have made toward that goal.
https://scitechdaily.com/esas-%CF%86-week-digital-twin-earth-quantum-computing-and-ai-take-center-stage/
Quote
The third edition of the Φ-week event, which is entirely virtual, focuses on how Earth observation can contribute to the concept of Digital Twin Earth – a dynamic, digital replica of our planet which accurately mimics Earth’s behavior. Constantly fed with Earth observation data, combined with in situ measurements and artificial intelligence, the Digital Twin Earth provides an accurate representation of the past, present, and future changes of our world.

Digital Twin Earth will help visualize, monitor, and forecast natural and human activity on the planet. The model will be able to monitor the health of the planet, perform simulations of Earth’s interconnected system with human behavior, and support the field of sustainable development, therefore, reinforcing Europe’s efforts for a better environment in order to respond to the urgent challenges and targets addressed by the Green Deal.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 12/10/2020 04:21:10
As I mentioned earlier in another thread, cost saving is a universal instrumental goal. It also applies to AI research.
https://syncedreview.com/2020/10/02/google-cambridge-deepmind-alan-turing-institutes-performer-transformer-slashes-compute-costs/
Quote
It’s no coincidence that Transformer neural network architecture is gaining popularity across so many machine learning research fields. Best known for natural language processing (NLP) tasks, Transformers not only enabled OpenAI’s 175 billion parameter language model GPT-3 to deliver SOTA performance, the power- and potential-packed architecture also helped DeepMind’s AlphaStar bot defeat professional StarCraft players. Researchers have now introduced a way to make Transformers more compute-efficient, scalable and accessible.

While previous learning approaches such as RNNs suffered from vanishing gradient problems, Transformers’ game-changing self-attention mechanism eliminated such issues. As explained in the paper introducing Transformers — Attention Is All You Need, the novel architecture is based on a trainable attention mechanism that identifies complex dependencies between input sequence elements.

Transformers however scale quadratically when the number of tokens in an input sequence increases, making their use prohibitively expensive for large numbers of tokens. Even when fed with moderate token inputs, Transformers’ gluttonous appetite for computational resources can be difficult for many researchers to satisfy.

A team from Google, University of Cambridge, DeepMind, and Alan Turing Institute have proposed a new type of Transformer dubbed Performer, based on a Fast Attention Via positive Orthogonal Random features (FAVOR+) backbone mechanism. The team designed Performer to be “capable of provably accurate and practical estimation of regular (softmax) full rank attention, but of only linear space and time complexity and not relying on any priors such as sparsity or low-rankness.”
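The quadratic-vs-linear contrast above can be sketched in a few lines of NumPy. This is a simplified illustration of the kernel-feature idea, not Performer's actual FAVOR+ mechanism (which uses positive *orthogonal* random features and careful numerical stabilization); the sizes are toy values:

```python
# Sketch: quadratic softmax attention vs. linear-time kernelized attention.
# Simplified; Performer's real FAVOR+ uses positive orthogonal random features.
import numpy as np

rng = np.random.default_rng(0)
n, d, m = 6, 4, 16          # tokens, model dim, number of random features
Q, K, V = (rng.standard_normal((n, d)) for _ in range(3))

# Standard attention: materializes an n x n matrix -> O(n^2) memory/compute.
A = np.exp(Q @ K.T)
quadratic = (A / A.sum(axis=1, keepdims=True)) @ V

# Kernel trick: map Q, K through non-negative features phi(.), then use
# associativity: phi(Q) @ (phi(K).T @ V) costs O(n*m*d), linear in n.
W = rng.standard_normal((d, m))
phi = lambda X: np.exp(X @ W - (X**2).sum(axis=1, keepdims=True) / 2) / np.sqrt(m)
num = phi(Q) @ (phi(K).T @ V)          # never forms the n x n matrix
den = phi(Q) @ phi(K).sum(axis=0)
linear = num / den[:, None]

print(quadratic.shape, linear.shape)   # both (6, 4): same output shape
```

The payoff is that the n × n attention matrix is never built, so memory and compute grow linearly with the number of tokens.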
Title: Re: How close are we from building a virtual universe?
Post by: mikahawkins on 12/10/2020 05:55:08
Are we trying to visualize something with lifeforms or without lifeforms ? I believe we can start off with one step at a time, first getting the solar system together then the galaxies and so on.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 12/10/2020 07:29:44
Are we trying to visualize something with lifeforms or without lifeforms ? I believe we can start off with one step at a time, first getting the solar system together then the galaxies and so on.
It's a universal inevitability that, in order to achieve the universal terminal goal, a conscious system will have to build some kind of virtual universe that is as close as possible to objective reality, a closeness that can be described in terms of accuracy and precision. Because resources are limited, the Pareto principle tells us to spend more resources on the things that have more impact on the achievement of the universal terminal goal. That's why Google Maps has higher resolution for areas of high interest, such as big cities, than for deserts or oceans.
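The allocation-by-impact idea can be sketched as splitting a fixed modelling budget proportionally to estimated interest (the regions, weights, and budget below are made-up illustrative numbers):

```python
# Sketch: dividing a fixed modelling budget by estimated impact/interest
# (regions, weights and budget are hypothetical illustrative numbers).
interest = {"big city": 80, "farmland": 15, "desert": 4, "open ocean": 1}

budget_gb = 1000  # total storage budget for the virtual map
total = sum(interest.values())
allocation = {region: budget_gb * w / total for region, w in interest.items()}

print(allocation)  # the city gets the lion's share, the ocean almost nothing
```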
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 12/10/2020 15:03:57
Talking about lifeforms, how do you define one? Would you call Henrietta Lacks's tumor cells alive? What about a coronavirus? A prion? Alexa?
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 16/10/2020 12:42:09
A few decades ago, most process equipment was dumb. It needed periodic maintenance performed by humans to diagnose its functional condition and find abnormalities. So basically its condition was uncertain until it broke down or maintenance personnel checked or tested it. Control loops needed to be periodically fine-tuned to keep them at their best performance, due to physical changes in the field instrumentation and in the process itself.
Now a lot of equipment is getting smart. Smart transmitters and positioners are already widely used, as are smart variable-speed drives and other equipment controllers. They have self-diagnostic features to tell technicians whether or not they are in good condition, and to point out abnormalities so problems can be fixed sooner. These diagnostic data can be continuously monitored from a remote location. Such smart equipment can be considered to have some form of self-awareness.
In a SCADA system, a bot can be deployed to continuously monitor the functionality of each control loop. Thousands of them can run on the same server. This forces us to review the traditional concept of individuality, especially regarding those conscious agents.
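A per-loop monitoring bot can be sketched very simply; the threshold, the tag values, and the way loop data arrives below are all hypothetical, standing in for whatever the real SCADA historian provides:

```python
# Minimal sketch of a per-control-loop monitoring "bot". The data source,
# setpoints and thresholds are hypothetical stand-ins for real SCADA tags.
def check_loop(setpoint, measurements, max_error=2.0):
    """Flag a loop whose average deviation from setpoint is too large,
    e.g. because a valve positioner or transmitter has drifted."""
    avg_error = sum(abs(m - setpoint) for m in measurements) / len(measurements)
    return "OK" if avg_error <= max_error else "NEEDS RETUNING"

# Thousands of these checks can run periodically on one server, one per loop.
print(check_loop(50.0, [49.5, 50.3, 50.1]))   # OK
print(check_loop(50.0, [45.0, 56.0, 44.0]))   # NEEDS RETUNING
```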
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 16/10/2020 12:51:35
Here is an interesting article covering AGI.
https://www.technologyreview.com/2020/10/15/1010461/artificial-general-intelligence-robots-ai-agi-deepmind-google-openai/

Quote
  The tricky part comes next: yoking multiple abilities together. Deep learning is the most general approach we have, in that one deep-learning algorithm can be used to learn more than one task. AlphaZero used the same algorithm to learn Go, shogi (a chess-like game from Japan), and chess. DeepMind’s Atari57 system used the same algorithm to master every Atari video game. But the AIs can still learn only one thing at a time. Having mastered chess, AlphaZero has to wipe its memory and learn shogi from scratch.

Legg refers to this type of generality as “one-algorithm,” versus the “one-brain” generality humans have. One-algorithm generality is very useful but not as interesting as the one-brain kind, he says: “You and I don’t need to switch brains; we don’t put our chess brains in to play a game of chess.” 

Here are the steps toward development of AGI.
Quote
   Roughly in order of maturity, they are:
Unsupervised or self-supervised learning. Labeling data sets (e.g., tagging all pictures of cats with “cat”) to tell AIs what they’re looking at during training is the key to what’s known as supervised learning. It’s still largely done by hand and is a major bottleneck. AI needs to be able to teach itself without human guidance—e.g., looking at pictures of cats and dogs and learning to tell them apart without help, or spotting anomalies in financial transactions without having previous examples flagged by a human. This, known as unsupervised learning, is now becoming more common.

Transfer learning, including few-shot learning. Most deep-learning models today can be trained to do only one thing at a time. Transfer learning aims to let AIs transfer some parts of their training for one task, such as playing chess, to another, such as playing Go. This is how humans learn.

Common sense and causal inference. It would be easier to transfer training between tasks if an AI had a bedrock of common sense to start from. And a key part of common sense is understanding cause and effect. Giving common sense to AIs is a hot research topic at the moment, with approaches ranging from encoding simple rules into a neural network to constraining the possible predictions that an AI can make. But work is still in its early stages.

Learning optimizers. These are tools that can be used to shape the way AIs learn, guiding them to train more efficiently. Recent work shows that these tools can be trained themselves—in effect, meaning one AI is used to train others. This could be a tiny step toward self-improving AI, an AGI goal.
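The first step in the list, unsupervised learning ("telling cats and dogs apart without help"), can be sketched with simple clustering on unlabeled toy data; the one-dimensional data here is synthetic, standing in for image features:

```python
# Toy unsupervised learning: separate two groups of points with no labels
# (a stand-in for "telling cats from dogs without help"; data is synthetic).
import random

random.seed(1)
data = ([random.gauss(0, 1) for _ in range(50)]
        + [random.gauss(8, 1) for _ in range(50)])

# One-dimensional k-means with k=2: alternate assigning points to the
# nearest center and moving each center to the mean of its points.
c1, c2 = min(data), max(data)
for _ in range(10):
    g1 = [x for x in data if abs(x - c1) <= abs(x - c2)]
    g2 = [x for x in data if abs(x - c1) > abs(x - c2)]
    c1, c2 = sum(g1) / len(g1), sum(g2) / len(g2)

print(round(c1, 1), round(c2, 1))  # centers near 0 and 8, found without labels
```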




Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 20/10/2020 06:40:12
Here is a very interesting video about the neural network revolution that I'd like to share.
Quote
Geoffrey Hinton is an Engineering Fellow at Google where he manages the Brain Team Toronto, which is a new part of the Google Brain Team and is located at Google's Toronto office at 111 Richmond Street. Brain Team Toronto does basic research on ways to improve neural network learning techniques. He is also the Chief Scientific Adviser of the new Vector Institute and an Emeritus Professor at the University of Toronto.

Recorded: December 4th, 2017
I see this neural network revolution as a continuation of the neural network evolution that has been happening for hundreds of millions of years and produced the brains which kickstarted the revolution.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 02/11/2020 04:11:48
The risk of using GPT irresponsibly is confirmation bias, which may prevent us from getting optimal results.

https://twitter.com/karpathy/status/1284660899198820352?ref_src=twsrc%5Etfw%7Ctwcamp%5Etweetembed%7Ctwterm%5E1284667692381872128%7Ctwgr%5Eshare_3%2Ccontainerclick_1&ref_url=https%3A%2F%2Fwww.technologyreview.com%2F2020%2F07%2F20%2F1005454%2Fopenai-machine-learning-language-generator-gpt-3-nlp%2F

Quote
Andrej Karpathy
@karpathy
·
Jul 19
By posting GPT generated text we’re polluting the data for its future versions
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 12/11/2020 04:23:51
The dog's behavior is not entirely surprising either, especially if you have some future version of Neuralink implanted in its head, or you are a veterinarian.

Here is the definition of intelligence according to the dictionary.
Quote
  the ability to acquire and apply knowledge and skills.
Usually, it represents problem-solving or information-processing capability, but it doesn't take into account the ability to manipulate the environment, nor self-awareness.
AlphaGo is considered intelligent since it can solve the problem of playing Go better than the human champion. AlphaZero is even more intelligent, since it can beat AlphaGo 100:0.
Even though neither of them has the ability to physically move a Go piece.
On the other hand, consciousness takes more factors into account. For example, if you were paralyzed so that you couldn't move your arms and legs, you would be considered less conscious than in your normal state, even though you could still think clearly.
Traditionally, an agent is considered intelligent if it can solve problems, especially when it does so better than expected. A dog that can fetch you the newspaper is considered intelligent.

https://en.wikipedia.org/wiki/Artificial_intelligence
Quote
Artificial intelligence (AI), is intelligence demonstrated by machines, unlike the natural intelligence displayed by humans and animals. Leading AI textbooks define the field as the study of "intelligent agents": any device that perceives its environment and takes actions that maximize its chance of successfully achieving its goals.[3] Colloquially, the term "artificial intelligence" is often used to describe machines (or computers) that mimic "cognitive" functions that humans associate with the human mind, such as "learning" and "problem solving".[4]

As machines become increasingly capable, tasks considered to require "intelligence" are often removed from the definition of AI, a phenomenon known as the AI effect.[5] A quip in Tesler's Theorem says "AI is whatever hasn't been done yet."[6] For instance, optical character recognition is frequently excluded from things considered to be AI,[7] having become a routine technology.[8] Modern machine capabilities generally classified as AI include successfully understanding human speech,[9] competing at the highest level in strategic game systems (such as chess and Go),[10] autonomously operating cars, intelligent routing in content delivery networks, and military simulations.[11]
Quote
Computer science defines AI research as the study of "intelligent agents": any device that perceives its environment and takes actions that maximize its chance of successfully achieving its goals.[3] A more elaborate definition characterizes AI as "a system's ability to correctly interpret external data, to learn from such data, and to use those learnings to achieve specific goals and tasks through flexible adaptation."[70]

A typical AI analyzes its environment and takes actions that maximize its chance of success.[3] An AI's intended utility function (or goal) can be simple ("1 if the AI wins a game of Go, 0 otherwise") or complex ("Perform actions mathematically similar to ones that succeeded in the past"). Goals can be explicitly defined or induced. If the AI is programmed for "reinforcement learning", goals can be implicitly induced by rewarding some types of behavior or punishing others.[a] Alternatively, an evolutionary system can induce goals by using a "fitness function" to mutate and preferentially replicate high-scoring AI systems, similar to how animals evolved to innately desire certain goals such as finding food.[71] Some AI systems, such as nearest-neighbor, instead reason by analogy; these systems are not generally given goals, except to the degree that goals are implicit in their training data.[72] Such systems can still be benchmarked if the non-goal system is framed as a system whose "goal" is to successfully accomplish its narrow classification task.[73]

https://en.wikipedia.org/wiki/AI_effect
Quote
The AI effect occurs when onlookers discount the behavior of an artificial intelligence program by arguing that it is not real intelligence.[1]

Author Pamela McCorduck writes: "It's part of the history of the field of artificial intelligence that every time somebody figured out how to make a computer do something—play good checkers, solve simple but relatively informal problems—there was a chorus of critics to say, 'that's not thinking'."[2] AIS researcher Rodney Brooks complains: "Every time we figure out a piece of it, it stops being magical; we say, 'Oh, that's just a computation.'"[3]
Quote
"The AI effect" tries to redefine AI to mean: AI is anything that has not been done yet
A view taken by some people trying to promulgate the AI effect is: As soon as AI successfully solves a problem, the problem is no longer a part of AI.

Pamela McCorduck calls it an "odd paradox" that "practical AI successes, computational programs that actually achieved intelligent behavior, were soon assimilated into whatever application domain they were found to be useful in, and became silent partners alongside other problem-solving approaches, which left AI researchers to deal only with the "failures", the tough nuts that couldn't yet be cracked."[4]

When IBM's chess playing computer Deep Blue succeeded in defeating Garry Kasparov in 1997, people complained that it had only used "brute force methods" and it wasn't real intelligence.[5] Fred Reed writes:

"A problem that proponents of AI regularly face is this: When we know how a machine does something 'intelligent,' it ceases to be regarded as intelligent. If I beat the world's chess champion, I'd be regarded as highly bright."[6]

Douglas Hofstadter expresses the AI effect concisely by quoting Larry Tesler's Theorem:

"AI is whatever hasn't been done yet."[7]

When problems have not yet been formalised, they can still be characterised by a model of computation that includes human computation. The computational burden of a problem is split between a computer and a human: one part is solved by computer and the other part solved by a human. This formalisation is referred to as human-assisted Turing machine.[8]

AI applications become mainstream
Software and algorithms developed by AI researchers are now integrated into many applications throughout the world, without really being called AI.

Michael Swaine reports "AI advances are not trumpeted as artificial intelligence so much these days, but are often seen as advances in some other field". "AI has become more important as it has become less conspicuous", Patrick Winston says. "These days, it is hard to find a big system that does not work, in part, because of ideas developed or matured in the AI world."[9]

According to Stottler Henke, "The great practical benefits of AI applications and even the existence of AI in many software products go largely unnoticed by many despite the already widespread use of AI techniques in software. This is the AI effect. Many marketing people don't use the term 'artificial intelligence' even when their company's products rely on some AI techniques. Why not?"[10]

Marvin Minsky writes "This paradox resulted from the fact that whenever an AI research project made a useful new discovery, that product usually quickly spun off to form a new scientific or commercial specialty with its own distinctive name. These changes in name led outsiders to ask, Why do we see so little progress in the central field of artificial intelligence?"[11]

Nick Bostrom observes that "A lot of cutting edge AI has filtered into general applications, often without being called AI because once something becomes useful enough and common enough it's not labelled AI anymore."[12]
Quote
Saving a place for humanity at the top of the chain of being
Michael Kearns suggests that "people subconsciously are trying to preserve for themselves some special role in the universe".[14] By discounting artificial intelligence people can continue to feel unique and special. Kearns argues that the change in perception known as the AI effect can be traced to the mystery being removed from the system: being able to trace the cause of events implies that it's a form of automation rather than intelligence.

A related effect has been noted in the history of animal cognition and in consciousness studies, where every time a capacity formerly thought as uniquely human is discovered in animals, (e.g. the ability to make tools, or passing the mirror test), the overall importance of that capacity is deprecated.[citation needed]

Herbert A. Simon, when asked about the lack of AI's press coverage at the time, said, "What made AI different was that the very idea of it arouses a real fear and hostility in some human breasts. So you are getting very strong emotional reactions. But that's okay. We'll live with that."[15]




I'd like to delve technically deeper into the problem in this thread.
Quote
Artificial intelligence was founded as an academic discipline in 1955, and in the years since has experienced several waves of optimism,[12][13] followed by disappointment and the loss of funding (known as an "AI winter"),[14][15] followed by new approaches, success and renewed funding.[13][16] After AlphaGo successfully defeated a professional Go player in 2015, artificial intelligence once again attracted widespread global attention.[17] For most of its history, AI research has been divided into sub-fields that often fail to communicate with each other.[18] These sub-fields are based on technical considerations, such as particular goals (e.g. "robotics" or "machine learning"),[19] the use of particular tools ("logic" or artificial neural networks), or deep philosophical differences.[22][23][24] Sub-fields have also been based on social factors (particular institutions or the work of particular researchers).[18]

The traditional problems (or goals) of AI research include reasoning, knowledge representation, planning, learning, natural language processing, perception and the ability to move and manipulate objects.[19] General intelligence is among the field's long-term goals.[25] Approaches include statistical methods, computational intelligence, and traditional symbolic AI. Many tools are used in AI, including versions of search and mathematical optimization, artificial neural networks, and methods based on statistics, probability and economics. The AI field draws upon computer science, information engineering, mathematics, psychology, linguistics, philosophy, and many other fields.

The field was founded on the assumption that human intelligence "can be so precisely described that a machine can be made to simulate it".[26] This raises philosophical arguments about the mind and the ethics of creating artificial beings endowed with human-like intelligence. These issues have been explored by myth, fiction and philosophy since antiquity.[31] Some people also consider AI to be a danger to humanity if it progresses unabated.[32][33] Others believe that AI, unlike previous technological revolutions, will create a risk of mass unemployment.[34]

In the twenty-first century, AI techniques have experienced a resurgence following concurrent advances in computer power, large amounts of data, and theoretical understanding; and AI techniques have become an essential part of the technology industry, helping to solve many challenging problems in computer science, software engineering and operations research.[35][16]
Quote
Many AI algorithms are capable of learning from data; they can enhance themselves by learning new heuristics (strategies, or "rules of thumb", that have worked well in the past), or can themselves write other algorithms. Some of the "learners" described below, including Bayesian networks, decision trees, and nearest-neighbor, could theoretically, (given infinite data, time, and memory) learn to approximate any function, including which combination of mathematical functions would best describe the world.[citation needed] These learners could therefore derive all possible knowledge, by considering every possible hypothesis and matching them against the data. In practice, it is seldom possible to consider every possibility, because of the phenomenon of "combinatorial explosion", where the time needed to solve a problem grows exponentially. Much of AI research involves figuring out how to identify and avoid considering a broad range of possibilities unlikely to be beneficial.
https://en.wikipedia.org/wiki/Artificial_intelligence
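The "combinatorial explosion" mentioned in the quote can be made concrete by counting hypotheses: even for binary inputs, the number of distinct Boolean functions over n input bits is 2^(2^n), so "considering every possible hypothesis" fails almost immediately:

```python
# Why "consider every possible hypothesis" fails: the number of distinct
# Boolean functions over n input bits is 2^(2^n).
for n in range(1, 6):
    print(n, 2 ** (2 ** n))
# n=5 already gives 2^32 ~ 4.3 billion candidate hypotheses;
# n=8 gives 2^256, far more than atoms in the observable universe.
```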

Intelligent agents are expected to have the ability to learn from raw data. It means that they have tools to pre-process those raw data to filter out noises or flukes and extract useful information. When those agents interact with one another, especially when they must compete for finite resources, the more important is the ability to filter out misinformation. It requires an algorithm to determine if some data inputs are believable or not. At this point we are seeing that artificial intelligence is getting closer to natural intelligence. This exhibits a feature similar to critical thinking of conscious beings.
Descartes pointed out that the only self-evident information a conscious agent can get is its own existence. Any other information requires corroborating evidence to support it. So in the end, the reliability of a piece of information is measured by its ability to help preserve conscious agents.


Quote
In machine learning, a deep belief network (DBN) is a generative graphical model, or alternatively a class of deep neural network, composed of multiple layers of latent variables ("hidden units"), with connections between the layers but not between units within each layer.[1]

When trained on a set of examples without supervision, a DBN can learn to probabilistically reconstruct its inputs. The layers then act as feature detectors.[1] After this learning step, a DBN can be further trained with supervision to perform classification.[2]

DBNs can be viewed as a composition of simple, unsupervised networks such as restricted Boltzmann machines (RBMs)[1] or autoencoders,[3] where each sub-network's hidden layer serves as the visible layer for the next. An RBM is an undirected, generative energy-based model with a "visible" input layer and a hidden layer and connections between but not within layers. This composition leads to a fast, layer-by-layer unsupervised training procedure, where contrastive divergence is applied to each sub-network in turn, starting from the "lowest" pair of layers (the lowest visible layer is a training set).

The observation[2] that DBNs can be trained greedily, one layer at a time, led to one of the first effective deep learning algorithms.[4]:6 Overall, there are many attractive implementations and uses of DBNs in real-life applications and scenarios (e.g., electroencephalography,[5] drug discovery[6][7][8]).
https://en.wikipedia.org/wiki/Deep_belief_network
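To make the RBM building block concrete, here is a minimal sketch of the contrastive-divergence (CD-1) update the quote mentions, trained on a single toy pattern. The network sizes, learning rate, and training loop are illustrative choices, not the configurations used in the cited applications:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class RBM:
    """A tiny restricted Boltzmann machine trained with CD-1."""
    def __init__(self, n_visible, n_hidden):
        self.W = rng.normal(0, 0.1, (n_visible, n_hidden))
        self.b_v = np.zeros(n_visible)   # visible-layer biases
        self.b_h = np.zeros(n_hidden)    # hidden-layer biases

    def hidden_probs(self, v):
        return sigmoid(v @ self.W + self.b_h)

    def visible_probs(self, h):
        return sigmoid(h @ self.W.T + self.b_v)

    def cd1_step(self, v0, lr=0.1):
        # Positive phase: hidden activations driven by the data.
        ph0 = self.hidden_probs(v0)
        h0 = (rng.random(ph0.shape) < ph0).astype(float)
        # Negative phase: one Gibbs step (the reconstruction).
        pv1 = self.visible_probs(h0)
        ph1 = self.hidden_probs(pv1)
        # Contrastive-divergence updates: data statistics minus
        # reconstruction statistics.
        self.W += lr * (np.outer(v0, ph0) - np.outer(pv1, ph1))
        self.b_v += lr * (v0 - pv1)
        self.b_h += lr * (ph0 - ph1)

rbm = RBM(n_visible=6, n_hidden=3)
pattern = np.array([1, 1, 1, 0, 0, 0], dtype=float)
for _ in range(500):
    rbm.cd1_step(pattern)
# After training, reconstructing the pattern should recover it.
recon = rbm.visible_probs(rbm.hidden_probs(pattern))
```

Stacking such RBMs, training each one on the hidden activations of the layer below it, gives the greedy layer-by-layer DBN training procedure the quote describes.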
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 12/11/2020 04:40:29
The video shows what the future will look like. It's a step closer toward building a virtual universe.
Title: Re: How close are we from building a virtual universe?
Post by: Xeon on 12/11/2020 10:23:13
This thread is another spinoff from my earlier thread called universal utopia. This time I try to attack the problem from another angle, which is information theory point of view.
I have started another thread related to this subject  asking about quantification of accuracy and precision. It is necessary for us to be able to make comparison among available methods to describe some aspect of objective reality, and choose the best option based on cost and benefit consideration. I thought it was already a common knowledge, but the course of discussion shows it wasn't the case. I guess I'll have to build a new theory for that. It's unfortunate that the thread has been removed, so new forum members can't explore how the discussion developed.
In my professional job, I have to deal with process control and automation, engineering and maintenance of electrical and instrumentation systems. It's important for us to explore the leading technologies and use them for our advantage to survive in the fierce industrial competition during this industrial revolution 4.0. One of the technology which is closely related to this thread is digital twin.


Just like my other spinoff discussing about universal morality, which can be reached by expanding the groups who develop their own subjective morality to the maximum extent permitted by logic, here I also try to expand the scope of the virtualization of real world objects like digital twin in industrial sector to cover other fields as well. Hopefully it will lead us to global governance, because all conscious beings known today share the same planet. In the future the scope needs to expand even further because the exploration of other planets and solar systems is already on the way.
What law says that our memories are stored in our minds? How do we know that we are not just accessing a mainframe server, and that we are no more than confused bots?
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 13/11/2020 09:28:18
What law says that our memories are stored in our minds? How do we know that we are not just accessing a mainframe server, and that we are no more than confused bots?

There is no such law, AFAIK. But here is what we know.
Descartes pointed out that the only self-evident information a conscious agent can get is its own existence. Any other information requires corroborating evidence to support it. So in the end, the reliability of a piece of information is measured by its ability to help preserve conscious agents.
If two or more hypotheses are equally capable of explaining the observations, Occam's razor suggests we choose the simplest one. I've asserted in another thread that efficiency is a universal instrumental goal.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 21/11/2020 06:26:10
Brains contain a compressed, partial version of a virtual universe in the form of neurons and the states of their connections. Object counting is part of extracting information from the raw data coming in through sensory organs. This video tells us how brains count.
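A crude software analogue of object counting is labeling connected blobs in a binary image. This flood-fill sketch is illustrative only, not a model of the neural mechanism the video describes; the grid plays the role of pre-processed sensory input:

```python
def count_objects(grid):
    """Count 4-connected blobs of 1s in a binary grid, using an
    iterative depth-first flood fill to mark each blob once."""
    rows, cols = len(grid), len(grid[0])
    seen = [[False] * cols for _ in range(rows)]

    def flood(r, c):
        stack = [(r, c)]
        while stack:
            i, j = stack.pop()
            if 0 <= i < rows and 0 <= j < cols and grid[i][j] and not seen[i][j]:
                seen[i][j] = True
                stack += [(i + 1, j), (i - 1, j), (i, j + 1), (i, j - 1)]

    count = 0
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] and not seen[r][c]:
                count += 1       # found a new, unvisited object
                flood(r, c)      # mark everything connected to it
    return count

image = [[1, 1, 0, 0],
         [0, 1, 0, 1],
         [0, 0, 0, 1],
         [1, 0, 0, 0]]          # three separate blobs
```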
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 23/11/2020 11:19:34
Here is a recent progress toward building a virtual universe.
https://singularityhub.com/2020/11/22/the-trillion-transistor-chip-that-just-left-a-supercomputer-in-the-dust/
Quote
The trial was described in a preprint paper written by a team led by Cerebras’s Michael James and NETL’s Dirk Van Essendelft and presented at the supercomputing conference SC20 this week. The team said the CS-1 completed a simulation of combustion in a power plant roughly 200 times faster than it took the Joule 2.0 supercomputer to do a similar task.

The CS-1 was actually faster-than-real-time. As Cerebras wrote in a blog post, “It can tell you what is going to happen in the future faster than the laws of physics produce the same result.”
Quote
Cut the Commute
Computer chips begin life on a big piece of silicon called a wafer. Multiple chips are etched onto the same wafer and then the wafer is cut into individual chips. While the WSE is also etched onto a silicon wafer, the wafer is left intact as a single, operating unit. This wafer-scale chip contains almost 400,000 processing cores. Each core is connected to its own dedicated memory and its four neighboring cores.

Putting that many cores on a single chip and giving them their own memory is why the WSE is bigger; it’s also why, in this case, it’s better.

Most large-scale computing tasks depend on massively parallel processing. Researchers distribute the task among hundreds or thousands of chips. The chips need to work in concert, so they’re in constant communication, shuttling information back and forth. A similar process takes place within each chip, as information moves between processor cores, which are doing the calculations, and shared memory to store the results.
Quote
Simulating the World as It Unfolds
It’s worth noting the chip can only handle problems small enough to fit on the wafer. But such problems may have quite practical applications because of the machine’s ability to do high-fidelity simulation in real-time. The authors note, for example, the machine should in theory be able to accurately simulate the air flow around a helicopter trying to land on a flight deck and semi-automate the process—something not possible with traditional chips.
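The combustion and airflow problems above are grid simulations at heart: each cell repeatedly updates from its neighbors, which is exactly the neighbor-to-neighbor communication pattern the wafer-scale layout accelerates. As a heavily simplified, illustrative stand-in (a 1D heat equation, not the article's actual solver):

```python
def step_heat(u, alpha=0.1):
    """One explicit finite-difference step of the 1D heat equation
    u_t = alpha * u_xx, with fixed (Dirichlet) boundary values."""
    new = u[:]
    for i in range(1, len(u) - 1):
        # Each interior cell only needs its two immediate neighbors.
        new[i] = u[i] + alpha * (u[i - 1] - 2 * u[i] + u[i + 1])
    return new

# A hot spot in the middle of a cold rod diffuses outward over time.
u = [0.0] * 5 + [100.0] + [0.0] * 5
for _ in range(50):
    u = step_heat(u)
```

On a cluster, every time step forces chips to exchange the cells along their shared boundaries; keeping the whole grid on one wafer with per-core memory removes most of that communication cost.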
Title: Re: How close are we from building a virtual universe?
Post by: alancalverd on 23/11/2020 12:30:24
The problem throughout is that you are trying to define the solution without defining the problem. My artificial horizon is an adequate virtual universe if the problem is to keep the plane flying straight and level with no visual reference. The GPS moving map adds just enough data if I want to get somewhere, and the ILS gives me a virtual beeline to the runway threshold. Each of these solutions began with a clear statement of the problem.

The joy of full autopilot was demonstrated by a couple of 737 fatal incidents in recent memory. It's OK until it goes wrong and crashes you precisely on the runway centerline, unlike the human who is generally "good enough" to land somewhere (like the middle of the Hudson river) without breaking too much. I've just completed a paper exercise where the radio died in fog at night. The automatic answer is to follow a published instrument approach on enhanced GPS and autopilot, which will take you to your destination within +/- a couple of feet. Problem is that you don't know who else is on that track, so the more closely you follow it, the more likely you are to collide or cause panic. The human  answer is to assume that everyone else is on track and avoid it by a mile laterally and 1000 ft vertically until the last possible moment.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 23/11/2020 15:43:50
The problem throughout is that you are trying to define the solution without defining the problem. My artificial horizon is an adequate virtual universe if the problem is to keep the plane flying straight and level with no visual reference. The GPS moving map adds just enough data if I want to get somewhere, and the ILS gives me a virtual beeline to the runway threshold. Each of these solutions began with a clear statement of the problem.



This thread is another spinoff from my earlier thread called universal utopia. This time I try to attack the problem from another angle, which is information theory point of view.
I've stated the problem in another thread, which is to reduce the risk of existential threat to conscious beings down to zero. Building an accurate and precise virtual universe is one method to achieve that goal by reducing uncertainty and helping to make decisions effectively and efficiently.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 24/11/2020 12:04:55
By building a virtual universe, we can carry out trial and error more efficiently. That's essentially what AlphaGo and AlphaZero did to master their games.

Quote
Learn from the mistakes of others, you can never live long enough to make them all yourself.
Groucho Marx   
With an accurate and precise virtual universe, a vaccine for a newly emerging virus could be developed in minutes instead of years.
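A minimal sketch of trial and error against a model rather than reality: the `simulator` function below is a made-up stand-in for a virtual universe (its peak at 3.0 is an arbitrary assumption), and the search is deliberately naive, just to show that cheap simulated trials replace expensive real ones:

```python
import random

random.seed(42)

def simulator(dose):
    """Hypothetical stand-in for a virtual-universe model: returns a
    simulated effectiveness score, best at dose = 3.0."""
    return -(dose - 3.0) ** 2

def trial_and_error(n_trials=1000):
    """Evaluate many candidates in the simulator and keep the best,
    instead of testing each candidate in the real world."""
    best, best_score = None, float("-inf")
    for _ in range(n_trials):
        candidate = random.uniform(0.0, 10.0)
        score = simulator(candidate)
        if score > best_score:
            best, best_score = candidate, score
    return best

best_dose = trial_and_error()  # lands near the simulated optimum of 3.0
```

AlphaZero's self-play is a far more sophisticated version of the same idea: the "mistakes" are made inside the model, where they cost almost nothing.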
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 24/11/2020 15:05:38
AGI is the last invention humans ever need to make.
https://www.nytimes.com/2020/11/24/science/artificial-intelligence-ai-gpt3.html
Quote
   Meet GPT-3. It Has Learned To Code (and Blog and Argue).
The latest natural-language system generates tweets, pens poetry, summarizes emails, answers trivia questions, translates languages and even writes its own computer programs.
Quote
  This summer, an artificial intelligence lab in San Francisco called OpenAI unveiled a technology several months in the making. This new system, GPT-3, had spent those months learning the ins and outs of natural language by analyzing thousands of digital books, the length and breadth of Wikipedia, and nearly a trillion words posted to blogs, social media and the rest of the internet.

Mckay Wrigley, a 23-year-old computer programmer from Salt Lake City, was one of the few invited to tinker with the system, which uses everything it has learned from that vast sea of digital text to generate new language on its own. Mr. Wrigley wondered if it could imitate public figures — write like them, perhaps even chat like them. 
Quote
  One of his experiments involved a pop psychologist, Scott Barry Kaufman. The system took in Mr. Kaufman’s name and a topic for discussion: creativity. Then, when asked “How do we become more creative?” GPT-3 responded instantly:

I think creative expression is a natural byproduct of growing up in a diverse world. The more diverse the world is, the more you get exposed to different people, to different opportunities, to different places and to different challenges. And the more diverse that is, the more likely you’ll be to be able to put the dots together to form something new. And in many ways, I think if you want to be creative, you have to go for it. If you want to be a writer, you have to write, if you want to be a musician, you have to create music, if you want to be a comedian, you have to create comedy, and the more you create, the more likely it is that you’ll stumble onto some interesting stuff.

Later, when Mr. Wrigley posted the paragraph on Twitter, somebody looped in the real Scott Barry Kaufman. He was stunned. “It definitely sounds like something I would say,” the real Mr. Kaufman tweeted, later adding, “Crazy accurate A.I.” 
The rapid advancement of AI, with its exponentially growing nature, seems to hint that the singularity is near.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 25/11/2020 03:27:16
Objective reality contains a lot of objects with complex relationships among them. Hence, to build a virtual universe we must use a method capable of storing data that represents this complex system. The obvious choice is a graph: a mathematical structure used to model pairwise relations between objects. A graph in this context is made up of vertices (also called nodes or points) which are connected by edges (also called links or lines).
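A minimal sketch of such a structure, using an adjacency list with labeled edges so each link can carry a relationship (the example facts are illustrative):

```python
from collections import defaultdict

# A graph as an adjacency list: each vertex maps to a list of
# (relation, neighbor) pairs, so edges carry a relationship label.
graph = defaultdict(list)

def add_edge(graph, subject, relation, obj):
    graph[subject].append((relation, obj))

add_edge(graph, "Earth", "orbits", "Sun")
add_edge(graph, "Moon", "orbits", "Earth")
add_edge(graph, "Earth", "instance_of", "planet")

# Query: everything the graph knows about "Earth".
facts = graph["Earth"]
```

Labeled edges like these (subject, relation, object) are also the basic unit of the knowledge graphs discussed in the article.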

The virtual universe that I described previously is similar to the knowledge graphs described in the article below.

https://www.zdnet.com/article/rebooting-ai-deep-learning-meet-knowledge-graphs/
Quote
Rebooting AI: Deep learning, meet knowledge graphs
Gary Marcus, a prominent figure in AI, is on a mission to instill a breath of fresh air to a discipline he sees as in danger of stagnating. Knowledge graphs, the 20-year old hype, may have something to offer there.

"This is what we need to do. It's not popular right now, but this is why the stuff that is popular isn't working." That's a gross oversimplification of what scientist, best-selling author, and entrepreneur Gary Marcus has been saying for a number of years now, but at least it's one made by himself.

The "popular stuff which is not working" part refers to deep learning, and the "what we need to do" part refers to a more holistic approach to AI. Marcus is not short of ambition; he is set on nothing else but rebooting AI. He is not short of qualifications either. He has been working on figuring out the nature of intelligence, artificial or otherwise, more or less since his childhood.

Questioning deep learning may sound controversial, considering deep learning is seen as the most successful sub-domain in AI at the moment. Marcus on his part has been consistent in his critique. He has published work that highlights how deep learning fails, exemplified by language models such as GPT-2, Meena, and GPT-3.
Quote
Deep learning, meet knowledge graphs
When asked if he thinks knowledge graphs can have a role in the hybrid approach he advocates for, Marcus was positive. One way to think about it, he said, is that there is an enormous amount of knowledge that's represented on the Internet that's available essentially for free, and is not being leveraged by current AI systems. However, much of that knowledge is problematic:

"Most of the world's knowledge is imperfect in some way or another. But there's an enormous amount of knowledge that, say, a bright 10-year-old can just pick up for free, and we should have RDF be able to do that.

Some examples are, first of all, Wikipedia, which says so much about how the world works. And if you have the kind of brain that a human does, you can read it and learn a lot from it. If you're a deep learning system, you can't get anything out of that at all, or hardly anything.

Wikipedia is the stuff that's on the front of the house. On the back of the house are things like the semantic web that label web pages for other machines to use. There's all kinds of knowledge there, too. It's also being left on the floor by current approaches.

The kinds of computers that we are dreaming of that can help us to, for example, put together medical literature or develop new technologies are going to have to be able to read that stuff.

We're going to have to get to AI systems that can use the collective human knowledge that's expressed in language form and not just as a spreadsheet in order to really advance, in order to make the most sophisticated systems."
(https://zdnet3.cbsistatic.com/hub/i/2017/05/01/ce8926a1-9a41-42b6-9bd0-92df1b4171f6/deeplearningiconsr5png-jpg.png)
There is more to AI than Machine Learning, and there is more to Machine Learning than deep learning. Gary Marcus is arguing for a hybrid approach to AI, reconnecting it with its roots. Image: Nvidia
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 04/12/2020 22:44:55
Compared to my other threads discussing the universal terminal goal, this one seems underdeveloped. To complement my own thoughts, I'll drop in some of the latest important research in the field of artificial intelligence, like this one.
Inductive Biases for Deep Learning of Higher-Level Cognition
Anirudh Goyal, Yoshua Bengio
Quote
A fascinating hypothesis is that human and animal intelligence could be explained by a few principles (rather than an encyclopedic list of heuristics). If that hypothesis was correct, we could more easily both understand our own intelligence and build intelligent machines. Just like in physics, the principles themselves would not be sufficient to predict the behavior of complex systems like brains, and substantial computation might be needed to simulate human-like intelligence. This hypothesis would suggest that studying the kind of inductive biases that humans and animals exploit could help both clarify these principles and provide inspiration for AI research and neuroscience theories. Deep learning already exploits several key inductive biases, and this work considers a larger list, focusing on those which concern mostly higher-level and sequential conscious processing. The objective of clarifying these particular principles is that they could potentially help us build AI systems benefiting from humans' abilities in terms of flexible out-of-distribution and systematic generalization, which is currently an area where a large gap exists between state-of-the-art machine learning and human intelligence.   
https://arxiv.org/abs/2011.15091?s=03
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 07/12/2020 09:22:25
Here is another great video covering current development of AI.
Timestamps for this video:
00:00 Introduction
00:45 Humanity's Next Chapter
03:38 Cathie Wood Discusses AlphaFold
08:40 Elon Musk's Dire Warning
10:20 Netflix Recommends Your Doom
14:09 Detecting Cats and Dogs
15:45 ARK's James Wang on Deep Learning
17:09 The Singularity is Near
Title: Re: How close are we from building a virtual universe?
Post by: syhprum on 07/12/2020 13:23:57
Just a minor quibble: why do correspondents write 1021 when they mean 10 to the power of 21?
There is a perfectly good abbreviation to indicate a power, "^", on all the keyboards that I have used, but maybe the articles are written on a pocket device that lacks it.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 08/12/2020 02:45:13
Just a minor quibble: why do correspondents write 1021 when they mean 10 to the power of 21?
There is a perfectly good abbreviation to indicate a power, "^", on all the keyboards that I have used, but maybe the articles are written on a pocket device that lacks it.
Perhaps the simplest explanation is a typo: the key wasn't pressed hard enough to be sensed by the keyboard. There is no autocorrect for this kind of error that I know of.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 08/12/2020 03:01:27
This article just came into my mailbox, and I'd like to share it here since it's closely related to the topic.
Quote
NEWSLETTER ON LINKEDIN
Artificial Intelligence (AI)
 By Bernard Marr


Future Trends And Technology – Insights from Ericsson
 
Innovation and new thought is what makes the world go round. Behind all the ground-breaking technologies such as AI and automation are human minds that are willing to push boundaries and think differently about solving problems, in both business and society.
Investing in true innovation – how to use technology to do different things, as opposed to just doing things differently – has led to sweeping changes in how we communicate, work together, play and look after our health in recent years. In particular, it has allowed businesses and organizations to get closer to their most important asset – the people who use or consume their services – than ever before. This is thanks to the ever-smarter ways in which we are capturing data and using it to overcome challenges, from understanding customer behavior to creating vaccines.
I was fortunate enough to get the chance to talk to two people who are working on this cutting-edge – Jasmeet Sethi and Cristina Pandrea, of Ericsson's ConsumerLab. This is the division within Ericsson responsible for research into current and emerging trends – with a specific focus on how they are being used in the real world today, and what that might mean for tomorrow.
During our conversation, we touched on five key trends that have been identified by the ConsumerLab, which has been collecting and analyzing data on how people interact with technology for more than 20 years. One thing they all have in common is that every one of them has come into its own during the current global pandemic. This is usually for one of two reasons – either because necessity has driven a rapid increase in the pace of adoption, or because they provide a new approach to tackling problems society is currently facing.
Let's look at each of the five trends in turn.
1. Resilient networks
In 2020, more than ever before, we've been dependent on the stability and security of IT systems and networks to keep the world running. Beyond the importance of uptime and core stability in allowing businesses to switch to work-from-home models, it's been shown that cyber attacks have increased dramatically during the pandemic, meaning security is more vital than ever before.
Many of the international efforts to trace the spread of the disease, understand people's behavior in pandemic situations, and to develop vaccines and cures are dependent on the transfer of huge volumes of digital data. Ericsson believes that the amount of data transferred has increased by 40% over mobile networks and 70% over wired broadband networks since the start of the pandemic. So ensuring that infrastructure is reliable and secure has never been so important. The fact that network operators have largely been successful at this hasn't gone unnoticed, Sethi tells me – with customers thanking them with a noticeably higher level of loyalty.
2. Tele-health
Medical consultation, check-ups, examinations, and even diagnoses were increasingly being carried out remotely, even pre-covid, particularly in remote regions or areas where there is a shortage of clinical staff. However, during 2019 they made up just 19% of US healthcare contacts. Ericsson's research has shown that this increased to around 46% during 2020. This is clearly an example of a trend where the pandemic accelerated a change that was already happening. So it's likely that providers will be keen to carry on receiving the benefits they've generated, as we eventually move into a post-covid world.
Here a key challenge comes from the fact that a number of different technologies need to be working together in harmony to ensure patient care doesn't suffer, from video streaming to cloud application platforms and network security protocols. 
3. Borderless workplaces
We saw the impossible happen in 2020 as thousands of organizations mobilized to make remote working possible for their workforces in a very short period of time. But this trend goes beyond "eternal WFH" and points to a future where we have greater flexibility and freedom over where we spend our working hours. Collaborative workplace tools like Zoom and Slack meant the switchover was often relatively hassle-free, and next-generation tools will cater for a future where employees can carry out their duties from anywhere, rather than just stuck at their kitchen tables.
But this shift in social norms brings other problems, such as the danger of isolation, the difficulty between striking a balance between home and work life, or a diminished ability to build a culture within an organization. Solutions in this field look to tackle these challenges, too, rather than simply give us more ways to be connected to the office 24/7.
4. The Experience / Immersive Economy
Touching on issues raised by the previous trend, Ericsson has experimented with providing employees with virtual reality headsets, to make collaborative working more immersive. Pandrea described the benefits of this to me – "The experience was really genuine, it took us by surprise … we'd seen virtual reality before, but this was the first time where we saw 25 people in the same virtual room, having this experience … when you see the others as avatars you get the feeling of being together, it makes a world of difference."
This trend involves creating experiences that mean as little as possible is lost when you move an interaction or event from the real world to the virtual world. Virtual and augmented reality have an important role here, but Sethi points beyond this to an idea he calls the "internet of senses," where devices can feed information to us through all of our five senses. Breakthrough technologies such as the Teslasuit use haptic feedback to greatly increase the feeling of presence in virtual spaces, and is used by NASA to train astronauts. Other innovators in this field are working on including our sense of smell, by dispensing fragrances from headset attachments.
Another interesting change related to this field that's been predicted is the rise in the value put on virtual commodities and status versus material goods. Children these days are just as likely to talk boastfully about a rare Fortnite skin, Rocket League car, or Roblox pet as they would about any physical product or status symbol. "If you look at young millionaires they're already driven by virtual status – who has the best status in esports, the number of followers … this trend will be accelerated as we move into the virtual experience economy", Sethi predicts.
5. Autonomous Commerce
Two massive changes to the way we live our lives due to the pandemic have been a big acceleration in the uptake of online retail, and a move away from cash towards contactless payment methods. Cashiers were already being replaced by self-checkouts at a rapid pace pre-2020. But the pickup in speed this year brings us to a point where KFC is operating fully autonomous mobile food trucks in Shanghai. The trucks pilot themselves to customers and serve up socially-distanced meals with no human involvement.
The rush to keep up with changing consumer behavior has also sped up the adoption of cash-free and contactless retail, particularly in emerging markets where cash has traditionally been king. Financial services businesses tapping into technology like 5G networking and AI-powered fraud detection tools are responding to new expectations from customers in this field and, if they are able to predict that behavior accurately, are likely to see strong growth in coming years.
Investing in innovation
Remaining on the cutting-edge of these trends means investing strategically in new ideas and innovation. So we also talked about Ericsson's Startup 5G program, which Pandrea heads up. Here the business looks to be at the head of the pack when it comes to creating the $31 trillion in revenue that it predicts will be generated by 5G platforms and services before 2030.
Pandrea tells me that it is expected that a lot of this will come from services that telcos can bundle with their 5G offerings to help make their customers' lives better. One of the star players is XR Space, which is building a social VR platform using its own hardware that could effectively allow workers to take their office (and entertainment world) with them anywhere they go.
Another is London-based Inception XR, that enables AR experiences to be created from books to help create more immersion and gamification in children's education.
And a third that Pandrea recommends keeping an eye on for a glimpse of the future is PlaySight. It uses AI-powered 360-degree 8k cameras at sports or entertainment events, capable of capturing the action in greater detail than ever before. That data can then be delivered to an audience in any number of ways, including putting them inside VR experiences that let them view from any angle as well as pause and rewind what they are seeing.
Underlying technologies
Clearly, we can see the common threads of broader tech trends that run through these very relevant trends Ericsson is identifying today. AI technologies, as well as extended reality (XR), which includes VR, AR, and mixed reality (MR), are behind the tools that secure our networks, enable us to work efficiently from anywhere, receive remote healthcare, create immersive experiences and conduct autonomous commerce. High-speed networking is essential to every one of them too, and the quantum leap in upload and download speeds of 5G is necessary to make them all possible.
And it's certainly also true that much of the technological progress that is driving real change in business, commerce, society and entertainment has happened in response to the dark times we are living through. But as we start to cautiously look ahead to hopefully brighter days, these trends will go on to play a part in building a safer, smarter and more convenient future. 

To learn more about any of the trends we've covered here, you can watch our conversation in full here. And you can also take part in the Ericsson Unboxed virtual event that will take place on Wednesday, December 9th. Register or find out more here.

Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 08/12/2020 03:22:20
These videos explain new technology adoption.

What's the S adoption curve mean for disruptive technology like Tesla? What about a double S curve?!
Quote
Wherein Dr. Know-it-all explains what an "S" adoption curve is, how it has functioned historically for technology like automobiles/cars, the internet, cell phones, and even smart phones. And how it matters a great deal for Tesla and other EV companies who are currently disrupting internal combustion engine (ICE) car manufacturers. Also, what happens when the EV adoption curve lines up with the full self driving (FSD) adoption curve?? Watch and find out!
Quote
By the by, as folks have pointed out, and I probably should've noted in the video itself, Tony Seba has been talking about "the tipping point" for years. While I was inspired to work up this video from a Patreon patron, and I don't closely follow Seba, I should have acknowledged that a lot of this is derived from Tony's brilliant ideas over the years. One such video is here:
Tony Seba's conclusion at the end of his video, that technological disruption happens mainly for economic reasons and not necessarily due to government interference, is aligned with my idea that efficiency is a universal instrumental goal.
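The S adoption curve the video describes is just the logistic function. Here is a toy sketch in Python (the parameters t0 and k are arbitrary illustrations, not fitted to any real adoption data):

```python
import math

def adoption(t, t0=0.0, k=1.0):
    """Logistic S-curve: fraction of a market that has adopted at time t.
    t0 is the tipping point (50% adoption); k sets how steep the ramp is."""
    return 1.0 / (1.0 + math.exp(-k * (t - t0)))

# Slow start, explosive middle, saturation at the end:
early, middle, late = adoption(-5), adoption(0), adoption(5)
```

A "double S curve" in this picture would simply be two such curves, e.g. EV adoption and FSD adoption, whose steep sections happen to overlap in time.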
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 12/12/2020 04:16:36
Currently, one of the most rapid adoptions of some form of virtual universe is in the field of self-driving cars. These videos explain it well.

Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 12/12/2020 04:21:23
Tesla's Dojo is clearly aligned with the goal stated in this thread. The next step is clearly to generalize it so it can be applied to more kinds of problems.
An earlier effort that I've tried was Microsoft Flight Simulator. Perhaps it was even used by the 9/11 perpetrators, which is why I think the morality problem of AI users needs to be solved objectively, as I discuss in another thread.
With more powerful AI, and more accurate and precise virtual universe, the user's goal can be achieved more easily, including harmful ones. A universal terminal goal is then necessary to distinguish between good and bad goals or intentions.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 12/12/2020 06:03:38
From this short video we can infer that an accurate virtual universe can increase efficiency and reduce cost.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 15/12/2020 06:41:00
I'd like to share a report from a software company which converges with my idea presented here. Extrapolated further, we will eventually have to deal with a universal terminal goal, which I discussed in another thread.
 
Top 8 trends shaping digital transformation in 2021
Quote
IT’s role is more critical than ever in a world that’s increasingly dependent on digital. Organizations
are under increasing pressure to stay competitive and create connected experiences. According to our
Connectivity benchmark report, IT projects are projected to grow by 40%; and 82% of businesses are now
holding their IT teams accountable for delivering connected customer experiences.
To meet these rising demands, organizations are accelerating their digital transformation — which can be
defined as the wholesale move to thinking about how to digitize every part of the business, as every part
of the business now needs technology to operate. In order to drive scale and efficiency, IT must rethink its
operating model to deliver self-serve capabilities and enable innovation across the enterprise.
In this report, we will highlight some of the top trends facing CIOs, IT leaders, and organizations in their digital
transformation journey, sourcing data from both MuleSoft proprietary research and third-party findings.
Quote
The future of automation: declarative programming
Uri Sarid,
CTO, MuleSoft
“The mounting complexity brought on by an explosion
of co-dependent systems, dynamic data, and rising
expectations demands a new approach to software. More
is expected for software to just work automatically, and
more of us expect automation of our digital life and work.
In 2021, we’ll see more and more systems be intent-based,
and see a new programming model take hold: a declarative
one. In this model, we declare an intent — a desired goal or
end state — and the software systems connected via APIs
in an application network autonomously figure out how to
simply make it so.”
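The intent-based model Sarid describes can be illustrated with a toy "reconciler" (a hypothetical sketch of my own, not MuleSoft's product): we declare a desired end state, and the code figures out which actions close the gap.

```python
def reconcile(current, desired):
    """Given the current state and a declared intent (the desired state),
    return the actions needed to make the intent so."""
    actions = []
    for name, want in desired.items():
        have = current.get(name, 0)
        if have < want:
            actions.append(("start", name, want - have))
        elif have > want:
            actions.append(("stop", name, have - want))
    for name, have in current.items():
        if name not in desired:
            actions.append(("stop", name, have))
    return actions

# We declare *what* we want, never *how* to get there:
desired = {"web": 3, "worker": 2}
current = {"web": 1, "db": 1}
plan = reconcile(current, desired)
```

The caller never writes the step-by-step procedure; the system derives it from the declared goal, which is the essence of the declarative model.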

Quote
2021 will be the year that data separates organizations
from their competitors... and customers
Lindsey Irvine,
CMO, MuleSoft
“The reality is that the majority of businesses today, across all industries,
aren’t able to deliver truly connected experiences for their customers,
partners, and employees — and that’s because delivering connected
experiences requires a lot of data, which lives in an average of 900
different systems and applications across the enterprise. Integrating and
unifying data across these systems is critical to create a single view of the
customer and achieve true digital transformation.
“It’s also the number one reason digital transformation initiatives fail. As
the amount of systems and applications continue to grow exponentially,
teams realize that key to their success — and their organization’s success —
is unlocking the data, wherever it exists, in a way that helps them deliver
value faster.”
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 17/12/2020 03:09:19
Building an accurate and precise virtual universe requires a sound and robust scientific method.

https://physicsworld.com/a/madness-in-the-method-why-your-notions-of-how-science-works-are-probably-wrong/

Quote
You know what the scientific method is until you try to define it: it’s a set of rules that scientists adopt to obtain a special kind of knowledge. The list is orderly, teachable and straightforward, at least in principle. But once you start spelling out the rules, you realize that they really don’t capture how scientists work, which is a lot messier. In fact, the rules exclude much of what you’d call science, and includes even more of what you don’t. You even begin to wonder why anyone thought it necessary to specify a “scientific method” at all.

In his new book The Scientific Method: an Evolution of Thinking from Darwin to Dewey, the University of Michigan historian Henry Cowles explains why some people thought it necessary to define “scientific method” in the first place. Once upon a time, he writes, science meant something like knowledge itself – the facts we discover about the world rather than the sometimes unruly way we got them. Over time, however, science came to mean a particular stepwise way that we obtain those facts independent of the humans who follow the method, and independent of the facts themselves.
Quote
Just as nature takes alternative forms of life and selects among them, Darwin argued, so scientists take hypotheses and choose the most robust. Nature has its own “method”, and humans acquire knowledge in an analogous way. Darwin’s scientific work on living creatures is indeed rigorous, as I think contemporary readers will agree, but in the lens of our notions of scientific method it was hopelessly anecdotal, psychological and disorganized. He was, after all, less focused on justifying his beliefs than on understanding nature.
Quote
Following Darwin, the American “pragmatists” – 19th-century philosophers such as Charles Peirce and William James – developed more refined accounts of the scientific method that meshed with their philosophical concerns. For Peirce and James, beliefs were not mental judgements or acts of faith, but habits that individuals develop through long experience. Beliefs are principles of action that are constantly tested against the world, reshaped and tested again, in an endless process. The scientific method is simply a careful characterization of this process.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 18/12/2020 12:52:41
Here is another video describing the latest progress in building a virtual universe. This time it's about the microscopic universe, which is nevertheless extremely important for living organisms.
Quote
This is Biology's AlexNet moment! DeepMind solves a 50-year old problem in Protein Folding Prediction. AlphaFold 2 improves over DeepMind's 2018 AlphaFold system with a new architecture and massively outperforms all competition. In this Video, we take a look at how AlphaFold 1 works and what we can gather about AlphaFold 2 from the little information that's out there.

OUTLINE:
0:00 - Intro & Overview
3:10 - Proteins & Protein Folding
14:20 - AlphaFold 1 Overview
18:20 - Optimizing a differentiable geometric model at inference
25:40 - Learning the Spatial Graph Distance Matrix
31:20 - Multiple Sequence Alignment of Evolutionarily Similar Sequences
39:40 - Distance Matrix Output Results
43:45 - Guessing AlphaFold 2 (it's Transformers)
53:30 - Conclusion & Comments

AlphaFold 2 Blog: https://deepmind.com/blog/article/alp...
AlphaFold 1 Blog: https://deepmind.com/blog/article/Alp...
AlphaFold 1 Paper: https://www.nature.com/articles/s4158...
MSA Reference: https://arxiv.org/abs/1211.1281
CASP14 Challenge: https://predictioncenter.org/casp14/i...
CASP14 Result Bar Chart: https://www.predictioncenter.org/casp...

Paper Title: High Accuracy Protein Structure Prediction Using Deep Learning

Abstract:
Proteins are essential to life, supporting practically all its functions. They are large complex molecules, made up of chains of amino acids, and what a protein does largely depends on its unique 3D structure. Figuring out what shapes proteins fold into is known as the “protein folding problem”, and has stood as a grand challenge in biology for the past 50 years. In a major scientific advance, the latest version of our AI system AlphaFold has been recognised as a solution to this grand challenge by the organisers of the biennial Critical Assessment of protein Structure Prediction (CASP). This breakthrough demonstrates the impact AI can have on scientific discovery and its potential to dramatically accelerate progress in some of the most fundamental fields that explain and shape our world.

Authors: John Jumper, Richard Evans, Alexander Pritzel, Tim Green, Michael Figurnov, Kathryn Tunyasuvunakool, Olaf Ronneberger, Russ Bates, Augustin Žídek, Alex Bridgland, Clemens Meyer, Simon A A Kohl, Anna Potapenko, Andrew J Ballard, Andrew Cowie, Bernardino Romera-Paredes, Stanislav Nikolov, Rishub Jain, Jonas Adler, Trevor Back, Stig Petersen, David Reiman, Martin Steinegger, Michalina Pacholska, David Silver, Oriol Vinyals, Andrew W Senior, Koray Kavukcuoglu, Pushmeet Kohli, Demis Hassabis.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 25/12/2020 08:30:14
Here is the newest progress toward generalization of artificial intelligence by DeepMind.
https://www.nature.com/articles/s41586-020-03051-4
Mastering Atari, Go, chess and shogi by planning with a learned model
Quote
Constructing agents with planning capabilities has long been one of the main challenges in the pursuit of artificial intelligence. Tree-based planning methods have enjoyed huge success in challenging domains, such as chess1 and Go2, where a perfect simulator is available. However, in real-world problems, the dynamics governing the environment are often complex and unknown. Here we present the MuZero algorithm, which, by combining a tree-based search with a learned model, achieves superhuman performance in a range of challenging and visually complex domains, without any knowledge of their underlying dynamics. The MuZero algorithm learns an iterable model that produces predictions relevant to planning: the action-selection policy, the value function and the reward. When evaluated on 57 different Atari games3—the canonical video game environment for testing artificial intelligence techniques, in which model-based planning approaches have historically struggled4—the MuZero algorithm achieved state-of-the-art performance. When evaluated on Go, chess and shogi—canonical environments for high-performance planning—the MuZero algorithm matched, without any knowledge of the game dynamics, the superhuman performance of the AlphaZero algorithm5 that was supplied with the rules of the game.
Quote
MuZero is trained only on data generated by MuZero itself; no external data were used to produce the results presented in the article. Data for all figures and tables presented are available in JSON format in the Supplementary Information.
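The abstract mentions three learned components. Here is a structural toy sketch of them (my own illustration, with hand-coded arithmetic standing in for the neural networks; this is not DeepMind's code):

```python
def representation(observation):
    """h: encode a raw observation into a hidden state."""
    return sum(observation) / len(observation)

def dynamics(state, action):
    """g: predict the next hidden state and the immediate reward."""
    next_state = 0.9 * state + 0.1 * action
    reward = 1.0 if abs(next_state) < 0.5 else 0.0
    return next_state, reward

def prediction(state):
    """f: predict a value estimate for a hidden state."""
    return -abs(state)

def choose_action(observation, depth=3):
    """Plan entirely inside the learned model: MuZero's key trick is that
    this lookahead never consults the real environment's rules."""
    def search(state, d):
        if d == 0:
            return prediction(state)
        best = float("-inf")
        for a in (-1, 1):
            s2, r = dynamics(state, a)
            best = max(best, r + search(s2, d - 1))
        return best
    state = representation(observation)
    best_action, best_score = None, float("-inf")
    for a in (-1, 1):
        s2, r = dynamics(state, a)
        score = r + search(s2, depth - 1)
        if score > best_score:
            best_action, best_score = a, score
    return best_action
```

In the real system these three functions are deep networks trained jointly, and the lookahead is a Monte Carlo tree search rather than this exhaustive toy search, but the division of labour is the same.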
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 25/12/2020 09:22:45
The Great Google Crash: The World’s Dependency Revealed

We long for the day when nobody runs anything.
Todd Underwood - Google SRE
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 28/12/2020 03:56:49
Sooner or later, people will realize that we are in the process of building an accurate virtual universe. Unless, of course, we go extinct beforehand.
https://twitter.com/elonmusk/status/1343002225916841985?s=03
Quote
Vaccines are just the start. It's also capable in theory of curing almost anything. Turns medicine into a software & simulation problem.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 28/12/2020 04:02:56
Here is the article Elon Musk was tweeting about.
https://berthub.eu/articles/posts/reverse-engineering-source-code-of-the-biontech-pfizer-vaccine/
Quote
Welcome! In this post, we’ll be taking a character-by-character look at the source code of the BioNTech/Pfizer SARS-CoV-2 mRNA vaccine.
Now, these words may be somewhat jarring - the vaccine is a liquid that gets injected in your arm. How can we talk about source code?

This is a good question, so let’s start off with a small part of the very source code of the BioNTech/Pfizer vaccine, also known as BNT162b2, also known as Tozinameran also known as Comirnaty.

(https://berthub.eu/articles/bnt162b2.png)
First 500 characters of the BNT162b2 mRNA. Source: World Health Organization

The BNT162b mRNA vaccine has this digital code at its heart. It is 4284 characters long, so it would fit in a bunch of tweets. At the very beginning of the vaccine production process, someone uploaded this code to a DNA printer (yes), which then converted the bytes on disk to actual DNA molecules.
(https://berthub.eu/articles/bioxp-3200.jpg)
A Codex DNA BioXp 3200 DNA printer

Out of such a machine come tiny amounts of DNA, which after a lot of biological and chemical processing end up as RNA (more about which later) in the vaccine vial. A 30 microgram dose turns out to actually contain 30 micrograms of RNA. In addition, there is a clever lipid (fatty) packaging system that gets the mRNA into our cells.

RNA is the volatile ‘working memory’ version of DNA. DNA is like the flash drive storage of biology. DNA is very durable, internally redundant and very reliable. But much like computers do not execute code directly from a flash drive, before something happens, code gets copied to a faster, more versatile yet far more fragile system.

For computers, this is RAM, for biology it is RNA. The resemblance is striking. Unlike flash memory, RAM degrades very quickly unless lovingly tended to. The reason the Pfizer/BioNTech mRNA vaccine must be stored in the deepest of deep freezers is the same: RNA is a fragile flower.

Each RNA character weighs on the order of 0.53·10⁻²¹ grams, meaning there are 6·10¹⁶ characters in a single 30 microgram vaccine dose. Expressed in bytes, this is around 25 petabytes, although it must be said this consists of around 2000 billion repetitions of the same 4284 characters. The actual informational content of the vaccine is just over a kilobyte. SARS-CoV-2 itself weighs in at around 7.5 kilobytes.
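As a quick back-of-the-envelope check of the arithmetic in the quote:

```python
# Characters of RNA in one 30 microgram dose, at ~0.53e-21 g per character:
chars_per_dose = 30e-6 / 0.53e-21   # ~5.7e16, i.e. on the order of 6e16

# Net informational content: 4284 characters from a 4-letter alphabet,
# so 2 bits per character:
info_bytes = 4284 * 2 / 8           # ~1 kilobyte, as the article says
```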
And the summary is below.
Quote
Summarising
With this, we now know the exact mRNA contents of the BNT162b2 vaccine, and for most parts we understand why they are there:

- The CAP to make sure the RNA looks like regular mRNA
- A known successful and optimized 5’ untranslated region (UTR)
- A codon optimized signal peptide to send the Spike protein to the right place (copied 100% from the original virus)
- A codon optimized version of the original spike, with two ‘Proline’ substitutions to make sure the protein appears in the right form
- A known successful and optimized 3’ untranslated region
- A slightly mysterious poly-A tail with an unexplained ‘linker’ in there

The codon optimization adds a lot of G and C to the mRNA. Meanwhile, using Ψ (1-methyl-3’-pseudouridylyl) instead of U helps evade our immune system, so the mRNA stays around long enough so we can actually help train the immune system.
You can read the details at the link above; they are fascinating.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 02/01/2021 10:37:19
Quote
In this video Elon Musk talks about Tesla Full Self driving software remotely at a Chinese AI conference. Elon predicts that Tesla will achieve level 5 autonomy soon and sooner than people can imagine. Elon also indirectly criticizes Waymo, a googles self-driving software company. Waymo depends on LiDAR and HD maps. Most of the time, they train their self-driving software and car in simulation.

In this video he emphasizes that understanding reality is essentially a data compression process. I've mentioned this previously in this thread.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 12/01/2021 02:53:06
Here are very informative videos explaining how Tesla autopilot was developed.



Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 12/01/2021 03:30:15
Some main points I get from the videos are:
- Autopilot builds a virtual universe in its memory space to represent its surrounding environment, based on data input from its sensors.
- Modular concepts are employed to increase efficiency, so many things don't have to be started from scratch every time a new feature is added.
- Building the virtual universe is done in real time, which means a lot of new data is acquired and a lot of older data must be discarded. Therefore, to make the system work, it must compress the incoming data into meaningful and useful concepts, after filtering out noise and insignificant information.
- That data selection requires a data hierarchy, like the deep belief network I mentioned earlier. Higher-level information (beliefs) determines which data from lower-level belief nodes is kept and used, or discarded and ignored. It's similar to how the human brain works, which is why we sometimes find it hard to convince people by simply presenting facts that contradict their existing belief system, such as flat earthers, the MAGA crowd, or religious fanatics.
- The automation process is itself being automated, up through several levels of automation. We are building machines that build machines that build machines, and so on, which Ray Kurzweil calls indirection. And those machines are getting better at achieving the goals put into them. That's why it's getting more urgent for us to find a universal terminal goal, as I discuss in another thread.
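The compress-and-discard idea in these points can be sketched in miniature (my own toy example, not Tesla's code): each incoming sample updates a small running summary and is then thrown away, with implausible outliers filtered out first.

```python
def make_tracker(alpha=0.1, spike_limit=10.0):
    """Compress a stream into one number: an exponential moving average
    that rejects samples too far from the current estimate as noise."""
    state = {"estimate": None}
    def update(sample):
        est = state["estimate"]
        if est is None:
            state["estimate"] = sample          # first sample seeds the summary
        elif abs(sample - est) > spike_limit:
            pass                                 # reject implausible outliers
        else:
            state["estimate"] = (1 - alpha) * est + alpha * sample
        return state["estimate"]
    return update

track = make_tracker()
for _ in range(50):
    estimate = track(1.0)   # steady signal: summary settles at 1.0
```

The raw samples never need to be stored; only the compressed summary survives, which is the same trade-off the real-time system makes at a vastly larger scale.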
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 14/01/2021 05:59:48
Quote
Wherein Dr. Know-it-all discusses the work of Dr. Arthur Choi (UCLA) and others concerning the quest to understand how deep convolutional neural networks function. This new field, XAI, or explainable AI, uses decision trees, formal logic, and even tractable boolean circuits (simulated logic gates) to explain why machine learning using deep neural nets functions so well some of the time, but so poorly other times.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 19/01/2021 22:32:59
Last year may have severed our connections with the physical world, but in the digital realm, AI thrived. Take NeurIPS, the crown jewel of AI conferences. While lacking the usual backdrop of the dazzling mountains of British Columbia or the beaches of Barcelona, the annual AI extravaganza highlighted a slew of “big picture” problems—bias, robustness, generalization—that will encompass the field for years to come.

On the nerdier side, scientists further explored the intersection between AI and our own bodies. Core concepts in deep learning, such as backpropagation, were considered a plausible means by which our brains “assign fault” in biological networks—allowing the brain to learn. Others argued it’s high time to double-team intelligence, combining the reigning AI “golden child” method—deep learning—with other methods, such as those that guide efficient search.

Here are four areas we’re keeping our eyes on in 2021. They touch upon outstanding AI problems, such as reducing energy consumption, nixing the need for exuberant learning examples, and teaching AI some good ole’ common sense.

https://singularityhub.com/2021/01/05/2021-could-be-a-banner-year-for-ai-if-we-solve-these-4-problems/
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 23/01/2021 03:27:21
https://towardsdatascience.com/introduction-to-bayesian-inference-18e55311a261

Motivation
Imagine the following scenario: you are driving an ambulance to a hospital and have to decide between route A and B. In order to save your patient, you need to arrive in less than 15 minutes. If we estimate that route A takes 12 minutes and route B takes 10 minutes, which would you choose? Route B seems faster, so why not?
The information provided so far consisted of point estimates of routes A and B. Now, let’s add information about the uncertainty of each prediction: route A takes 12 min ±1min, while route B takes 10 min ±6min.
Now it seems like the prediction of route B is significantly more uncertain, eventually risking taking longer than the 15 minute limit. Adding information about uncertainty here can make us change our decision from taking route B to taking route A.
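If we treat the ± figures as standard deviations of normally distributed travel times (my assumption for illustration; the article does not specify the distribution), the effect on the decision is easy to compute:

```python
import math

def prob_exceeds(mean, std, limit):
    """P(X > limit) for X ~ Normal(mean, std), via the error function."""
    z = (limit - mean) / std
    return 0.5 * (1.0 - math.erf(z / math.sqrt(2.0)))

p_late_a = prob_exceeds(12, 1, 15)   # route A: 12 min +/- 1 min
p_late_b = prob_exceeds(10, 6, 15)   # route B: 10 min +/- 6 min
```

Route A misses the 15-minute limit with probability of roughly 0.1%, route B with roughly 20%, so the "slower" route is the safer choice.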

More broadly, consider the following cases:
We want to estimate a quantity which does not have a fixed value — instead, it can change between different ones
Regardless of the true value being fixed or not, we are interested in knowing the uncertainty of our estimation
The ambulance example was intended to illustrate the second case. For the first case, we can have a quick look at the work of Nobel Prize winning economist Christopher Sims. I will simply cite his student Toshiaki Watanabe:
I once asked Chris why he favoured the Bayesian approach. He replied by pointing to the Lucas critique, which argues that when government and central bank policies change, so do the model parameters, so that they should be regarded not as constants but as stochastic variables.
For both cases, Bayesian inference can be used to model our variables of interest as a whole distribution, instead of a unique value or point estimate.

Judea Pearl describes it this way, in The Book of Why [2]:
(…) Bayes’s rule is formally an elementary consequence of his definition of conditional probability. But epistemologically, it is far from elementary. It acts, in fact, as a normative rule for updating beliefs in response to evidence.
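Bayes' rule as a belief-updating procedure fits in a few lines; here with made-up numbers for a rare condition and an imperfect test:

```python
def update_belief(prior, p_evidence_given_h, p_evidence_given_not_h):
    """Bayes' rule: revise P(hypothesis) after seeing the evidence."""
    p_evidence = (prior * p_evidence_given_h
                  + (1 - prior) * p_evidence_given_not_h)
    return prior * p_evidence_given_h / p_evidence

# Hypothetical numbers: 1% prior, 90% true-positive rate, 5% false-positive rate.
posterior = update_belief(0.01, 0.90, 0.05)
```

A positive result raises the belief from 1% to only about 15%, which is exactly the kind of normative update the quote describes.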
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 24/01/2021 07:45:31
“Our approach is pretty much the exact opposite of the traditional pharmaceutical approach. With our approach, there is no drug, no poison at all – just a little program written in DNA. We’ve effectively taken targeting out of the realm of chemistry and brought it into the realm of information.”
Matthew Scholz, Co-founder & CEO, Oisín Biotechnologies

https://www.longevity.technology/promising-restorative-therapy-could-potentially-be-available-within-5-years/
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 25/01/2021 06:57:51
The Coronavirus Is Mutating. Here’s What We Know | WSJ

Another example of how an accurate virtual universe can help accelerate research through trial and error by saving resources, especially time.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 27/01/2021 07:17:28
The Promise (And Realities) Of AI / ML
https://energycentral.com/c/iu/promise-and-realities-ai-ml
Quote
Artificial Intelligence has been getting a bad rap of late, with numerous opinion pieces and articles describing how it has struggled to live up to the hype. Arguments have centered around computational cost, lack of high-quality data, and the difficulty in getting past the high nineties in percent accuracy, all resulting in the continued need to have humans in the loop.
Quote
AI & ML are simply tools for building complex (and sometimes non-linear) models that consider large amounts of information. They are most potent in applications where their pattern finding power significantly exceeds human capability. If we adjust our attitude and expectations, we can leverage their power to bring about all sorts of tangible outcomes for humanity.

With this type of re-calibration, our mission should be to use AI to help human decision makers, rather than replace them. Machine learning is now being used to build weather and climate impact models that help infrastructure managers respond with accuracy and allocate their resources efficiently. While these models do not perfectly match the ground truth, they are much more accurate and precise than simple heuristics, and can save millions of dollars through more efficient capital allocation.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 27/01/2021 07:26:05
https://spectrum.ieee.org/computing/software/its-too-easy-to-hide-bias-in-deeplearning-systems
Artificial intelligence makes it hard to tell when decision-making is biased
Quote
When advertisers create a Facebook ad, they target the people they want to view it by selecting from an expansive list of interests. “You can select people who are interested in football, and they live in Cote d’Azur, and they were at this college, and they also like drinking,” Goga says. But the explanations Facebook provides typically mention only one interest, and the most general one at that. Mislove assumes that’s because Facebook doesn’t want to appear creepy; the company declined to comment for this article, so it’s hard to be sure.

Google and Twitter ads include similar explanations. All three platforms are probably hoping to allay users’ suspicions about the mysterious advertising algorithms they use with this gesture toward transparency, while keeping any unsettling practices obscured. Or maybe they genuinely want to give users a modicum of control over the ads they see—the explanation pop-ups offer a chance for users to alter their list of interests. In any case, these features are probably the most widely deployed example of algorithms being used to explain other algorithms. In this case, what’s being revealed is why the algorithm chose a particular ad to show you.

The world around us is increasingly choreographed by such algorithms. They decide what advertisements, news, and movie recommendations you see. They also help to make far more weighty decisions, determining who gets loans, jobs, or parole. And in the not-too-distant future, they may decide what medical treatment you’ll receive or how your car will navigate the streets. People want explanations for those decisions. Transparency allows developers to debug their software, end users to trust it, and regulators to make sure it’s safe and fair.

The problem is that these automated systems are becoming so frighteningly complex that it’s often very difficult to figure out why they make certain decisions. So researchers have developed algorithms for understanding these decision-making automatons, forming the new subfield of explainable AI.
Quote
In 2017, the Defense Advanced Research Projects Agency launched a US $75 million XAI project. Since then, new laws have sprung up requiring such transparency, most notably Europe’s General Data Protection Regulation, which stipulates that when organizations use personal data for “automated decision-making, including profiling,” they must disclose “meaningful information about the logic involved.” One motivation for such rules is a concern that black-box systems may be hiding evidence of illegal, or perhaps just unsavory, discriminatory practices.
Quote
As a result, XAI systems are much in demand. And better policing of decision-making algorithms would certainly be a good thing. But even if explanations are widely required, some researchers worry that systems for automated decision-making may appear to be fair when they really aren’t fair at all.

For example, a system that judges loan applications might tell you that it based its decision on your income and age, when in fact it was your race that mattered most. Such bias might arise because it reflects correlations in the data that was used to train the AI, but it must be excluded from decision-making algorithms lest they act to perpetuate unfair practices of the past.

The challenge is how to root out such unfair forms of discrimination. While it’s easy to exclude information about an applicant’s race or gender or religion, that’s often not enough. Research has shown, for example, that job applicants with names that are common among African Americans receive fewer callbacks, even when they possess the same qualifications as someone else.

A computerized résumé-screening tool might well exhibit the same kind of racial bias, even if applicants were never presented with checkboxes for race. The system may still be racially biased; it just won’t “admit” to how it really works, and will instead provide an explanation that’s more palatable.

Regardless of whether the algorithm explicitly uses protected characteristics such as race, explanations can be specifically engineered to hide problematic forms of discrimination. Some AI researchers describe this kind of duplicity as a form of “fairwashing”: presenting a possibly unfair algorithm as being fair.

 Whether deceptive systems of this kind are common or rare is unclear. They could be out there already but well hidden, or maybe the incentive for using them just isn’t great enough. No one really knows. What’s apparent, though, is that the application of more and more sophisticated forms of AI is going to make it increasingly hard to identify such threats.
Quote
No company would want to be perceived as perpetuating antiquated thinking or deep-rooted societal injustices. So a company might hesitate to share exactly how its decision-making algorithm works to avoid being accused of unjust discrimination. Companies might also hesitate to provide explanations for decisions rendered because that information would make it easier for outsiders to reverse engineer their proprietary systems. Cynthia Rudin, a computer scientist at Duke University, in Durham, N.C., who studies interpretable machine learning, says that the “explanations for credit scores are ridiculously unsatisfactory.” She believes that credit-rating agencies obscure their rationales intentionally. “They’re not going to tell you exactly how they compute that thing. That’s their secret sauce, right?”

And there’s another reason to be cagey. Once people have reverse engineered your decision-making system, they can more easily game it. Indeed, a huge industry called “search engine optimization” has been built around doing just that: altering Web pages superficially so that they rise to the top of search rankings.
What I see from the trend is that information technology is converging toward the building of a virtual universe. Competition to become the first/biggest/best AI system builder for selfish motivations could be redirected into a more collaborative effort by promoting a universal terminal goal.
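The loan example in the article is easy to reproduce in miniature (entirely synthetic data of my own): drop the protected attribute, keep a correlated proxy, and the "fair-looking" model still makes almost the same biased decisions.

```python
import random
random.seed(0)

# Synthetic population: 'group' is protected; 'zipcode' correlates with it.
people = []
for _ in range(1000):
    group = random.choice(["A", "B"])
    if random.random() < 0.9:  # 90% of each group lives in its typical zips
        zipcode = random.randrange(0, 5) if group == "A" else random.randrange(5, 10)
    else:
        zipcode = random.randrange(5, 10) if group == "A" else random.randrange(0, 5)
    people.append((group, zipcode))

biased_decision = [g == "A" for g, _ in people]   # uses the protected attribute
proxy_decision = [z < 5 for _, z in people]       # never sees it

agreement = sum(b == p for b, p in zip(biased_decision, proxy_decision)) / len(people)
```

The proxy rule agrees with the openly biased rule about 90% of the time, even though its "explanation" would mention only zip codes; that is fairwashing in one toy picture.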
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 28/01/2021 11:52:07
'Liquid' machine-learning system adapts to changing conditions

Quote
MIT researchers have developed a type of neural network that learns on the job, not just during its training phase. These flexible algorithms, dubbed "liquid" networks, change their underlying equations to continuously adapt to new data inputs. The advance could aid decision making based on data streams that change over time, including those involved in medical diagnosis and autonomous driving.   
https://techxplore.com/news/2021-01-liquid-machine-learning-conditions.amp?__twitter_impression=true

Quote
  Hasani designed a neural network that can adapt to the variability of real-world systems. Neural networks are algorithms that recognize patterns by analyzing a set of "training" examples. They're often said to mimic the processing pathways of the brain—Hasani drew inspiration directly from the microscopic nematode, C. elegans. "It only has 302 neurons in its nervous system," he says, "yet it can generate unexpectedly complex dynamics."

Hasani coded his neural network with careful attention to how C. elegans neurons activate and communicate with each other via electrical impulses. In the equations he used to structure his neural network, he allowed the parameters to change over time based on the results of a nested set of differential equations. 

In the future, we will have AI that keeps learning from real-world experience, not just during a training phase. AI systems are becoming more humanlike.
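The "nested differential equations" idea can be sketched with a toy neuron whose effective time constant changes with its input. This is only a minimal illustration of the general principle, not the MIT team's actual model; the update rule, constants, and Euler integration are my own assumptions.

```python
import math

def liquid_neuron_step(x, u, w=2.0, tau_base=1.0, dt=0.01):
    """One Euler step of a toy 'liquid' neuron: the time constant tau
    is not fixed but depends on the current input drive."""
    f = math.tanh(w * u)             # input-dependent drive
    tau = tau_base / (1.0 + abs(f))  # effective time constant varies with input
    return x + dt * (-x + f) / tau

# Feed a step input that flips sign halfway; the state tracks each regime.
x = 0.0
for t in range(1000):
    u = 1.0 if t < 500 else -1.0
    x = liquid_neuron_step(x, u)
# x ends near tanh(-2.0), the equilibrium for the second input regime
```

The point is that the equation governing the neuron shifts with the incoming data stream, rather than staying frozen after training.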
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 29/01/2021 12:55:51
Quote
   Risky behaviors such as smoking, alcohol and drug use, speeding, or frequently changing sexual partners result in enormous health and economic consequences and lead to associated costs of an estimated 600 billion dollars a year in the US alone. In order to define measures that could reduce these costs, a better understanding of the basis and mechanisms of risk-taking is needed.
Quote
Specific characteristics were found in several areas of the brain: In the hypothalamus, where the release of hormones (such as orexin, oxytocin and dopamine) controls the vegetative functions of the body; in the hippocampus, which is essential for storing memories; in the dorsolateral prefrontal cortex, which plays an important role in self-control and cognitive deliberation; in the amygdala, which controls, among other things, the emotional reaction to danger; and in the ventral striatum, which is activated when processing rewards.   
Quote
  The researchers were surprised by the measurable anatomical differences they discovered in the cerebellum, an area that is not usually included in studies of risk behaviors on the assumption that it is mainly involved in fine motor functions. In recent years, however, significant doubts have been raised about this hypothesis – doubts which are now backed by the current study. 
Quote
  “It appears that the cerebellum does after all play an important role in decision-making processes such as risk-taking behavior,” confirms Aydogan. “In the brains of more risk-tolerant individuals, we found less gray matter in these areas. How this gray matter affects behavior, however, still needs to be studied further.” 
https://neurosciencenews.com/brain-risky-behavior-17633/

Risk-taking is an important factor in decision-making, which we need to understand deeply so it can be simulated in a virtual universe.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 30/01/2021 15:28:03
Someone has come to a conclusion similar to what I posted here. This GME "riot" should be a wake-up call.
Quote
  Joscha Bach (@Plinz) tweeted at 11:20 AM on Fri, Jan 29, 2021:
In the long run, machine learning and a publicly accessible stock market cannot coexist
(https://twitter.com/Plinz/status/1355007909281718274?s=03) 

Quote
  Joscha Bach (@Plinz) tweeted at 7:54 PM on Fri, Jan 29, 2021:
The financial system is software executed by humans, full of holes and imperfections, and very hard to update and maintain. Using substantial computational resources to discover and exploit its imperfections will eventually nuke it into oblivion
(https://twitter.com/Plinz/status/1355137134789681158?s=03) 
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 30/01/2021 23:37:13
A financial system should be a tool for redistributing resources to optimally achieve the common goals of society. It's akin to the circulatory system in multicellular organisms.
While the current financial system enables innovators to thrive by convincing people to contribute to their inventions and profit from them, it also enables other financial actors to gamble with other people's money. They take the profits when they win, but get away or get bailed out when they lose.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 30/01/2021 23:44:00
A free market is supposed to be a self-organizing system. But if some parts of the system aggregate and accumulate enough power to manipulate or bypass its self-regulatory functions, they can accumulate more resources for themselves while depriving and sacrificing others, causing the entire structure to collapse. It's akin to the behavior of cancerous cells.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 31/01/2021 03:28:17
https://www.engadget.com/autox-fully-driverless-robotaxi-china-145126521.html
Quote
  Driverless robotaxis are now available for public rides in China
AutoX is the first in the country to offer rides without safety drivers. 
Quote
  After lots of tests, it’s now possible to hail a truly driverless robotaxi in China. AutoX has become the first in the country to offer public rides in autonomous vehicles without safety drivers. You’ll need to sign up for a pilot program in Shenzhen and use membership credits, but after that you can hop in a modified Chrysler Pacifica to travel across town without seeing another human being. 

Quote
  Fully driverless robotaxis are still very rare anywhere in the world, and it’ll take a combination of refined technology and updated regulation before they’re relatively commonplace. This is an important step in that direction, though. They might get a boost in the current climate, though. The COVID-19 pandemic has added risk to conventional ride hailing for both drivers and passengers, and removing drivers could make this one of the safest travel options for people without cars of their own. 
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 31/01/2021 05:56:09
https://bdtechtalks.com/2020/09/21/gpt-3-economy-business-model/

Quote
  In the blog post where it declared the GPT-3 API, OpenAI stated three key reasons for not open-sourcing the deep learning model. The first was, obviously, to cover the costs of their ongoing research. Second, but equally important, is running GPT-3 requires vast compute resources that many companies don’t have. Third (which I won’t get into in this post) is to prevent misuse and harmful applications.

Based on this information, we know that to make GPT-3 profitable, OpenAI will need to break even on the costs of research and development, and also find a business model that turns in profits on the expenses of running the model. 

Quote
  In general, machine learning algorithms can perform a single, narrowly defined task. This is especially true for natural language processing, which is much more complicated than other fields of artificial intelligence. To repurpose a machine learning model for a new task, you must retrain it from scratch or fine-tune it with new examples, a process known as transfer learning.

But contrary to other machine learning models, GPT-3 is capable of zero-shot learning, which means it can perform many new tasks without the need for new training. For many other tasks, it can perform one-shot learning: Give it one example and it will be able to expand to other similar tasks. Theoretically, this makes it ideal as a general-purpose AI technology that can support many new applications.

A significant portion of the research budget goes to the stellar salaries OpenAI has to pay the highly coveted AI talent it has hired for the task. I wonder how long it will take for an AGI to surpass the capabilities of its own creators, so that human AI talent is no longer needed. OpenAI seems to face a dilemma: if it doesn't pursue this, its competitors are ready to surpass it, which would make its past and current efforts meaningless.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 03/02/2021 11:21:29
https://venturebeat.com/2021/01/28/ai-holds-the-key-to-even-better-ai/
Quote
  For all the talk about how artificial intelligence technology is transforming entire industries, the reality is that most businesses struggle to obtain real value from AI. 65% of organizations that have invested in AI in recent years haven’t yet seen any tangible gains from those investments, according to a 2019 survey conducted by MIT Sloan Management Review and the Boston Consulting Group. And a quarter of businesses implementing AI projects see at least 50% of those projects fail, with “lack of skilled staff” and “unrealistic expectations” among the top reasons for failure, per research from IDC. 
Quote
  Encouragingly, AI is already being leveraged to simplify other tech-related tasks, like writing and reviewing code (which itself is built by AI). The next phase of the deep learning revolution will involve similar complementary tools. Over the next five years, expect to see such capabilities slowly become available commercially to the public. 
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 04/02/2021 06:35:31
https://www.linkedin.com/pulse/fake-news-rampant-here-how-artificial-intelligence-ai-bernard-marr/
Quote
One of the latest collaborations between artificial intelligence and humans is further evidence of how machines and humans can create better results when working together. Artificial intelligence (AI) is now on the job to combat the spread of misinformation on the internet and social platforms thanks to the efforts of start-ups such as Logically. While AI is able to analyze the enormous amounts of info generated daily on a scale that's impossible for humans, ultimately, humans need to be part of the process of fact-checking to ensure credibility. As Lyric Jain, founder and CEO of Logically, said, toxic news travels faster than the truth. Our world desperately needs a way to discern truth from fiction in our news and public, political and economic discussions, and artificial intelligence will help us do that.
Quote
The Fake News “Infodemic”

People are inundated with info every single day. Each minute, there are 98,000 tweets, 160 million emails sent, and 600 videos uploaded to YouTube. Politicians. Marketers. News outlets. Plus, there are countless individuals spewing their opinions since self-publishing is so easy. People crave a way to sort through all the information to find valuable nuggets they can use in their own life. They want facts, and companies are starting to respond often by using machine learning and AI tools.
Quote
As the pursuit of fighting fake news becomes more sophisticated, technology leaders will continue to work to find even better ways to sort out fact from fiction as well as refine the AI tools that can help fight disinformation. Deep learning can help automate some of the steps in fake news detection, according to a team of researchers at DarwinAI and Canada's University of Waterloo. They are segmenting fact-checking into various sub-tasks, including stance detection where the system is given a claim on a news story plus other stories on the same subject to determine if those other stories support or refute the claim in the original piece.
As long as we believe there is an objective reality, we will need reliable information sources that reflect it accurately, or are at least consistent with each other. This trend seems to keep bringing us closer to a virtual universe.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 05/02/2021 06:40:03
This is why we need the ability to distinguish objective reality from alternative realities.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 09/02/2021 12:33:13
To simulate the universe, we must simulate consciousness as well, and to do that we first need to understand it.

A new theory of brain organization takes aim at the mystery of consciousness

https://neurosciencenews.com/brain-organization-consciousness-15132/
Quote
Consciousness is one of the brain’s most enigmatic mysteries. A new theory, inspired by thermodynamics, takes a high-level perspective of how neural networks in the brain transiently organize to give rise to memories, thought and consciousness.

The key to awareness is the ebb and flow of energy: when neurons functionally tag together to support information processing, their activity patterns synchronize like ocean waves. This process is inherently guided by thermodynamic principles, which — like an invisible hand — promotes neural connections that favors conscious awareness. Disruptions in this process breaks down communication between neural networks, giving rise to neurological disorders such as epilepsy, autism or schizophrenia.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 11/02/2021 10:59:28
https://www.quantamagazine.org/brains-background-noise-may-hold-clues-to-persistent-mysteries-20210208/

Quote

Brain’s ‘Background Noise’ May Hold Clues to Persistent Mysteries
By Elizabeth Landau, February 8, 2021

By digging out signals hidden within the brain’s electrical chatter, scientists are getting new insights into sleep, aging and more.
At a sleep research symposium in January 2020, Janna Lendner presented findings that hint at a way to look at people’s brain activity for signs of the boundary between wakefulness and unconsciousness. For patients who are comatose or under anesthesia, it can be all-important that physicians make that distinction correctly. Doing so is trickier than it might sound, however, because when someone is in the dreaming state of rapid-eye movement (REM) sleep, their brain produces the same familiar, smoothly oscillating brain waves as when they are awake.

Lendner argued, though, that the answer isn’t in the regular brain waves, but rather in an aspect of neural activity that scientists might normally ignore: the erratic background noise.

Some researchers seemed incredulous. “They said, ‘So, you’re telling me that there’s, like, information in the noise?’” said Lendner, an anesthesiology resident at the University Medical Center in Tübingen, Germany, who recently completed a postdoc at the University of California, Berkeley. “I said, ‘Yes. Someone’s noise is another one’s signal.’”
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 15/02/2021 12:53:10
Mind Reading For Brain-To-Text Communication!
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 25/02/2021 05:08:57
Artificial Neural Nets Finally Yield Clues to How Brains Learn
https://www.quantamagazine.org/artificial-neural-nets-finally-yield-clues-to-how-brains-learn-20210218/
Quote
The learning algorithm that enables the runaway success of deep neural networks doesn’t work in biological brains, but researchers are finding alternatives that could.

Quote
Today, deep nets rule AI in part because of an algorithm called backpropagation, or backprop. The algorithm enables deep nets to learn from data, endowing them with the ability to classify images, recognize speech, translate languages, make sense of road conditions for self-driving cars, and accomplish a host of other tasks.

But real brains are highly unlikely to be relying on the same algorithm. It’s not just that “brains are able to generalize and learn better and faster than the state-of-the-art AI systems,” said Yoshua Bengio, a computer scientist at the University of Montreal, the scientific director of the Quebec Artificial Intelligence Institute and one of the organizers of the 2007 workshop. For a variety of reasons, backpropagation isn’t compatible with the brain’s anatomy and physiology, particularly in the cortex.
(https://d2r55xnwy6nx47.cloudfront.net/uploads/2021/02/Simulating-a-Neuron.svg)
Quote
However, it was obvious even in the 1960s that solving more complicated problems required one or more “hidden” layers of neurons sandwiched between the input and output layers. No one knew how to effectively train artificial neural networks with hidden layers — until 1986, when Hinton, the late David Rumelhart and Ronald Williams (now of Northeastern University) published the backpropagation algorithm.

The algorithm works in two phases. In the “forward” phase, when the network is given an input, it infers an output, which may be erroneous. The second “backward” phase updates the synaptic weights, bringing the output more in line with a target value.
To understand this process, think of a “loss function” that describes the difference between the inferred and desired outputs as a landscape of hills and valleys. When a network makes an inference with a given set of synaptic weights, it ends up at some location on the loss landscape. To learn, it needs to move down the slope, or gradient, toward some valley, where the loss is minimized to the extent possible. Backpropagation is a method for updating the synaptic weights to descend that gradient.

In essence, the algorithm’s backward phase calculates how much each neuron’s synaptic weights contribute to the error and then updates those weights to improve the network’s performance. This calculation proceeds sequentially backward from the output layer to the input layer, hence the name backpropagation. Do this over and over for sets of inputs and desired outputs, and you’ll eventually arrive at an acceptable set of weights for the entire neural network.
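The two-phase procedure described above can be written out as a minimal pure-Python network trained on XOR. The architecture (four hidden units), learning rate, and epoch count are arbitrary illustrative choices of mine, not anything from the article.

```python
import math, random

random.seed(1)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# XOR: a task that needs a hidden layer
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]

H = 4  # hidden units
W1 = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(H)]
b1 = [0.0] * H
W2 = [random.uniform(-1, 1) for _ in range(H)]
b2 = 0.0
lr = 0.5

def forward(x):
    # "forward" phase: infer an output from the input
    h = [sigmoid(sum(W1[j][i] * x[i] for i in range(2)) + b1[j]) for j in range(H)]
    o = sigmoid(sum(W2[j] * h[j] for j in range(H)) + b2)
    return h, o

def mean_loss():
    return sum((forward(x)[1] - t) ** 2 for x, t in data) / len(data)

initial = mean_loss()
for _ in range(5000):
    for x, t in data:
        h, o = forward(x)
        # "backward" phase: propagate the error from the output layer back
        # toward the input layer and descend the loss gradient
        d_o = (o - t) * o * (1 - o)
        for j in range(H):
            d_h = d_o * W2[j] * h[j] * (1 - h[j])
            W2[j] -= lr * d_o * h[j]
            for i in range(2):
                W1[j][i] -= lr * d_h * x[i]
            b1[j] -= lr * d_h
        b2 -= lr * d_o
final = mean_loss()  # should be well below the starting loss
```

Note that the update for the first layer's weights uses the second layer's weights (W2 appears inside d_h); this is exactly the "weight transport" that is hard to justify in biological neurons.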
Quote
Impossible for the Brain
The invention of backpropagation immediately elicited an outcry from some neuroscientists, who said it could never work in real brains. The most notable naysayer was Francis Crick, the Nobel Prize-winning co-discoverer of the structure of DNA who later became a neuroscientist. In 1989 Crick wrote, “As far as the learning process is concerned, it is unlikely that the brain actually uses back propagation.”

Backprop is considered biologically implausible for several major reasons. The first is that while computers can easily implement the algorithm in two phases, doing so for biological neural networks is not trivial. The second is what computational neuroscientists call the weight transport problem: The backprop algorithm copies or “transports” information about all the synaptic weights involved in an inference and updates those weights for more accuracy. But in a biological network, neurons see only the outputs of other neurons, not the synaptic weights or internal processes that shape that output. From a neuron’s point of view, “it’s OK to know your own synaptic weights,” said Yamins. “What’s not okay is for you to know some other neuron’s set of synaptic weights.”

(https://d2r55xnwy6nx47.cloudfront.net/uploads/2021/02/Backpropagation.svg)
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 25/02/2021 05:28:05
Artificial Neural Nets Finally Yield Clues to How Brains Learn
https://www.quantamagazine.org/artificial-neural-nets-finally-yield-clues-to-how-brains-learn-20210218/
Quote
Predicting Perceptions
The constraint that neurons can learn only by reacting to their local environment also finds expression in new theories of how the brain perceives. Beren Millidge, a doctoral student at the University of Edinburgh and a visiting fellow at the University of Sussex, and his colleagues have been reconciling this new view of perception — called predictive coding — with the requirements of backpropagation. “Predictive coding, if it’s set up in a certain way, will give you a biologically plausible learning rule,” said Millidge.

Predictive coding posits that the brain is constantly making predictions about the causes of sensory inputs. The process involves hierarchical layers of neural processing. To produce a certain output, each layer has to predict the neural activity of the layer below. If the highest layer expects to see a face, it predicts the activity of the layer below that can justify this perception. The layer below makes similar predictions about what to expect from the one beneath it, and so on. The lowest layer makes predictions about actual sensory input — say, the photons falling on the retina. In this way, predictions flow from the higher layers down to the lower layers.
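A toy version of that settling process: one higher-level unit `mu` infers the hidden cause of a single sensory value through a fixed generative weight. All numbers here are hypothetical, and real predictive-coding models stack many such layers.

```python
w = 2.0    # generative weight: the higher layer predicts the lower one as w * mu
s = 1.0    # actual sensory input arriving at the lowest layer
mu = 0.0   # higher-layer belief about the cause, to be inferred
lr = 0.1

for _ in range(200):
    prediction = w * mu      # prediction flows down to the lower layer
    error = s - prediction   # prediction error is computed at the lower layer
    mu += lr * w * error     # error flows up and nudges the belief

# mu settles at s / w = 0.5, the cause that best explains the input
```

Only local quantities are used in each update (the error at the layer below and the weight on the connecting synapse), which is what makes this family of models biologically plausible.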
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 25/02/2021 09:05:16
Quote
Pyramidal Neurons
Some scientists have taken on the nitty-gritty task of building backprop-like models based on the known properties of individual neurons. Standard neurons have dendrites that collect information from the axons of other neurons. The dendrites transmit signals to the neuron’s cell body, where the signals are integrated. That may or may not result in a spike, or action potential, going out on the neuron’s axon to the dendrites of post-synaptic neurons.

But not all neurons have exactly this structure. In particular, pyramidal neurons — the most abundant type of neuron in the cortex — are distinctly different. Pyramidal neurons have a treelike structure with two distinct sets of dendrites. The trunk reaches up and branches into what are called apical dendrites. The root reaches down and branches into basal dendrites.
(https://d2r55xnwy6nx47.cloudfront.net/uploads/2021/02/Neurons.svg)
Quote
Models developed independently by Kording in 2001, and more recently by Blake Richards of McGill University and the Quebec Artificial Intelligence Institute and his colleagues, have shown that pyramidal neurons could form the basic units of a deep learning network by doing both forward and backward computations simultaneously. The key is in the separation of the signals entering the neuron for forward-going inference and for backward-flowing errors, which could be handled in the model by the basal and apical dendrites, respectively. Information for both signals can be encoded in the spikes of electrical activity that the neuron sends down its axon as an output.

In the latest work from Richards’ team, “we’ve gotten to the point where we can show that, using fairly realistic simulations of neurons, you can train networks of pyramidal neurons to do various tasks,” said Richards. “And then using slightly more abstract versions of these models, we can get networks of pyramidal neurons to learn the sort of difficult tasks that people do in machine learning.”
There is so much information densely packed into this single article that I found it hard to compress it any further.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 25/02/2021 09:18:48
Quote
The Role of Attention
An implicit requirement for a deep net that uses backprop is the presence of a “teacher”: something that can calculate the error made by a network of neurons. But “there is no teacher in the brain that tells every neuron in the motor cortex, ‘You should be switched on and you should be switched off,’” said Pieter Roelfsema of the Netherlands Institute for Neuroscience in Amsterdam.
Quote
Roelfsema thinks the brain’s solution to the problem is in the process of attention. In the late 1990s, he and his colleagues showed that when monkeys fix their gaze on an object, neurons that represent that object in the cortex become more active. The monkey’s act of focusing its attention produces a feedback signal for the responsible neurons. “It is a highly selective feedback signal,” said Roelfsema. “It’s not an error signal. It is just saying to all those neurons: You’re going to be held responsible [for an action].”

Roelfsema’s insight was that this feedback signal could enable backprop-like learning when combined with processes revealed in certain other neuroscientific findings. For example, Wolfram Schultz of the University of Cambridge and others have shown that when animals perform an action that yields better results than expected, the brain’s dopamine system is activated. “It floods the whole brain with neural modulators,” said Roelfsema. The dopamine levels act like a global reinforcement signal.

In theory, the attentional feedback signal could prime only those neurons responsible for an action to respond to the global reinforcement signal by updating their synaptic weights, said Roelfsema. He and his colleagues have used this idea to build a deep neural network and study its mathematical properties. “It turns out you get error backpropagation. You get basically the same equation,” he said. “But now it became biologically plausible.”
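A drastically simplified sketch of that combination: an attention gate restricts learning to the synapses behind the chosen action, and a global dopamine-like signal tells them whether the outcome beat expectations. The two-action bandit setup, rates, and seed are my own illustrative assumptions, not Roelfsema's actual network.

```python
import random

random.seed(0)

w = [0.0, 0.0]  # learned value of each of two candidate actions
lr = 0.1

def choose():
    # epsilon-greedy: mostly exploit the better action, occasionally explore
    if random.random() < 0.1:
        return random.randrange(2)
    return 0 if w[0] >= w[1] else 1

for _ in range(1000):
    a = choose()
    reward = 1.0 if a == 0 else 0.0  # action 0 is the rewarded one
    delta = reward - w[a]            # global reinforcement signal (dopamine-like)
    w[a] += lr * delta               # attention gate: only the chosen action's
                                     # synapse is eligible for the update

# w[0] climbs toward 1.0 while w[1] stays low
```

Every other synapse ignores the broadcast signal; only the "responsible" ones, tagged by attention, learn from it, so no per-neuron teacher is needed.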
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 02/03/2021 08:05:07
Imagine how much you could gain from the stock market alone if you had clear insight into what will happen in the future.
This video is from 2010.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 04/03/2021 11:49:00
Taming Transformers for High-Resolution Image Synthesis

It seems like we are getting better at building information processors comparable to human brains.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 04/03/2021 12:25:39
In the not-so-distant future, most information available online will be generated by AI.

That prediction will force us to build a virtual universe intended to accurately represent objective reality. Otherwise, there will be no way to distinguish fact from fiction, especially for things that are not already widely known.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 07/03/2021 07:31:14
Has Google Search changed much since 1998?
This video shows how Google has evolved, getting closer to building a virtual universe.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 12/03/2021 08:58:40
Tesla's Autopilot, Full Self Driving, Neural Networks & Dojo
Quote
In this video I react to a discussion from the Lex Fridman podcast with legendary chip designer Jim Keller (ex-Tesla) sharing their thoughts on computer vision, neural networks, Tesla's autopilot and full self driving software (and hardware), autonomous vehicles, deep learning and Tesla Dojo (Tesla's dojo is a training system).
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 15/03/2021 12:08:20
One more reason to replace lawmakers with AI.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 16/03/2021 02:44:55
The Most Advanced Digital Government in the World
Quote
A small European country is leading the world in establishing an “e-government” for its citizens.

Estonia's fully online, e-government system has been revolutionary for the country's citizens, making tasks like voting, filing taxes, and renewing a driver’s license quick and convenient.

In operation since 2001, “e-Estonia” is now a well-oiled, digital machine. Estonia was the first country to hold a nationwide election online, and ministers dictate decisions via an e-Cabinet.

Estonia was also the first country to declare internet access a human right. 99% of public services are available digitally 24/7, excluding only marriage, divorce, and real-estate transactions.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 18/03/2021 22:43:41
https://www.nextplatform.com/2021/03/11/its-time-to-start-paying-attention-to-vector-databases/amp/

Quote
The concepts underpinning vector databases are decades old, but it is only relatively recently that these are the underlying “secret weapon” of the largest webscale companies that provide services like search and near real-time recommendations.

Like all good clandestine competitive tools, the vector databases that support these large companies are all purpose-built in-house, optimized for the types of similarity search operations native to their business (content, physical products, etc.).

These custom-tailored vector databases are the “unsung hero of big machine learning,” says Edo Liberty, who built tools like this at Yahoo Research during its scalable machine learning platform journey. He carried some of this over to AWS, where he ran Amazon AI labs and helped cobble together standards like AWS Sagemaker, all the while learning how vector databases could integrate with other platforms and connect with the cloud.

“Vector databases are a core piece of infrastructure that fuels every big machine learning deployment in industry. There was never a way to do this directly, everyone just had to build their own in-house,” he tells The Next Platform. The funny thing is, he was working on high dimensional geometry during his PhD days; the AI/ML renaissance just happened to perfectly intersect with exactly that type of work.

“In ML, suddenly everything was being represented as these high-dimensional vectors, that quickly became a huge source of data, so it you want to search, rank or give recommendations, the object in your actual database wasn’t a document or an image—it was this mathematical representation of the machine learning model.” In short, this quickly became important for a lot of companies.
I think the virtual universe will be built on a vector database foundation at its core. This assessment is based on my experience in several system migration projects, which pushed me to reverse engineer a system database to build a tool that accelerated the process by automating some tasks.
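The core operation such a database serves can be shown in a few lines: nearest-neighbour search over embedding vectors by cosine similarity. Production systems use approximate indexes over billions of vectors; this brute-force toy, with made-up three-dimensional "embeddings", only illustrates the query model.

```python
import math

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Toy "database": item id -> embedding vector (hypothetical values)
db = {
    "doc_a": [1.0, 0.0, 0.2],
    "doc_b": [0.1, 0.9, 0.0],
    "doc_c": [0.9, 0.1, 0.3],
}

def search(query, k=2):
    """Return the k items whose embeddings are most similar to the query."""
    ranked = sorted(db, key=lambda item: cosine(query, db[item]), reverse=True)
    return ranked[:k]

result = search([1.0, 0.0, 0.25])  # -> ["doc_a", "doc_c"]
```

The object being matched is not the document itself but its mathematical representation, exactly as described in the quote above.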
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 19/03/2021 09:58:17
Quote
The Senate filibuster is one of the biggest things standing in the way of anti-voter suppression laws, raising the minimum wage and immigration reform. What is this loophole, and how does it affect governing today?
The lawmaking process obviously needs to become more efficient.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 19/03/2021 21:36:28
ISO standards basically say that you have to document everything and track it: write what you do, do what you write.
What you write is a virtual version of what you do. In the past it was on paper; now it is in computer data storage.
This virtual version of the real world is supposed to be easier to process, aggregate, simulate, and extract from, in order to produce the information required for decision-making. To be useful, it must have adequate accuracy and precision.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 23/03/2021 02:58:52
https://www.wired.co.uk/article/marcus-du-sautoy-maths-proofs
Quote
Maths nerds, get ready: an AI is about to write its own proofs
We'll see the first truly creative proof of a mathematical theorem written by an artificial intelligence – and soon

It might come as a surprise to some people that this prediction hasn’t already come to pass. Given that mathematics is a subject of logic and precision, it would seem to be perfect territory for a computer.

However, in 2021, we will see the first truly creative proof of a mathematical theorem by an artificial intelligence (AI). As a mathematician, this fills me with excitement and anxiety in equal measure. Excitement for the new insights that AI might give the mathematical community; anxiety that we human mathematicians might soon become obsolete. But part of this belief is based on a misconception about what a mathematician does.

More recently, techniques of machine learning have been used to gain an understanding from a database of successful proofs to generate more proofs. But although the proofs are new, they do not pass the test of exciting the mathematical mind. It’s the same for powerful algorithms, which can generate convincing short-form text, but are a long way from writing a novel.

But in 2021 I think we will see – or at least be close to – an algorithm with the ability to write its first mathematical story. Storytelling through the written word is based on millions of years of human evolution, and it takes a human many years to reach the maturity to write a novel. But mathematics is a much younger evolutionary development. A person immersed in the mathematical world can reach maturity quite quickly, which is why one sees mathematical breakthroughs made by young minds.


This is why I think that it won’t take long for an AI to understand the quality of the proofs we love and celebrate, before it too will be writing proofs. Perhaps, given its internal architecture, these may be mathematical theorems about networks – a subject that deserves its place on the shelves of the mathematical libraries we humans have been filling for centuries.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 24/03/2021 09:13:53
Quote
What is love and what defines art? Humans have theorized, debated, and argued over these questions for centuries. As researchers become closer and closer to boiling these concepts down to a science, A.I. projects become closer to becoming alternatives for romantic companions and artists in their own right.

The Age of A.I. is an 8-part documentary series hosted by Robert Downey Jr. covering the ways Artificial Intelligence, Machine Learning and Neural Networks will change the world.

0:00​ Introduction
0:50​ The Model Companion
11:02​ Can A.I. Make Real Art?
23:05​ The Autonomous Supercar
36:41​ The Hard Problem
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 26/03/2021 06:01:22
5 Crazy Simulations That Were Previously Impossible
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 29/03/2021 03:20:22
https://scitechdaily.com/300-covid-19-machine-learning-models-have-been-developed-none-is-suitable-for-detecting-or-diagnosing/
Quote
Machine learning is a promising and potentially powerful technique for detection and prognosis of disease. Machine learning methods, including where imaging and other data streams are combined with large electronic health databases, could enable a personalized approach to medicine through improved diagnosis and prediction of individual responses to therapies.

“However, any machine learning algorithm is only as good as the data it’s trained on,” said first author Dr. Michael Roberts from Cambridge’s Department of Applied Mathematics and Theoretical Physics. “Especially for a brand-new disease like COVID-19, it’s vital that the training data is as diverse as possible because, as we’ve seen throughout this pandemic, there are many different factors that affect what the disease looks like and how it behaves.”

“The international machine learning community went to enormous efforts to tackle the COVID-19 pandemic using machine learning,” said joint senior author Dr James Rudd, from Cambridge’s Department of Medicine. “These early studies show promise, but they suffer from a high prevalence of deficiencies in methodology and reporting, with none of the literature we reviewed reaching the threshold of robustness and reproducibility essential to support use in clinical practice.”

Many of the studies were hampered by issues with poor quality data, poor application of machine learning methodology, poor reproducibility, and biases in study design. For example, several training datasets used images from children for their ‘non-COVID-19’ data and images from adults for their COVID-19 data. “However, since children are far less likely to get COVID-19 than adults, all the machine learning model could usefully do was to tell the difference between children and adults, since including images from children made the model highly biased,” said Roberts.

Many of the machine learning models were trained on sample datasets that were too small to be effective. “In the early days of the pandemic, there was such a hunger for information, and some publications were no doubt rushed,” said Rudd. “But if you’re basing your model on data from a single hospital, it might not work on data from a hospital in the next town over: the data needs to be diverse and ideally international, or else you’re setting your machine learning model up to fail when it’s tested more widely.”

In many cases, the studies did not specify where their data had come from, or the models were trained and tested on the same data, or they were based on publicly available ‘Frankenstein datasets’ that had evolved and merged over time, making it impossible to reproduce the initial results.
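The pitfall the quoted review describes, testing a model on the same data it was trained on, is easy to demonstrate with a toy sketch (the dataset and the 1-nearest-neighbour "model" below are my own illustration, not from the study):

```python
import random

random.seed(0)

# Toy dataset: random inputs with labels that carry no real signal.
# A 1-nearest-neighbour "model" simply memorises its training set.
data = [(random.random(), random.randint(0, 1)) for _ in range(200)]

def predict(train, x):
    # 1-NN: return the label of the closest training point.
    return min(train, key=lambda p: abs(p[0] - x))[1]

def accuracy(train, test):
    return sum(predict(train, x) == y for x, y in test) / len(test)

train, test = data[:100], data[100:]

# Evaluating on the training data itself: a memoriser scores perfectly.
print(accuracy(train, train))   # 1.0 -- looks impressive, means nothing

# Evaluating on held-out data reveals the real, chance-level performance.
print(accuracy(train, test))    # roughly 0.5, since the labels are random
```

The 100% "accuracy" in the first evaluation is exactly the kind of irreproducible result the review flagged; only the held-out number says anything about the model.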
Title: Re: How close are we from building a virtual universe?
Post by: Michael Sally on 29/03/2021 03:37:49
Quote
This thread is another spinoff from my earlier thread called universal utopia. This time I try to attack the problem from another angle, which is information theory point of view.
I have started another thread related to this subject  asking about quantification of accuracy and precision. It is necessary for us to be able to make comparison among available methods to describe some aspect of objective reality, and choose the best option based on cost and benefit consideration. I thought it was already a common knowledge, but the course of discussion shows it wasn't the case. I guess I'll have to build a new theory for that. It's unfortunate that the thread has been removed, so new forum members can't explore how the discussion developed.
In my professional job, I have to deal with process control and automation, engineering and maintenance of electrical and instrumentation systems. It's important for us to explore the leading technologies and use them for our advantage to survive in the fierce industrial competition during this industrial revolution 4.0. One of the technology which is closely related to this thread is digital twin.
https://en.m.wikipedia.org/wiki/Digital_twin

Just like my other spinoff discussing about universal morality, which can be reached by expanding the groups who develop their own subjective morality to the maximum extent permitted by logic, here I also try to expand the scope of the virtualization of real world objects like digital twin in industrial sector to cover other fields as well. Hopefully it will lead us to global governance, because all conscious beings known today share the same planet. In the future the scope needs to expand even further because the exploration of other planets and solar systems is already on the way.

I read a paper recently that describes the core of a photon as (x0,y0,z0), which I thought was quite impressive in regards to accuracy and precision.

In regards to a virtual universe, I consider that would be the smallest element of possible information, a tuple.

I would then consider that any other elements of informational dimensions would be n-tuples (xn,yn,zn).

My reasoning for this is that any amount of information greater than the (x0,y0,z0) element is expansive information.

(x1,y1,z1......n)

In simple terms, a point of information reads true (absolute answers), whereas expansions of information read false (speculative).

For example, c reads false: c is based on our measurement system. In simultaneity, a duration of 1 s is arguable.
Title: Re: How close are we from building a virtual universe?
Post by: Kryptid on 29/03/2021 05:54:14
In simple terms a point of information reads true (absolute answers)  where expansions of information reads false (speculative) .

According to what source?
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 29/03/2021 06:40:28
Where Did Bitcoin Come From? – The True Story
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 29/03/2021 10:38:02
What a digital government looks like
Quote
What if you never had to fill out paperwork again? In Estonia, this is a reality: citizens conduct nearly all public services online, from starting a business to voting from their laptops, thanks to the nation's ambitious post-Soviet digital transformation known as "e-Estonia." One of the program's experts, Anna Piperal, explains the key design principles that power the country's "e-government" -- and shows why the rest of the world should follow suit to eradicate outdated bureaucracy and regain citizens' trust.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 29/03/2021 10:55:29
MIT 6.S191: Evidential Deep Learning and Uncertainty

MIT Introduction to Deep Learning 6.S191: Lecture 7
Evidential Deep Learning and Uncertainty Estimation
Lecturer: Alexander Amini
January 2021

For all lectures, slides, and lab materials: http://introtodeeplearning.com​​

Lecture Outline
0:00​​ - Introduction and motivation
5:00​​ - Outline for lecture
5:50​ - Probabilistic learning
8:33​ - Discrete vs continuous target learning
14:12​ - Likelihood vs confidence
17:40​ - Types of uncertainty
21:15​ - Aleatoric vs epistemic uncertainty
22:35​ - Bayesian neural networks
28:55​ - Beyond sampling for uncertainty
31:40​ - Evidential deep learning
33:29​ - Evidential learning for regression and classification
42:05​ - Evidential model and training
45:06​ - Applications of evidential learning
46:25​ - Comparison of uncertainty estimation approaches
47:47​ - Conclusion
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 30/03/2021 12:04:30
Objective reality contains a lot of objects with complex relationships among them. Hence, to build a virtual universe, we must use a method capable of storing data that represents this complex system. The obvious choice is a graph: a mathematical structure used to model pairwise relations between objects. A graph in this context is made up of vertices (also called nodes or points) connected by edges (also called links or lines).
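A minimal sketch of that vertices-and-edges structure, stored as an adjacency list (the people and relation names are invented for illustration):

```python
from collections import defaultdict

# A minimal directed graph: vertices connected by labelled edges,
# stored as an adjacency list -- the structure graph databases build on.
class Graph:
    def __init__(self):
        self.edges = defaultdict(list)   # vertex -> [(relation, vertex), ...]

    def add_edge(self, src, relation, dst):
        self.edges[src].append((relation, dst))

    def neighbors(self, vertex, relation=None):
        # All vertices reachable in one hop, optionally by one relation type.
        return [v for r, v in self.edges[vertex]
                if relation is None or r == relation]

g = Graph()
g.add_edge("Alice", "WORKS_AT", "AcmeCorp")
g.add_edge("Alice", "KNOWS", "Bob")
g.add_edge("Bob", "WORKS_AT", "AcmeCorp")

print(g.neighbors("Alice"))            # ['AcmeCorp', 'Bob']
print(g.neighbors("Alice", "KNOWS"))   # ['Bob']
```

Graph databases like Neo4j build on exactly this idea, with relationships stored as first-class records rather than reconstructed through joins.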

Graph Databases Will Change Your Freakin' Life (Best Intro Into Graph Databases)

Quote
## WTF is a graph database
- Euler and Graph Theory
- Math -- it's hard, let's skip it
- It's about data -- lots of it
- But let's zoom in and look at the basics
## Relational model vs graph model
- How do we represent THINGS in DBs
- Relational vs Graph
- Nodes and Relationships
## Why use a graph over a relational DB or other NoSQL?
- Very simple compared to RDBMS, and much more flexible
- The real power is in relationship-focused data (most NoSQL dbs don't treat relationships as first-order)
- As related-ness and amount of data increases, so does advantage of Graph DBs
- Much closer to our whiteboard model

EVENT: Nodevember 2016

SPEAKER: Ed Finkler
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 31/03/2021 13:10:09
https://scitechdaily.com/explainable-artificial-intelligence-for-decoding-regulatory-instructions-in-dna/
Quote
Opening the black box to uncover the rules of the genome’s regulatory code.
Researchers at the Stowers Institute for Medical Research, in collaboration with colleagues at Stanford University and Technical University of Munich, have developed advanced explainable artificial intelligence (AI) in a technical tour de force to decipher regulatory instructions encoded in DNA. In a report published online on February 18, 2021, in Nature Genetics, the team found that a neural network trained on high-resolution maps of protein-DNA interactions can uncover subtle DNA sequence patterns throughout the genome and provide a deeper understanding of how these sequences are organized to regulate genes.

Neural networks are powerful AI models that can learn complex patterns from diverse types of data such as images, speech signals, or text to predict associated properties with impressive high accuracy. However, many see these models as uninterpretable since the learned predictive patterns are hard to extract from the model. This black-box nature has hindered the wide application of neural networks to biology, where interpretation of predictive patterns is paramount.

One of the big unsolved problems in biology is the genome’s second code—its regulatory code. DNA bases (commonly represented by letters A, C, G, and T) encode not only the instructions for how to build proteins, but also when and where to make these proteins in an organism. The regulatory code is read by proteins called transcription factors that bind to short stretches of DNA called motifs. However, how particular combinations and arrangements of motifs specify regulatory activity is an extremely complex problem that has been hard to pin down.

Now, an interdisciplinary team of biologists and computational researchers led by Stowers Investigator Julia Zeitlinger, PhD, and Anshul Kundaje, PhD, from Stanford University, have designed a neural network—named BPNet for Base Pair Network—that can be interpreted to reveal regulatory code by predicting transcription factor binding from DNA sequences with unprecedented accuracy. The key was to perform transcription factor-DNA binding experiments and computational modeling at the highest possible resolution, down to the level of individual DNA bases. This increased resolution allowed them to develop new interpretation tools to extract the key elemental sequence patterns such as transcription factor binding motifs and the combinatorial rules by which motifs function together as a regulatory code.

Quote
“More traditional bioinformatics approaches model data using pre-defined rigid rules that are based on existing knowledge. However, biology is extremely rich and complicated,” says Avsec. “By using neural networks, we can train much more flexible and nuanced models that learn complex patterns from scratch without previous knowledge, thereby allowing novel discoveries.“

BPNet’s network architecture is similar to that of neural networks used for facial recognition in images. For instance, the neural network first detects edges in the pixels, then learns how edges form facial elements like the eye, nose, or mouth, and finally detects how facial elements together form a face. Instead of learning from pixels, BPNet learns from the raw DNA sequence and learns to detect sequence motifs and eventually the higher-order rules by which the elements predict the base-resolution binding data.

Once the model is trained to be highly accurate, the learned patterns are extracted with interpretation tools. The output signal is traced back to the input sequences to reveal sequence motifs. The final step is to use the model as an oracle and systematically query it with specific DNA sequence designs, similar to what one would do to test hypotheses experimentally, to reveal the rules by which sequence motifs function in a combinatorial manner.

“The beauty is that the model can predict way more sequence designs that we could test experimentally,” Zeitlinger says. “Furthermore, by predicting the outcome of experimental perturbations, we can identify the experiments that are most informative to validate the model.” Indeed, with the help of CRISPR gene editing techniques, the researchers confirmed experimentally that the model’s predictions were highly accurate.

Since the approach is flexible and applicable to a variety of different data types and cell types, it promises to lead to a rapidly growing understanding of the regulatory code and how genetic variation impacts gene regulation. Both the Zeitlinger Lab and the Kundaje Lab are already using BPNet to reliably identify binding motifs for other cell types, relate motifs to biophysical parameters, and learn other structural features in the genome such as those associated with DNA packaging. To enable other scientists to use BPNet and adapt it for their own needs, the researchers have made the entire software framework available with documentation and tutorials.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 31/03/2021 13:13:23
In regards to a virtual universe I consider that would be the smallest element of possible information , a tuple .
AFAIK, the smallest unit of information is a bit, or binary digit, which is supposed to reduce uncertainty by half.
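The "reduce uncertainty by half" idea can be made concrete with Shannon entropy, which measures uncertainty in bits (a small illustrative sketch, not from any post above):

```python
import math

def entropy(probs):
    # Shannon entropy in bits: the average information per outcome.
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Eight equally likely outcomes carry 3 bits of uncertainty ...
print(entropy([1/8] * 8))   # 3.0

# ... and each yes/no answer halves the candidate set: 4 remain, then 2.
print(entropy([1/4] * 4))   # 2.0

# A fair coin flip carries exactly one bit.
print(entropy([0.5, 0.5]))  # 1.0
```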
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 06/04/2021 13:55:32
Graph databases: The best kept secret for effective AI
Quote
Emil Eifrem, Neo4j Co-Founder and CEO explains why connected data is the key to more accurate, efficient and credible learning systems. Using real world use cases ranging from space engineering to investigative journalism, he will outline how a relationships-first approach adds context to data - the key to explainable, well-informed predictions.
What I tried to do previously was basically create a graph database using a standard relational database system. If only I had known this earlier, I might have saved a significant amount of time and effort. It makes me feel like I tried to reinvent the wheel.
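For the curious, emulating a graph in a relational system usually means a self-referencing edge table, with multi-hop traversal done through recursive joins. A minimal sketch using SQLite's recursive common table expressions (table and node names are invented):

```python
import sqlite3

# Emulating a graph in a relational database: one "edges" table,
# with reachability computed by a recursive common table expression.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE edges (src TEXT, dst TEXT)")
db.executemany("INSERT INTO edges VALUES (?, ?)",
               [("A", "B"), ("B", "C"), ("C", "D")])

# Everything reachable from 'A', however many hops away.
rows = db.execute("""
    WITH RECURSIVE reach(node) AS (
        SELECT 'A'
        UNION
        SELECT e.dst FROM edges e JOIN reach r ON e.src = r.node
    )
    SELECT node FROM reach WHERE node != 'A'
""").fetchall()

reachable = sorted(n for (n,) in rows)
print(reachable)   # ['B', 'C', 'D']
```

This works, but every extra hop is another self-join, which is exactly the friction a native graph database removes.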
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 06/04/2021 14:11:48
What Is Edge Computing?
Quote
Another jargon busting video - Here I explain in simple terms what edge computing or sometimes called fog computing is. I provide practical examples of computing at the edge of the network - in phones, cameras, etc.
In the future, human brains will be part of an edge computing network, which itself will be part of the universal consciousness running the virtual universe. No single human individual is capable of running the kernel and core processes of the virtual universe, which would run on cloud computing servers, due to the sheer data size and parallel processing power required. To make significant contributions, we would have to establish a direct communication interface with computers to increase the data exchange rate, breaking the natural limits of the biomechanical channels currently used, such as typing, hand gestures, reading, hearing, or voice commands.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 07/04/2021 07:01:35
A Practical Guide to Graph Databases - David Bechberger
Quote
With the emergence of offerings on both AWS (Neptune) and Azure (CosmosDB) within the past year, it is fair to say that graph databases are one of the hottest trends and that they are here to stay. So what are graph databases all about then? You can read article after article about how great they are and that they will solve all your problems better than your relational database, but it's difficult to really find any practical information about them.
This talk will start with a short primer on graph databases and the ecosystem but will then quickly transition to discussing the practical aspects of how to apply them to solve real world business problems. We will dive into what makes a good use case and what does not. We will then follow this up with some real world examples of some of the common patterns and anti-patterns of using graph databases. If you haven't been scared away by this point we will end by showing you some of the powerful insights that graph databases can provide you.
I wish I had known this back then; it would have saved me the time I spent trying to emulate a graph database using a traditional relational database.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 09/04/2021 06:43:26
Quote
Edge computing places workloads closer to where data is created and where actions need to be taken. It addresses the unprecedented scale and complexity of data created by connected devices. As more and more data comes in from remote IoT edge devices and servers, it’s important to act on the data quickly. Acting quickly can help companies seize new business opportunities, increase operational efficiency and improve customer experiences.

In this video, Rob High, IBM Fellow and CTO, provides insights into the basic concepts and key use cases of edge computing.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 12/04/2021 11:12:57
https://singularityhub.com/2021/04/04/openais-gpt-3-algorithm-is-now-producing-billions-of-words-a-day/

Quote
When OpenAI released its huge natural-language algorithm GPT-3 last summer, jaws dropped. Coders and developers with special access to an early API rapidly discovered new (and unexpected) things GPT-3 could do with naught but a prompt. It wrote passable poetry, produced decent code, calculated simple sums, and with some edits, penned news articles.

All this, it turns out, was just the beginning. In a recent blog post update, OpenAI said that tens of thousands of developers are now making apps on the GPT-3 platform.

Over 300 apps (and counting) use GPT-3, and the algorithm is generating 4.5 billion words a day for them.
Quote
The Coming Torrent of Algorithmic Content
Each month, users publish about 70 million posts on WordPress, which is, hands down, the dominant content management system online.

Assuming an average article is 800 words long—which is speculation on my part, but not super long or short—people are churning out some 56 billion words a month or 1.8 billion words a day on WordPress.

If our average word count assumption is in the ballpark, then GPT-3 is producing over twice the daily word count of WordPress posts. Even if you make the average more like 2,000 words per article (which seems high to me) the two are roughly equivalent.

Now, not every word GPT-3 produces is a word worth reading, and it’s not necessarily producing blog posts (more on applications below). But in either case, just nine months in, GPT-3’s output seems to foreshadow a looming torrent of algorithmic content.
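The quoted back-of-envelope arithmetic is easy to reproduce (the 800-words-per-post figure is the article's own guess, not a measured number):

```python
# Reproducing the article's back-of-envelope comparison.
posts_per_month = 70_000_000   # WordPress posts per month
words_per_post = 800           # the article's assumed average

monthly_words = posts_per_month * words_per_post   # 56 billion words/month
daily_words = monthly_words / 30                   # ~1.87 billion words/day

gpt3_daily_words = 4_500_000_000                   # OpenAI's reported figure

print(f"WordPress: {daily_words / 1e9:.1f}B words/day")
print(f"GPT-3:     {gpt3_daily_words / 1e9:.1f}B words/day")
print(f"ratio:     {gpt3_daily_words / daily_words:.1f}x")
```

The ratio comes out to about 2.4x, matching the article's claim that GPT-3 produces "over twice the daily word count" of WordPress posts.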
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 12/04/2021 13:06:06
https://siliconangle.com/2021/04/10/new-era-innovation-moores-law-not-dead-ai-ready-explode/
Quote
Processing goes to the edge – networks and storage become the bottlenecks
We recently reported Microsoft Corp. Chief Executive Satya Nadella’s epic quote that we’ve reached peak centralization. The graphic below paints a picture that is telling. We just shared above that processing power is accelerating at unprecedented rates. And costs are dropping like a rock. Apple’s A14 costs the company $50 per chip. Arm at its v9 announcement said that it will have chips that can go into refrigerators that will optimize energy use and save 10% annually on power consumption. They said that chip will cost $1 — a buck to shave 10% off your electricity bill from the fridge.
(https://d2axcg2cspgbkk.cloudfront.net/wp-content/uploads/Breaking-Analysis_-Moores-Law-is-Accelerating-and-AI-is-Ready-to-Explode-3.jpg)
Quote
Processing is plentiful and cheap. But look at where the expensive bottlenecks are: networks and storage. So what does this mean?

It means that processing is going to get pushed to the edge – wherever the data is born. Storage and networking will become increasingly distributed and decentralized. With custom silicon and processing power placed throughout the system with AI embedded to optimize workloads for latency, performance, bandwidth, security and other dimensions of value.

And remember, most of the data – 99% – will stay at the edge. We like to use Tesla Inc. as an example. The vast majority of data a Tesla car creates will never go back to the cloud. It doesn’t even get persisted. Tesla saves perhaps five minutes of data. But some data will connect occasionally back to the cloud to train AI models – we’ll come back to that.

Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 12/04/2021 13:08:21
Quote
Massive increases in processing power and cheap silicon will power the next wave of AI, machine intelligence, machine learning and deep learning.
Quote
We sometimes use artificial intelligence and machine intelligence interchangeably. This notion comes from our collaborations with author David Moschella. Interestingly, in his book “Seeing Digital,” Moschella says “there’s nothing artificial” about this:

There’s nothing artificial about machine intelligence just like there’s nothing artificial about the strength of a tractor.

It’s a nuance, but precise language can often bring clarity. We hear a lot about machine learning and deep learning and think of them as subsets of AI. Machine learning applies algorithms and code to data to get “smarter” – make better models, for example, that can lead to augmented intelligence and better decisions by humans, or machines. These models improve as they get more data and iterate over time.

Deep learning is a more advanced type of machine learning that uses more complex math.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 13/04/2021 12:20:28

https://pub.towardsai.net/openai-brings-introspection-to-reinforcement-learning-agents-39cbe4cf2af3

Quote
OpenAI Brings Introspection to Reinforcement Learning Agents
The research around Evolved Policy Gradients attempts to recreate introspection in reinforcement learning models.

Introspection is one of those magical cognitive abilities that differentiate humans from other species. Conceptually, introspection can be defined as the ability to examine conscious thoughts and feelings. Introspection also plays a pivotal role in how humans learn. Have you ever tried to self-learn a new skill such as learning a new language? Even without any external feedback, you can quickly assess whether you are making progress on aspects such as vocabulary or pronunciation. Wouldn’t it be great if we could apply some of the principles of introspection to artificial intelligence (AI) disciplines such as reinforcement learning (RL)?
The magic of introspection comes from the fact that humans have access to very well shaped internal reward functions, derived from prior experience on other tasks, and through the course of biological evolution. That model highly contrasts with RL agents that are fundamentally coded to start from scratch on any learning task relying mainly on external feedback. Not surprisingly, most RL models take substantially more time than humans to learn similar tasks. Recently, researchers from OpenAI published a new paper that proposes a method to address this challenge by creating RL models that know what it means to make progress on a new task, by having experienced making progress on similar tasks in the past.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 15/04/2021 13:53:38
How Graph Technology Is Changing Artificial Intelligence and Machine Learning


Quote
Graph enhancements to Artificial Intelligence and Machine Learning are changing the landscape of intelligent applications. Beyond improving accuracy and modeling speed, graph technologies make building AI solutions more accessible. Join us to hear about 6 areas at the forefront of graph enhanced AI and ML, and find out which techniques are commonly used today and which hold the potential for disrupting industries.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 15/04/2021 14:37:41

Edge computing definitions and concepts. This non-technical video focuses on edge computing and cloud computing, as well as edge computing and the deployment of vision recognition and other AI applications. Also introduced are mesh networks, SBC (single board computer) edge hardware, and fog computing.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 17/04/2021 21:01:23
https://syncedreview.com/2021/04/07/deepmind-microsoft-allen-ai-uw-researchers-convert-pretrained-transformers-into-rnns-lowering-memory-cost-while-retaining-high-accuracy/
Quote
Powerful transformer models have been widely used in autoregressive generation, where they have advanced the state-of-the-art beyond recurrent neural networks (RNNs). However, because the output words for these models are incrementally predicted conditioned on the prefix, the generation requires quadratic time complexity with regard to sequence length.

As the performance of transformer models increasingly relies on large-scale pretrained transformers, this long sequence generation issue has become increasingly problematic. To address this, a research team from the University of Washington, Microsoft, DeepMind and Allen Institute for AI have developed a method to convert a pretrained transformer into an efficient RNN. Their Transformer-to-RNN (T2R) approach speeds up generation and reduces memory cost.
Quote
Overall, the results validated that T2R achieves efficient autoregressive generation while retaining high accuracy, proving that large-scale pretrained models can be compressed into efficient inference models that facilitate downstream applications.
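The quadratic-versus-linear distinction behind T2R can be illustrated with a toy operation count (a simplified cost model of my own, ignoring constants and model width):

```python
# Toy cost model for generating n tokens autoregressively, counting
# "per-step" operations (constants and model width ignored).

def transformer_generation_cost(n):
    # Step t attends over the full prefix of length t: 1 + 2 + ... + n.
    return sum(t for t in range(1, n + 1))      # O(n^2)

def rnn_generation_cost(n):
    # Step t only updates a fixed-size hidden state: constant work per step.
    return n                                    # O(n)

for n in (10, 100, 1000):
    print(n, transformer_generation_cost(n), rnn_generation_cost(n))
# Doubling the sequence length roughly quadruples the transformer's
# generation cost but only doubles the RNN's -- hence the appeal of
# converting a pretrained transformer into an RNN for inference.
```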
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 18/04/2021 13:37:43
https://techxplore.com/news/2021-04-deep-learning-code-humans.html
Toward deep-learning models that can reason about code more like humans
Quote
Whatever business a company may be in, software plays an increasingly vital role, from managing inventory to interfacing with customers. Software developers, as a result, are in greater demand than ever, and that's driving the push to automate some of the easier tasks that take up their time.
Quote
A machine capable of programming itself once seemed like science fiction. But an exponential rise in computing power, advances in natural language processing, and a glut of free code on the internet have made it possible to automate at least some aspects of software design.
Trained on GitHub and other program-sharing websites, code-processing models learn to generate programs just as other language models learn to write news stories or poetry. This allows them to act as a smart assistant, predicting what software developers will do next, and offering an assist. They might suggest programs that fit the task at hand, or generate program summaries to document how the software works. Code-processing models can also be trained to find and fix bugs. But despite their potential to boost productivity and improve software quality, they pose security risks that researchers are just starting to uncover.
Quote
"Our framework for attacking the model, and retraining it on those particular exploits, could potentially help code-processing models get a better grasp of the program's intent," says Liu, co-senior author of the study. "That's an exciting direction waiting to be explored."

In the background, a larger question remains: what exactly are these black-box deep-learning models learning? "Do they reason about code the way humans do, and if not, how can we make them?" says O'Reilly. "That's the grand challenge ahead for us."
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 19/04/2021 12:10:32
Top Use Cases of Graph Databases
Quote
Jonny Cheetham, Sales Director: Graph databases are a rising tide in the world of big data insights, and the enterprises that tap into their power realize significant competitive advantages.
So how might your enterprise leverage graph databases to generate competitive insights and derive significant business value from your connected data? This webinar will show you the top five most impactful and profitable use cases of graph databases.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 20/04/2021 08:03:07
Do Neural Networks Think Like Our Brain? OpenAI Answers!
https://openai.com/blog/multimodal-neurons/
Quote
Multimodal Neurons in Artificial Neural Networks
We’ve discovered neurons in CLIP that respond to the same concept whether presented literally, symbolically, or conceptually. This may explain CLIP’s accuracy in classifying surprising visual renditions of concepts, and is also an important step toward understanding the associations and biases that CLIP and similar models learn.

Quote
Our discovery of multimodal neurons in CLIP gives us a clue as to what may be a common mechanism of both synthetic and natural vision systems—abstraction. We discover that the highest layers of CLIP organize images as a loose semantic collection of ideas, providing a simple explanation for both the model’s versatility and the representation’s compactness.
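The mechanism behind CLIP's zero-shot classification can be sketched in a few lines: an image embedding is compared against text embeddings of candidate labels by cosine similarity, and the best-matching label wins. The vectors below are made up for illustration; in the real model they come from trained image and text encoders sharing one embedding space:

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Pretend encoder outputs (illustrative 3-d vectors, not real CLIP embeddings).
text_embeddings = {
    "a photo of a dog":    np.array([0.9, 0.1, 0.0]),
    "a photo of a cat":    np.array([0.1, 0.9, 0.0]),
    "a photo of a spider": np.array([0.0, 0.1, 0.9]),
}
image_embedding = np.array([0.8, 0.2, 0.1])

# Zero-shot classification: pick the label whose text embedding is closest.
best = max(text_embeddings, key=lambda t: cosine(image_embedding, text_embeddings[t]))
print(best)  # a photo of a dog
```

Because the comparison happens in a shared concept space, the same "dog" direction can respond whether the concept appears literally, symbolically, or as text — which is what the multimodal-neuron finding describes.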
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 21/04/2021 05:28:20
3D deep neural network precisely reconstructs freely-behaving animal's movements
Quote
Animals are constantly moving and behaving in response to instructions from the brain. But while there are advanced techniques for measuring these instructions in terms of neural activity, there is a paucity of techniques for quantifying the behavior itself in freely moving animals. This inability to measure the key output of the brain limits our understanding of the nervous system and how it changes in disease.

A new study by researchers at Duke University and Harvard University introduces an automated tool that can readily capture behavior of freely behaving animals and precisely reconstruct their three dimensional (3D) pose from a single video camera and without markers.

The April 19 study in Nature Methods led by Timothy W. Dunn, Assistant Professor, Duke University, and Jesse D. Marshall, postdoctoral researcher, Harvard University, describes a new 3D deep-neural network, DANNCE (3-Dimensional Aligned Neural Network for Computational Ethology). The study follows the team's 2020 study in Neuron which revealed the groundbreaking behavioral monitoring system, CAPTURE (Continuous Appendicular and Postural Tracking using Retroreflector Embedding), which uses motion capture and deep learning to continuously track the 3D movements of freely behaving animals. CAPTURE yielded an unprecedented detailed description of how animals behave. However, it required using specialized hardware and attaching markers to animals, making it a challenge to use.

"With DANNCE we relieve this requirement," said Dunn. "DANNCE can learn to track body parts even when they can't be seen, and this increases the types of environments in which the technique can be used. We need this invariance and flexibility to measure movements in naturalistic environments more likely to elicit the full and complex behavioral repertoire of these animals."

DANNCE works across a broad range of species and is reproducible across laboratories and environments, ensuring it will have a broad impact on animal—and even human—behavioral studies. It has a specialized neural network tailored to 3D pose tracking from video. A key aspect is that its 3D feature space is in physical units (meters) rather than camera pixels. This allows the tool to more readily generalize across different camera arrangements and laboratories. In contrast, previous approaches to 3D pose tracking used neural networks tailored to pose detection in two-dimensions (2D), which struggled to readily adapt to new 3D viewpoints.

https://techxplore.com/news/2021-04-3d-deep-neural-network-precisely.html
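The point about a feature space in metres rather than pixels can be illustrated with a pinhole-camera sketch (my own toy numbers, not from the paper): the same 3D point in metres maps to different pixel coordinates under different camera intrinsics, so a network reasoning in metric space is invariant to the camera setup in a way a pixel-space network is not:

```python
def project(point_m, focal_px, principal_px):
    """Pinhole projection of a 3D point (metres, camera frame) to pixel coordinates."""
    x, y, z = point_m
    u = focal_px * x / z + principal_px[0]
    v = focal_px * y / z + principal_px[1]
    return (u, v)

# The same paw position: 25 cm right, 12.5 cm down, 2 m from the camera.
point = (0.25, 0.125, 2.0)

# Two different (hypothetical) cameras see it at very different pixels.
cam_a = project(point, focal_px=800,  principal_px=(320, 240))
cam_b = project(point, focal_px=1200, principal_px=(640, 480))
print(cam_a)  # (420.0, 290.0)
print(cam_b)  # (790.0, 555.0)
```

A 2D pose network trained on camera A's pixels must relearn camera B's geometry; a 3D network working in metres sees the same `(0.25, 0.125, 2.0)` either way.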

Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 23/04/2021 14:46:49
Do Neural Networks Think Like Our Brain? OpenAI Answers!
https://openai.com/blog/multimodal-neurons/
Some of the new AI models are getting closer to human intelligence. It has been shown that they make similar types of mistakes in visual classification. Previously, other AI models made mistakes that no human would ever make, which means their working principles were significantly different. So this is clearly progress, and it seems to make Ray Kurzweil's prediction of human-level AI by 2029 more plausible.
Some AI researchers once predicted that conquering Go would take another 100 years; AlphaGo proved them wrong. That prediction was a product of linear thinking, which grossly underestimates real technological advancement, which follows an exponential or even double-exponential curve.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 24/04/2021 09:54:50
https://www.nextplatform.com/2021/04/22/vertically-unchallenged/
Quote
Components make compute and storage servers, and servers with application plane, control plane, and data plane software running atop them or alongside them make systems, and workflows across systems make platforms. The end state goal of any system architect is really creating a platform. If you don’t have an integrated platform, then what you have is an IT nightmare.

That is what four decades of distributed computing has really taught us, if you boil off all the pretty water that obscures with diffraction and bubbling and look very hard at the bottom of the pot into the substrate of bullshit left behind.

Maybe we should have something called a platform architect? And maybe they don’t have those titles at the big hyperscalers and public cloud builders, but that is, in fact, what these companies are doing. And for those of us who have been around for a while, it is with a certain amount of humor that we are seeing the rise of the most vertically integrated, proprietary platforms that the world has seen since the IBM System/360 mainframe and the DEC VAX, IBM AS/400, and HP 3000 – there was no “E” back then – minicomputers in the 1960s and the 1970s.
The vision of an integrated system has been around for decades now, and it will continue to improve for decades to come.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 26/04/2021 22:40:32
Quote
We are starting to see more exascale and large supercomputing sites benchmark and project on deep learning capabilities of systems designed for HPC applications but only a few have run system-wide tests to see how their machines might stack up against standard CNN and other metrics.

In China, however, we finally have some results about the potential for leadership-class systems to tackle deep learning. That is interesting in itself, but in the case of AI benchmarks on the Tianhe-3 exascale prototype supercomputer, we also get a sense of how that system’s unique Arm-based architecture performs for math that is quite different than that required for HPC modeling/simulation.
Quote
It is hard to tell what to expect from this novel architecture in terms of AI workloads but for us, the news is that the system is operational and teams are at least exploring what might be possible in scaling deep learning using an Arm-based architecture and unique interconnect. It also shows that there is still work to be done to optimize Arm-based processors for even routine AI benchmarks to keep pace with other companies with CPUs and accelerators.
http://www.nextplatform.com/2021/04/19/chinas-exascale-prototype-supercomputer-tests-ai-workloads/
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 27/04/2021 21:09:12
Advancing AI With a Supercomputer: A Blueprint for an Optoelectronic ‘Brain’
Quote
Building a computer that can support artificial intelligence at the scale and complexity of the human brain will be a colossal engineering effort. Now researchers at the National Institute of Standards and Technology have outlined how they think we’ll get there.

How, when, and whether we’ll ever create machines that can match our cognitive capabilities is a topic of heated debate among both computer scientists and philosophers. One of the most contentious questions is the extent to which the solution needs to mirror our best example of intelligence so far: the human brain.

Rapid advances in AI powered by deep neural networks—which despite their name operate very differently than the brain—have convinced many that we may be able to achieve “artificial general intelligence” without mimicking the brain’s hardware or software.

Others think we’re still missing fundamental aspects of how intelligence works, and that the best way to fill the gaps is to borrow from nature. For many that means building “neuromorphic” hardware that more closely mimics the architecture and operation of biological brains.

The problem is that the existing computer technology we have at our disposal looks very different from biological information processing systems, and operates on completely different principles. For a start, modern computers are digital and neurons are analog. And although both rely on electrical signals, they come in very different flavors, and the brain also uses a host of chemical signals to carry out processing.

Now though, researchers at NIST think they’ve found a way to combine existing technologies in a way that could mimic the core attributes of the brain. Using their approach, they outline a blueprint for a “neuromorphic supercomputer” that could not only match, but surpass the physical limits of biological systems.

The key to their approach, outlined in Applied Physics Letters, is a combination of electronics and optical technologies. The logic is that electronics are great at computing, while optical systems can transmit information at the speed of light, so combining them is probably the best way to mimic the brain’s excellent computing and communication capabilities.

https://singularityhub.com/2021/04/26/the-next-supercomputer-a-blueprint-for-an-optoelectronic-brain/
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 29/04/2021 07:33:10
https://www.nature.com/articles/d41586-021-00530-0
Robo-writers: the rise and risks of language-generating AI
A remarkable AI can write like humans — but with no understanding of what it’s saying.
Quote
In June 2020, a new and powerful artificial intelligence (AI) began dazzling technologists in Silicon Valley. Called GPT-3 and created by the research firm OpenAI in San Francisco, California, it was the latest and most powerful in a series of ‘large language models’: AIs that generate fluent streams of text after imbibing billions of words from books, articles and websites. GPT-3 had been trained on around 200 billion words, at an estimated cost of tens of millions of dollars.

The developers who were invited to try out GPT-3 were astonished. “I have to say I’m blown away,” wrote Arram Sabeti, founder of a technology start-up who is based in Silicon Valley. “It’s far more coherent than any AI language system I’ve ever tried. All you have to do is write a prompt and it’ll add text it thinks would plausibly follow. I’ve gotten it to write songs, stories, press releases, guitar tabs, interviews, essays, technical manuals. It’s hilarious and frightening. I feel like I’ve seen the future.”

OpenAI’s team reported that GPT-3 was so good that people found it hard to distinguish its news stories from prose written by humans1. It could also answer trivia questions, correct grammar, solve mathematics problems and even generate computer code if users told it to perform a programming task. Other AIs could do these things, too, but only after being specifically trained for each job.

Large language models are already business propositions. Google uses them to improve its search results and language translation; Facebook, Microsoft and Nvidia are among other tech firms that make them. OpenAI keeps GPT-3’s code secret and offers access to it as a commercial service. (OpenAI is legally a non-profit company, but in 2019 it created a for-profit subentity called OpenAI LP and partnered with Microsoft, which invested a reported US$1 billion in the firm.) Developers are now testing GPT-3’s ability to summarize legal documents, suggest answers to customer-service enquiries, propose computer code, run text-based role-playing games or even identify at-risk individuals in a peer-support community by labelling posts as cries for help.

(https://media.nature.com/lw800/magazine-assets/d41586-021-00530-0/d41586-021-00530-0_18907396.png)
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 30/04/2021 14:23:58
https://www.nature.com/articles/s41586-021-03451-0
Quote
Towards complete and error-free genome assemblies of all vertebrate species

High-quality and complete reference genome assemblies are fundamental for the application of genomics to biology, disease, and biodiversity conservation. However, such assemblies are available for only a few non-microbial species1,2,3,4. To address this issue, the international Genome 10K (G10K) consortium5,6 has worked over a five-year period to evaluate and develop cost-effective methods for assembling highly accurate and nearly complete reference genomes. Here we present lessons learned from generating assemblies for 16 species that represent six major vertebrate lineages. We confirm that long-read sequencing technologies are essential for maximizing genome quality, and that unresolved complex repeats and haplotype heterozygosity are major sources of assembly error when not handled correctly. Our assemblies correct substantial errors, add missing sequence in some of the best historical reference genomes, and reveal biological discoveries. These include the identification of many false gene duplications, increases in gene sizes, chromosome rearrangements that are specific to lineages, a repeated independent chromosome breakpoint in bat genomes, and a canonical GC-rich pattern in protein-coding genes and their regulatory regions. Adopting these lessons, we have embarked on the Vertebrate Genomes Project (VGP), an international effort to generate high-quality, complete reference genomes for all of the roughly 70,000 extant vertebrate species and to help to enable a new era of discovery across the life sciences.
Quote
The Vertebrate Genomes Project
Building on this initial set of assembled genomes and the lessons learned, we propose to expand the VGP to deeper taxonomic phases, beginning with phase 1: representatives of approximately 260 vertebrate orders, defined here as lineages separated by 50 million or more years of divergence from each other. Phase 2 will encompass species that represent all approximately 1,000 vertebrate families; phase 3, all roughly 10,000 genera; and phase 4, nearly all 71,657 extant named vertebrate species (Supplementary Note 5, Supplementary Fig. 3). To accomplish such a project within 10 years, we will need to scale up to completing 125 genomes per week, without sacrificing quality. This includes sample permitting, high molecular weight DNA extractions, sequencing, meta-data tracking, and computational infrastructure. We will take advantage of continuing improvements in genome sequencing technology, assembly, and annotation, including advances in PacBio HiFi reads, Oxford Nanopore reads, and replacements for 10XG reads (Supplementary Note 6), while addressing specific scientific questions at increasing levels of phylogenetic refinement. Genomic technology advances quickly, but we believe the principles of our pipeline and the lessons learned will be applicable to future efforts. Areas in which improvement is needed include more accurate and complete haplotype phasing, base-call accuracy, and resolution of long repetitive regions such as telomeres, centromeres, and sex chromosomes. The VGP is working towards these goals and making all data, protocols, and pipelines openly available (Supplementary Notes 5, 7).

Despite remaining imperfections, our reference genomes are the most complete and highest quality to date for each species sequenced, to our knowledge. When we began to generate genomes beyond the Anna’s hummingbird in 2017, only eight vertebrate species in GenBank had genomes that met our target continuity metrics, and none were haplotype phased (Supplementary Table 23). The VGP pipeline introduced here has now been used to complete assemblies of more than 130 species of similar or higher quality (Supplementary Note 5; BioProject PRJNA489243). We encourage the scientific community to use and evaluate the assemblies and associated raw data, and to provide feedback towards improving all processes for complete and error-free assembled genomes of all species.
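As a back-of-envelope sanity check of the quoted scale-up target (my own arithmetic, not from the paper): at a constant 125 genomes per week, covering all ~71,657 named vertebrate species would take about 11 years, which is consistent with the ten-year goal once the genomes already completed and the planned ramp-up are accounted for:

```python
# Sanity check: 125 genomes/week vs ~71,657 extant named vertebrate species.
species = 71_657
per_week = 125
weeks_per_year = 52

years = species / (per_week * weeks_per_year)
print(round(years, 1))  # 11.0
```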
It seems that in the future we won't need zoos filled with captive animals just to preserve biodiversity. However, genetic information alone is not enough to reproduce fully functional organisms: compatible epigenetic environments are also necessary. A tiger embryo inside a chicken egg is unlikely to grow into a baby tiger.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 30/04/2021 23:44:53
The least a human individual can contribute to society, without doing anything, is to provide a backup of genetic and epigenetic information, which also adds to biodiversity. This contribution is insignificant when there are billions of people, but it would become important if only a few were left.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 01/05/2021 07:50:42
Elon Musk (@elonmusk) tweeted at 5:45 AM on Fri, Apr 30, 2021:
Quote
A major part of real-world AI has to be solved to make unsupervised, generalized full self-driving work, as the entire road system is designed for biological neural nets with optical imagers
But that may not remain the case in the future. At least two things could change it:
When most vehicles are already autonomous.
When VTOL flying cars become abundant, making roads irrelevant.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 02/05/2021 05:33:00
I'd like to share a recent newsfeed from my e-mail. It seems similar to how brains have evolved.

MorphNet is a Google Model to Build Faster and Smaller Neural Networks
The model makes inroads in the optimization of the architecture of neural networks.

Quote
Designing deep neural networks these days is more art than science. In the deep learning space, any given problem can be addressed with a fairly large number of neural network architectures. In that sense, designing a deep neural network from the ground up for a given problem can be incredibly expensive in terms of time and computational resources. Additionally, given the lack of guidance in the space, we often end up producing neural network architectures that are suboptimal for the task at hand. About two years ago, artificial intelligence (AI) researchers from Google published a paper proposing a method called MorphNet to optimize the design of deep neural networks.
Quote
Automated neural network design is one of the most active areas of research in the deep learning space. The most traditional approach to neural network architecture design involves sparse regularizers using methods such as L1. While this technique has proven effective at reducing the number of connections in a neural network, it quite often ends up producing suboptimal architectures. Another approach involves using search techniques to find an optimal neural network architecture for a given problem. That method has been able to generate highly optimized neural network architectures, but it requires an exorbitant number of trial-and-error attempts, which often makes it computationally prohibitive. As a result, neural network architecture search has only proven effective in very specialized scenarios. Factoring in the limitations of the previous methods, we can arrive at three key characteristics of effective automated neural network design techniques:
a) Scalability: The automated design approach should be scalable to large datasets and models.
b) Multi-Factor Optimization: An automated method should be able to optimize the structure of a deep neural network targeting specific resources.
c) Optimal: An automated neural network design should produce an architecture that improves performance while reducing the usage of the target resource.

Quote
MorphNet
Google’s MorphNet approaches the problem of automated neural network architecture design from a slightly different angle. Instead of trying numerous architectures across a large design space, MorphNet starts with an existing architecture for a similar problem and, in one shot, optimizes it for the task at hand.
MorphNet optimizes a deep neural network by iteratively shrinking and expanding its structure. In the shrinking phase, MorphNet identifies inefficient neurons and prunes them from the network by applying a sparsifying regularizer such that the total loss function of the network includes a cost for each neuron. Just doing this typically results in a neural network that consumes less of the targeted resource, but typically achieves lower performance. However, MorphNet applies a specific shrinking model that highlights not only which layers of a neural network are over-parameterized, but also which layers are bottlenecked. Instead of applying a uniform cost per neuron, MorphNet calculates a neuron cost with respect to the targeted resource. As training progresses, the optimizer is aware of the resource cost when calculating gradients, and thus learns which neurons are resource-efficient and which can be removed.
https://medium.com/@jrodthoughts/morphnet-is-a-google-model-to-build-faster-and-smaller-neural-networks-f890276da456
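The shrinking idea can be sketched with a toy example: score each neuron (here, each output column of a weight matrix) by its magnitude, which a sparsifying regularizer drives towards zero for inefficient neurons, then prune the small ones. This is a simplified stand-in for MorphNet's resource-weighted neuron costs, not its actual algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)
weights = rng.normal(size=(8, 6))   # a layer with 8 inputs -> 6 neurons
weights[:, [1, 4]] *= 0.01          # pretend regularization shrank two neurons

# Per-neuron L2 magnitude as a crude "cost/usefulness" score.
neuron_norms = np.linalg.norm(weights, axis=0)

# Prune neurons whose magnitude fell well below the largest one.
keep = neuron_norms > 0.1 * neuron_norms.max()
pruned = weights[:, keep]

print(int(keep.sum()))   # 4 neurons survive
print(pruned.shape)      # (8, 4)
```

MorphNet's refinement over this naive pruning is that the per-neuron cost is computed with respect to a target resource (FLOPs, model size), so the shrink/expand cycle optimizes for that resource rather than just sparsity.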
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 03/05/2021 09:49:59
DINO: Emerging Properties in Self-Supervised Vision Transformers (Facebook AI Research Explained)
Quote
Self-Supervised Learning is the final frontier in Representation Learning: Getting useful features without any labels. Facebook AI's new system, DINO, combines advances in Self-Supervised Learning for Computer Vision with the new Vision Transformer (ViT) architecture and achieves impressive results without any labels. Attention maps can be directly interpreted as segmentation maps, and the obtained representations can be used for image retrieval and zero-shot k-nearest neighbor classifiers (KNNs).

OUTLINE:
0:00​ - Intro & Overview
6:20​ - Vision Transformers
9:20​ - Self-Supervised Learning for Images
13:30​ - Self-Distillation
15:20​ - Building the teacher from the student by moving average
16:45​ - DINO Pseudocode
23:10​ - Why Cross-Entropy Loss?
28:20​ - Experimental Results
33:40​ - My Hypothesis why this works
38:45​ - Conclusion & Comments
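The "building the teacher from the student by moving average" step in the outline is an exponential moving average (EMA) over parameters: the teacher changes slowly and stably while the student trains. Here is a minimal sketch with scalar stand-ins for the network weights (illustrative only; in DINO this is applied to the full network):

```python
def ema_update(teacher, student, momentum=0.996):
    """In-place EMA: teacher <- m * teacher + (1 - m) * student."""
    for k in teacher:
        teacher[k] = momentum * teacher[k] + (1.0 - momentum) * student[k]

teacher = {"w": 0.0}
student = {"w": 1.0}   # pretend the student has settled at w = 1.0

for _ in range(1000):
    ema_update(teacher, student)

print(teacher["w"])  # drifts towards the student's value, slowly
```

The high momentum (0.996 or more in practice) means the teacher lags the student by many updates, which is what makes it a stable target for self-distillation.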
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 03/05/2021 13:08:47
https://theconversation.com/engineered-viruses-can-fight-the-rise-of-antibiotic-resistant-bacteria-154337
Quote
As the world fights the SARS-CoV-2 virus causing the COVID-19 pandemic, another group of dangerous pathogens looms in the background. The threat of antibiotic-resistant bacteria has been growing for years and appears to be getting worse. If COVID-19 taught us one thing, it’s that governments should be prepared for more global public health crises, and that includes finding new ways to combat rogue bacteria that are becoming resistant to commonly used drugs.

In contrast to the current pandemic, viruses may be the heroes of the next epidemic rather than the villains. Scientists have shown that viruses could be great weapons against bacteria that are resistant to antibiotics.
Quote
Since the discovery of penicillin in 1928, antibiotics have changed modern medicine. These small molecules fight off bacterial infections by killing or inhibiting the growth of bacteria. The mid-20th century was called the Golden Age for antibiotics, a time when scientists were discovering dozens of new molecules for many diseases.

This high was soon followed by a devastating low. Researchers saw that many bacteria were evolving resistance to antibiotics. Bacteria in our bodies were learning to evade medicine by evolving and mutating to the point that antibiotics no longer worked.

As an alternative to antibiotics, some researchers are turning to a natural enemy of bacteria: bacteriophages. Bacteriophages are viruses that infect bacteria. They outnumber bacteria 10 to 1 and are considered the most abundant organisms on the planet.

Bacteriophages, also known as phages, survive by infecting bacteria, replicating and bursting out from their host, which destroys the bacterium.

Harnessing the power of phages to fight bacteria isn’t a new idea. In fact, the first recorded use of so-called phage therapy was over a century ago. In 1919, French microbiologist Félix d'Hérelle used a cocktail of phages to treat children suffering from severe dysentery.

D'Hérelle’s actions weren’t an accident. In fact, he is credited with co-discovering phages, and he pioneered the idea of using bacteria’s natural enemies in medicine. He would go on to stop cholera outbreaks in India and plague in Egypt.

Phage therapy is not a standard treatment you can find in your local hospital today. But excitement about phages has grown over the past few years. In particular, scientists are using new knowledge about the complex relationship between phages and bacteria to improve phage therapy. By engineering phages to better target and destroy bacteria, scientists hope to overcome antibiotic resistance.
Quote
Now scientists are hoping to use the knowledge about CRISPR systems to engineer phages to destroy dangerous bacteria.

When the engineered phage locates specific bacteria, the phage injects CRISPR proteins inside the bacteria, cutting and destroying the microbes’ DNA. Scientists have found a way to turn defense into offense. The proteins normally involved in protecting against viruses are repurposed to target and destroy the bacteria’s own DNA. The scientists can specifically target the DNA that makes the bacteria resistant to antibiotics, making this type of phage therapy extremely effective.
Quote
Science is only half of the solution when it comes to fighting these microbes. Commercialization and regulation are important to ensure that this technology is in society’s toolkit for fending off a worldwide spread of antibiotic-resistant bacteria.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 08/05/2021 14:58:08
https://neurosciencenews.com/3d-neuroimaging-18378/
New Imaging Technique Captures How Brain Moves in Stunning Detail
Quote
Summary: A new neuroimaging technique captures the brain in motion in real-time, generating a 3D view and with improved detail. The new technology could help clinicians to spot hard-to-detect neurological conditions.

Source: Stevens Institute of Technology
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 10/05/2021 05:53:14
How Close Are We to Harnessing Synthetic Life?

Quote
Scientists are exploring how to edit genomes and even create brand new ones that never existed before, but how close are we to harnessing synthetic life?

Scientists have made major strides when it comes to understanding the base code that underlies all living things—but what if we could program living cells like software?

The principle behind synthetic biology, the emerging study of building living systems, lies in this ability to synthesize life. An ability to create animal products, individualized medical therapies, and even transplantable organs, all starting with synthetic DNA and cells in a lab.

There are two main schools of thought when it comes to synthesizing life: building artificial cells from the bottom up, or engineering microorganisms so significantly that their genomes are resynthesized and redesigned.

With genetic engineering tools becoming more and more accessible, researchers want to use these synthesized genomes to enhance human health with regards to things like detecting infections or environmental pollutants. Bacterial cells can be engineered that will detect toxic chemicals.

And these synthesized bacteria could potentially protect us from, for example, consuming toxins in contaminated water.

The world of synthetic biology goes beyond human health though, it can be used in a variety of industries, including fashion. Researchers hope to come up with lab-made versions of materials like leather or silk.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 11/05/2021 05:58:24
It's Alive, But Is It Life: Synthetic Biology and the Future of Creation
Quote
For decades, biologists have read and edited DNA, the code of life. Revolutionary developments are giving scientists the power to write it. Instead of tinkering with existing life forms, synthetic biologists may be on the verge of writing the DNA of a living organism from scratch. In the next decade, according to some, we may even see the first synthetic human genome. Join a distinguished group of synthetic biologists, geneticists and bioengineers who are edging closer to breathing life into matter.

This program is part of the Big Ideas Series, made possible with support from the John Templeton Foundation.

Original Program Date: June 4, 2016
MODERATOR: Robert Krulwich
PARTICIPANTS: George Church, Drew Endy, Tom Knight, Pamela Silver
Quote
Synthetic Biology and the Future of Creation 00:00​

Participant Intros 3:25​

Ordering DNA from the internet 8:10​
 
How much does it cost to make a synthetic human? 13:04​

Why is yeast the best catalyst 20:10​

How George Church printed 90 billion copies of his book 26:05​

Creating synthetic rose oil 28:35​

Safety engineering and synthetic biology 37:15​

Do we want to be invaded by bad bacteria? 45:26​

Do you need human genes to create human cells? 55:09​

The standard of DNA sequencing in utero 1:02:27​

The science community is divided by closed press meetings 1:11:30​

The Human Genome Project. What is it? 1:21:45​
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 14/05/2021 05:39:05
DeepMind Wants to Reimagine One of the Most Important Algorithms in Machine Learning.

In one of the most important papers this year, DeepMind proposed a multi-agent structure to redefine PCA.
Quote
Principal component analysis (PCA) is one of the key algorithms in any machine learning curriculum. Initially created in the early 1900s, PCA is a fundamental algorithm for understanding data in high-dimensional spaces, which are common in deep learning problems. More than a century after its invention, PCA is such a key part of modern deep learning frameworks that very few question whether there could be a better approach. Just a few days ago, DeepMind published a fascinating paper that looks to redefine PCA as a competitive multi-agent game called EigenGame.

Titled “EigenGame: PCA as a Nash Equilibrium”, the DeepMind work is one of those papers you can’t resist reading just based on the title. Redefining PCA sounds ludicrous. And yet, DeepMind’s thesis makes perfect sense the minute you dive into it.

In recent years, PCA techniques have hit a bottleneck in large-scale deep learning scenarios. Traditional PCA is formulated as an optimization problem that is hard to scale across large computational clusters. A multi-agent approach to PCA might be able to leverage vast computational resources and produce better optimizations in modern deep learning problems.
https://medium.com/@jrodthoughts/deepmind-wants-to-reimagine-one-of-the-most-important-algorithms-in-machine-learning-381884d42de
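For context, the classical computation EigenGame reformulates can be sketched in a few lines: power iteration recovers the top principal component of a data covariance matrix. EigenGame recasts this kind of eigenvector computation as a game among parallel "player" vectors; the toy below only shows the single-vector iteration it generalizes:

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic data whose first axis carries the most variance.
data = rng.normal(size=(500, 3)) * np.array([3.0, 1.0, 0.3])
cov = np.cov(data, rowvar=False)

# Power iteration: repeatedly apply the covariance and renormalize.
v = rng.normal(size=3)
for _ in range(100):
    v = cov @ v
    v /= np.linalg.norm(v)

# Compare against the top eigenvector from a direct decomposition.
eigvals, eigvecs = np.linalg.eigh(cov)   # eigh sorts eigenvalues ascending
top = eigvecs[:, -1]

print(abs(v @ top))  # close to 1.0: same direction up to sign
```

The scaling problem DeepMind targets is that direct decompositions like `eigh` don't distribute well over huge datasets, whereas iterative, gradient-like updates of many vectors in parallel do.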
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 17/05/2021 12:55:16
Quote
The advent of Transformers in 2017 completely changed the world of neural networks. Ever since, the core concept of Transformers has been remixed, repackaged, and rebundled in several models. The results have surpassed the state of the art in several machine learning benchmarks. In fact, currently all top benchmarks in the field of natural language processing are dominated by Transformer-based models. Some of the Transformer-family models are BERT, ALBERT, and the GPT series of models.

In any machine learning model, the most important components of the training process are:
The code of the model — the components of the model and its configuration
The data to be used for training
The available compute power
With the Transformer family of models, researchers finally arrived at a way to increase the performance of a model infinitely: You just increase the amount of training data and compute power.

This is exactly what OpenAI did, first with GPT-2 and then with GPT-3. Being a well funded ($1 billion+) company, it could afford to train some of the biggest models in the world. A private corpus of 500 billion tokens was used for training the model, and approximately $50 million was spent in compute costs.

While the code for most of the GPT language models is open source, the model is impossible to replicate without the massive amounts of data and compute power. And OpenAI has chosen to withhold public access to its trained models, making them available via API to only a select few companies and individuals. Further, its access policy is undocumented, arbitrary, and opaque.

https://venturebeat.com/2021/05/15/gpt-3s-free-alternative-gpt-neo-is-something-to-be-excited-about/
Quote
The bottom line here is: GPT-Neo is a great open source alternative to GPT-3, especially given OpenAI’s closed access policy.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 19/05/2021 12:03:57
https://bdtechtalks.com/2021/05/17/ibms-codenet-machine-learning-programming/
IBM’s Project CodeNet will test how far you can push AI to write software
Quote
IBM’s AI research division has released a 14-million-sample dataset to develop machine learning models that can help in programming tasks. Called Project CodeNet, the dataset takes its name after ImageNet, the famous repository of labeled photos that triggered a revolution in computer vision and deep learning.

While there’s a scant chance that machine learning models built on the CodeNet dataset will make human programmers redundant, there’s reason to be hopeful that they will make developers more productive.
Quote
With Project CodeNet, the researchers at IBM have tried to create a multi-purpose dataset that can be used to train machine learning models for various tasks. CodeNet’s creators describe it as a “very large scale, diverse, and high-quality dataset to accelerate the algorithmic advances in AI for Code.”

The dataset contains 14 million code samples with 500 million lines of code written in 55 different programming languages. The code samples have been obtained from submissions to nearly 4,000 challenges posted on online coding platforms AIZU and AtCoder. The code samples include both correct and incorrect answers to the challenges.

One of the key features of CodeNet is the amount of annotation that has been added to the examples. Every one of the coding challenges included in the dataset has a textual description along with CPU time and memory limits. Every code submission has a dozen pieces of information, including the language, the date of submission, size, execution time, acceptance, and error types.

The researchers at IBM have also gone through great effort to make sure the dataset is balanced along different dimensions, including programming language, acceptance, and error types.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 21/05/2021 07:27:49
https://bdtechtalks.com/2021/05/13/machine-learning-dimensionality-reduction/
Machine learning: What is dimensionality reduction?
Quote
Machine learning algorithms have gained fame for being able to ferret out relevant information from datasets with many features, such as tables with dozens of rows and images with millions of pixels. Thanks to advances in cloud computing, you can often run very large machine learning models without noticing how much computational power works behind the scenes.

But every new feature that you add to your problem adds to its complexity, making it harder to solve it with machine learning algorithms. Data scientists use dimensionality reduction, a set of techniques that remove excessive and irrelevant features from their machine learning models.

Dimensionality reduction slashes the costs of machine learning and sometimes makes it possible to solve complicated problems with simpler models.

Measuring general intelligence and general consciousness are examples of dimensionality reduction: multiple parameters are reduced to a single number.
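As a toy illustration of that point (the scores below are entirely made up), collapsing several sub-test scores into one summary index is a one-line dimensionality reduction:

```python
import numpy as np

# Hypothetical sketch: reducing three sub-test scores per person
# to a single "general ability" style index.
scores = np.array([
    [80, 75, 90],   # person 1: three hypothetical sub-test scores
    [60, 65, 55],   # person 2
    [95, 88, 92],   # person 3
])
z = (scores - scores.mean(axis=0)) / scores.std(axis=0)  # standardize each test
g = z.mean(axis=1)   # one number per person (equal-weight index)
print(np.round(g, 2))
```

Real psychometric indices typically weight the sub-tests (e.g. by their first principal component) rather than averaging equally, but the reduction from many parameters to one number is the same move.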
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 28/05/2021 04:37:15
DeepMind’s Demis Hassabis on its breakthrough scientific discoveries | WIRED Live
Quote
Deepmind, Co-founder and CEO, Demis Hassabis discusses how we can avoid bias being built into AI systems and what's next for DeepMind, including the future of protein folding, at WIRED Live 2020.

"If we build it right, AI systems could be less biased than we are."
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 28/05/2021 04:42:53
https://www.newscientist.com/article/2268496-people-can-answer-questions-about-their-dreams-without-waking-up
Quote
Talking to people while they are asleep can influence their dreams – and in some cases, the dreamer can respond without waking up.

Ken Paller at Northwestern University in Evanston, Illinois, and his colleagues found that people could answer questions and even solve maths problems while lucid dreaming – a state that typically occurs during rapid eye-movement (REM) sleep when the dreamer is aware of being in a dream, and is sometimes able to control it.

“We asked questions where we knew the answer because what we wanted to do is determine whether we were having good communication. We had to know if they were answering correctly,” says Paller.

The team asked dreamers yes-no questions relating to their backgrounds and experiences, along with simple maths problems involving addition and subtraction. The dreamers weren’t aware of what questions they would be asked before they went to sleep.

The dreamers, who had a range of experience with lucid dreaming, answered the questions correctly 29 times, incorrectly five times, and ambiguously 28 times by twitching their face muscles or moving their eyes. They didn’t respond on 96 occasions.

Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 28/05/2021 04:45:15
Inside Google’s DeepMind Project: How AI Is Learning On Its Own
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 28/05/2021 04:53:56
The AI Hardware Problem
Quote
The millennia-old idea of expressing signals and data as a series of discrete states had ignited a revolution in the semiconductor industry during the second half of the 20th century. This new information age thrived on the robust and rapidly evolving field of digital electronics. The abundance of automation and tooling made it relatively manageable to scale designs in complexity and performance as demand grew. However, the power being consumed by AI and machine learning applications cannot feasibly grow as is on existing processing architectures.

THE MAC
In a digital neural network implementation, the weights and input data are stored in system memory and must be fetched and stored continuously through the sea of multiply-accumulate operations within the network. This approach results in most of the power being dissipated in fetching and storing model parameters and input data to the arithmetic logic unit of the CPU, where the actual multiply-accumulate operation takes place. A typical multiply-accumulate operation within a general-purpose CPU consumes more than two orders of magnitude more energy in this data movement than in the computation itself.

GPUs
Their ability to process 3D graphics requires a large number of arithmetic logic units coupled to high-speed memory interfaces. This characteristic inherently made them far more efficient and faster for machine learning by allowing hundreds of multiply-accumulate operations to be processed simultaneously. GPUs tend to utilize floating-point arithmetic, using 32 bits to represent a number by its mantissa, exponent, and sign. Because of this, GPU-targeted machine learning applications have been forced to use floating-point numbers.

ASICs
These dedicated AI chips offer dramatically larger amounts of data movement per joule when compared to GPUs and general-purpose CPUs. This came as a result of the discovery that, with certain types of neural networks, a dramatic reduction in computational precision only reduces network accuracy by a small amount. However, it will soon become infeasible to increase the number of multiply-accumulate units integrated onto a chip, or to reduce bit-precision further.

LOW POWER AI

Outside the realm of the digital world, it's known definitively that extraordinarily dense neural networks can operate efficiently with small amounts of power.

Much of the industry believes that the digital aspect of current systems will need to be augmented with a more analog approach in order to take machine learning efficiency further. With analog, computation does not occur in clocked stages of moving data, but rather exploits the inherent properties of a signal and how it interacts with a circuit, combining memory, logic, and computation into a single entity that can operate efficiently in a massively parallel manner. Some companies are beginning to examine returning to the long-outdated technology of analog computing to tackle the challenge. Analog computing attempts to manipulate small electrical currents via common analog circuit building blocks to do math.

These signals can be mixed and compared, replicating the behavior of their digital counterparts. However, while large-scale analog computing has been explored for decades for various potential applications, it has never been successfully executed as a commercial solution. Currently, the most promising approach to the problem is to integrate programmable analog computing elements into large arrays that are similar in principle to digital memory. By configuring the cells in an array, an analog signal, synthesized by a digital-to-analog converter, is fed through the network.

As this signal flows through a network of pre-programmed resistors, the currents are added to produce a resultant analog signal, which can be converted back to a digital value via an analog-to-digital converter. Using an analog system for machine learning does, however, introduce several issues. Analog systems are inherently limited in precision by the noise floor, though, much like using lower bit-width digital systems, this becomes less of an issue for certain types of networks.

If analog circuitry is used for inferencing, the result may not be deterministic and is more likely to be affected by heat, noise, or other external factors than a digital system. Another problem with analog machine learning is explainability. Unlike digital systems, analog systems offer no easy method to probe or debug the flow of information within them. Some in the industry propose that a solution may lie in using low-precision, high-speed analog processors for most situations, while funneling results that require higher confidence to slower, high-precision, easily interrogated digital systems.
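The claim above, that reduced bit-precision costs only a little accuracy, is easy to demonstrate. Here is a toy sketch (not any chip's actual arithmetic) of the same multiply-accumulate done in float32 and in quantized 8-bit integers:

```python
import numpy as np

# Toy comparison: a 256-element multiply-accumulate (dot product)
# in float32 versus int8-quantized arithmetic.
rng = np.random.default_rng(1)
w = rng.normal(scale=0.5, size=256).astype(np.float32)  # synthetic weights
x = rng.normal(scale=0.5, size=256).astype(np.float32)  # synthetic inputs

exact = float(np.dot(w, x))  # the float32 MAC chain

# Quantize both tensors to int8 with a per-tensor scale,
# accumulate in integers, then rescale the result.
sw, sx = np.abs(w).max() / 127, np.abs(x).max() / 127
wq = np.round(w / sw).astype(np.int32)
xq = np.round(x / sx).astype(np.int32)
approx = int(np.dot(wq, xq)) * sw * sx

print(exact, approx)  # close, despite 8-bit operands
```

The two results agree closely, which is the observation that let ASIC designers trade precision for energy per operation.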
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 30/05/2021 07:23:41
Microsoft's ZeRO-Infinity Library Trains 32 Trillion Parameter AI Model
https://www.infoq.com/news/2021/05/microsoft-zero-infinity/
Quote
Microsoft recently announced ZeRO-Infinity, an addition to their open-source DeepSpeed AI training library that optimizes memory use for training very large deep-learning models. Using ZeRO-Infinity, Microsoft trained a model with 32 trillion parameters on a cluster of 32 GPUs, and demonstrated fine-tuning of a 1 trillion parameter model on a single GPU.

The DeepSpeed team described the new features in a recent blog post. ZeRO-Infinity is the latest iteration of the Zero Redundancy Optimizer (ZeRO) family of memory optimization techniques. ZeRO-Infinity introduces several new strategies for addressing memory and bandwidth constraints when training large deep-learning models, including: a new offload engine for exploiting CPU and Non-Volatile Memory express (NVMe) memory, memory-centric tiling to handle large operators without model-parallelism, bandwidth-centric partitioning for reducing bandwidth costs, and an overlap-centric design for scheduling data communication. According to the DeepSpeed team:
Quote
The improved ZeRO-Infinity offers the system capability to go beyond the GPU memory wall and train models with tens of trillions of parameters, an order of magnitude bigger than state-of-the-art systems can support. It also offers a promising path toward training 100-trillion-parameter models.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 30/05/2021 11:58:59
https://www.cnbc.com/2021/05/27/europeans-want-to-replace-lawmakers-with-ai.html
More than half of Europeans want to replace lawmakers with AI, study says

Quote
Researchers at IE University’s Center for the Governance of Change asked 2,769 people from 11 countries worldwide how they would feel about reducing the number of national parliamentarians in their country and giving those seats to an AI that would have access to their data.

The results, published Thursday, showed that despite AI’s clear and obvious limitations, 51% of Europeans said they were in favor of such a move.

Outside Europe, some 75% of people surveyed in China supported the idea of replacing parliamentarians with AI, while 60% of American respondents opposed it.

LONDON — A study has found that most Europeans would like to see some of their members of parliament replaced by algorithms.


IMO, politicians are more likely to sacrifice the best interests of their constituents to serve their own interests. An AI's decisions, meanwhile, would depend on the terminal goal assigned to it and the data fed into it. That makes alignment with the universal terminal goal a critical step in building an AI with such huge power and responsibility.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 30/05/2021 22:26:52
https://syncedreview.com/2021/05/14/deepmind-podracer-tpu-based-rl-frameworks-deliver-exceptional-performance-at-low-cost-19

Google Replaces BERT Self-Attention with Fourier Transform: 92% Accuracy, 7 Times Faster on GPUs

Quote
Transformer architectures have come to dominate the natural language processing (NLP) field since their 2017 introduction. One of the only limitations to transformer application is the huge computational overhead of its key component — a self-attention mechanism that scales with quadratic complexity with regard to sequence length.

New research from a Google team proposes replacing the self-attention sublayers with simple linear transformations that “mix” input tokens to significantly speed up the transformer encoder with limited accuracy cost. Even more surprisingly, the team discovers that replacing the self-attention sublayer with a standard, unparameterized Fourier Transform achieves 92 percent of the accuracy of BERT on the GLUE benchmark, with training times that are seven times faster on GPUs and twice as fast on TPUs.

Transformers’ self-attention mechanism enables inputs to be represented with higher-order units to flexibly capture diverse syntactic and semantic relationships in natural language. Researchers have long regarded the associated high complexity and memory footprint as an unavoidable trade-off on transformers’ impressive performance. But in the paper FNet: Mixing Tokens with Fourier Transforms, the Google team challenges this thinking with FNet, a novel model that strikes an excellent balance between speed, memory footprint and accuracy.
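The core of the FNet idea is strikingly simple. As a rough sketch (not the actual FNet code, and with made-up dimensions), the unparameterized mixing sublayer is just a 2D Fourier transform over the sequence and hidden dimensions, keeping the real part:

```python
import numpy as np

# FNet-style token mixing: replace the self-attention sublayer
# with a standard, unparameterized 2D Fourier transform.
seq_len, hidden = 8, 16
rng = np.random.default_rng(0)
tokens = rng.normal(size=(seq_len, hidden))  # one layer's input activations

mixed = np.fft.fft2(tokens).real  # mix along both axes, keep the real part

print(mixed.shape)  # same shape as the input; every token now "sees" all others
```

Because the FFT is O(n log n) in sequence length and has no learned parameters, it sidesteps the quadratic cost of self-attention, which is where the reported 7x GPU speedup comes from.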
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 01/06/2021 12:21:21

https://bdtechtalks.com/2021/05/27/artificial-intelligence-neurons-assemblies/

A simple model of the brain provides new directions for AI research
Quote
Last week, Google Research held an online workshop on the conceptual understanding of deep learning. The workshop, which featured presentations by award-winning computer scientists and neuroscientists, discussed how new findings in deep learning and neuroscience can help create better artificial intelligence systems.
Quote
The cognitive and neuroscience communities are trying to make sense of how neural activity in the brain translates to language, mathematics, logic, reasoning, planning, and other functions. If scientists succeed at formulating the workings of the brain in terms of mathematical models, then they will open a new door to creating artificial intelligence systems that can emulate the human mind.

A lot of studies focus on activities at the level of single neurons. Until a few decades ago, scientists thought that single neurons corresponded to single thoughts. The most popular example is the “grandmother cell” theory, which claims there’s a single neuron in the brain that spikes every time you see your grandmother. More recent discoveries have refuted this claim and have proven that large groups of neurons are associated with each concept, and there might be overlaps between neurons that link to different concepts.

These groups of brain cells are called “assemblies,” which Papadimitriou describes as “a highly connected, stable set of neurons which represent something: a word, an idea, an object, etc.”

Award-winning neuroscientist György Buzsáki describes assemblies as “the alphabet of the brain.”
(https://i2.wp.com/bdtechtalks.com/wp-content/uploads/2021/05/brain-assemblies.jpg)
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 02/06/2021 23:26:03
I bring the discussion here from my main thread to explore further into the detail.
Trial and error would be much cheaper, hence more efficient, if we could do it in a virtual environment such as a computer simulation, provided we can make it adequately accurate and precise in representing objective reality.

Adequately accurate and precise virtual representation of objective reality is what we commonly call knowledge. It's a form of data compression.
At the most fundamental level, knowledge consists of two types of data: nodes and edges. They are the data points and the relationships among them, respectively.

In information theory, one bit of information reduces the uncertainty by a half. To eliminate uncertainty entirely, we need infinite bits of information.
In practice, we may think that we can make a statement precisely, without leaving any uncertainty, using finite bits of information. For example, x-x=0 and x/x=1, with seemingly zero uncertainty.
On the other hand, writing the ratio between the circumference and diameter of a circle as a decimal number, accurately and without uncertainty, requires infinitely many digits. What makes the difference here?
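The halving idea can be made concrete: identifying one of N equally likely outcomes takes log2(N) bits, so a quantity with a continuum of possible values can never be pinned down exactly by finitely many bits.

```python
import math

# Each bit of information halves the number of remaining possibilities,
# so one of N equally likely outcomes needs log2(N) bits.
for n in (2, 8, 1024):
    print(n, math.log2(n))  # 1.0, 3.0, 10.0 bits respectively

# A real number on a continuum has uncountably many possibilities:
# no finite number of halvings singles one out exactly, which is why
# a generic decimal expansion needs infinitely many digits.
```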
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 03/06/2021 02:49:39
https://en.wikipedia.org/wiki/Single_source_of_truth
Quote
In information systems design and theory, single source of truth (SSOT) is the practice of structuring information models and associated data schema such that every data element is mastered (or edited) in only one place. Any possible linkages to this data element (possibly in other areas of the relational schema or even in distant federated databases) are by reference only. Because all other locations of the data just refer back to the primary "source of truth" location, updates to the data element in the primary location propagate to the entire system without the possibility of a duplicate value somewhere being forgotten.

Deployment of an SSOT architecture is becoming increasingly important in enterprise settings where incorrectly linked duplicate or de-normalized data elements (a direct consequence of intentional or unintentional denormalization of any explicit data model) pose a risk for retrieval of outdated, and therefore incorrect, information. A common example would be the electronic health record, where it is imperative to accurately validate patient identity against a single referential repository, which serves as the SSOT. Duplicate representations of data within the enterprise would be implemented by the use of pointers rather than duplicate database tables, rows, or cells. This ensures that data updates to elements in the authoritative location are comprehensively distributed to all federated database constituencies in the larger overall enterprise architecture.
SSOT systems provide data that are authentic, relevant, and referable.

https://www.talend.com/resources/single-source-truth/
Quote
What is a single source of truth (SSOT)?
Single source of truth (SSOT) is a concept used to ensure that everyone in an organization bases business decisions on the same data. Creating a single source of truth is straightforward. To put an SSOT in place, an organization must provide relevant personnel with one source that stores the data points they need.

Data-driven decision making has placed never-before-seen levels of importance on collecting and analyzing data. While acting on data-derived business intelligence is essential for competitive brands today, companies often spend far too much time debating which numbers, invariably from different sources, are the right numbers to use. Metrics from social platforms may paint one picture of a company’s target demographics while vendor feedback or online questionnaires may say something entirely different. How are corporate leaders to decide whose data points to use in such a scenario?

Establishing a single source of truth eliminates this issue. Instead of debating which of many competing data sources should be used for making company decisions, everyone can use the same, unified source for all their data needs. It provides data that can be used by anyone, in any way, across the entire organization.
Currently, efforts to establish a single source of truth are becoming common in business organizations as well as governments. But they are still limited to internal usage, and seemingly independent from one another, although they share the same objective reality. When there are discrepancies, it feels as if there were alternative truths.
A common example I often see is road closures by the government which are not accurately represented in Google Maps.
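In code terms, the SSOT principle is just "store each element once, reference it everywhere". A toy sketch, using a hypothetical patient record as in the electronic health record example above:

```python
# The SSOT: each patient record is mastered in exactly one place.
patients = {"P001": {"name": "Alice", "address": "Old St 1"}}

# Other systems hold references (keys), never copies of the record.
billing_ref = "P001"
pharmacy_ref = "P001"

# One update at the authoritative source...
patients["P001"]["address"] = "New St 2"

# ...and every consumer that dereferences sees the new value.
print(patients[billing_ref]["address"])   # "New St 2"
print(patients[pharmacy_ref]["address"])  # "New St 2"
```

Had billing and pharmacy each stored their own copy of the address, the update would have had to reach every copy, and any copy it missed would become the "alternative truth" described above.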
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 03/06/2021 11:58:31
In practice, we may think that we can make a statement precisely, without leaving any uncertainty, using finite bits of information. For example, x-x=0 and x/x=1, with seemingly zero uncertainty.
On the other hand, writing the ratio between the circumference and diameter of a circle as a decimal number, accurately and without uncertainty, requires infinitely many digits. What makes the difference here?
Here is another example. We can say that the smallest prime number is 2, without leaving any uncertainty.
The square root of -1 is i.
The speed of light through a vacuum is 299792458 metres per second.
We can also say that the ratio between the circumference and diameter of a circle is π, with no uncertainty.
If someone says that a value equals e, we need more information as context: whether it refers to Euler's number, the charge of an electron, or something else.
At this point it should be clear that any new information must be related to preexisting common knowledge for it to be meaningful.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 03/06/2021 16:34:24
Calculating pi efficiently.
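One classical way to do this is Machin's formula, π/4 = 4·arctan(1/5) − arctan(1/239), whose series converges far faster than the naive Leibniz series (roughly 1.4 extra decimal digits per term of arctan(1/5)). A sketch using Python's decimal module:

```python
from decimal import Decimal, getcontext

def arctan_inv(x: int, digits: int) -> Decimal:
    """arctan(1/x) via its Taylor series, good to about `digits` places."""
    getcontext().prec = digits + 10          # guard digits
    eps = Decimal(10) ** -(digits + 5)
    x2 = x * x
    term = Decimal(1) / x                    # first term: 1/x
    total = term
    n, sign = 1, 1
    while term > eps:                        # stop when terms are negligible
        term /= x2                           # next power of 1/x^2
        n += 2
        sign = -sign
        total += sign * term / n             # alternating series
    return total

def machin_pi(digits: int) -> Decimal:
    """pi via Machin's formula: pi = 16*arctan(1/5) - 4*arctan(1/239)."""
    pi = 4 * (4 * arctan_inv(5, digits) - arctan_inv(239, digits))
    return +pi  # unary plus rounds to the working precision

print(machin_pi(30))
```

Modern record computations use far faster series (e.g. Chudnovsky's), but the structure is the same: a rapidly converging series evaluated in arbitrary-precision arithmetic.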
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 04/06/2021 06:29:45
Quote
Sporting 1.75 trillion parameters, Wu Dao 2.0 is roughly ten times the size of Open AI's GPT-3.
https://www.engadget.com/chinas-gigantic-multi-modal-ai-is-no-one-trick-pony-211414388.html

Quote
When Open AI's GPT-3 model made its debut in May of 2020, its performance was widely considered to be the literal state of the art. Capable of generating text indiscernible from human-crafted prose, GPT-3 set a new standard in deep learning. But oh what a difference a year makes. Researchers from the Beijing Academy of Artificial Intelligence announced on Tuesday the release of their own generative deep learning model, Wu Dao, a mammoth AI seemingly capable of doing everything GPT-3 can do, and more.

First off, Wu Dao is flat out enormous. It's been trained on 1.75 trillion parameters (essentially, the model's self-selected coefficients) which is a full ten times larger than the 175 billion GPT-3 was trained on and 150 billion parameters larger than Google's Switch Transformers.

With all that computing power comes a whole bunch of capabilities. Unlike most deep learning models which perform a single task — write copy, generate deep fakes, recognize faces, win at Go — Wu Dao is multi-modal, similar in theory to Facebook's anti-hatespeech AI or Google's recently released MUM. BAAI researchers demonstrated Wu Dao's abilities to perform natural language processing, text generation, image recognition, and image generation tasks during the lab's annual conference on Tuesday. The model can not only write essays, poems and couplets in traditional Chinese, it can both generate alt text based off of a static image and generate nearly photorealistic images based on natural language descriptions. Wu Dao also showed off its ability to power virtual idols (with a little help from Microsoft-spinoff XiaoIce) and predict the 3D structures of proteins like AlphaFold.

“The way to artificial general intelligence is big models and big computer,” Dr. Zhang Hongjiang, chairman of BAAI, said during the conference Tuesday. “What we are building is a power plant for the future of AI, with mega data, mega computing power, and mega models, we can transform data to fuel the AI applications of the future.”
The article shows how close we are to building AGI.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 04/06/2021 10:12:53
At this point it should be clear that any new information must be related to preexisting common knowledge for it to be meaningful.
Here is an example from daily life. If I tell someone face to face that I found his ID card and I'm keeping it in the pocket of the shirt I'm wearing, he can quickly find it. But if I tell someone over the phone, it won't be clear to him until he knows my location. The location can be stated as the name of the building, the address, or geographic coordinates as latitude and longitude. If I'm inside a tall building, the vertical position, such as floor number or altitude, is also necessary.
If I speak to an alien in another solar system, I would need to give the position of planet Earth and the Sun. If the alien is from another galaxy, then I need to give the position of the Milky Way too.
If I tell you that X=2Y, you get no new information until you can relate it to your preexisting knowledge.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 06/06/2021 00:05:12
A common example I often see is road closures by the government which are not accurately represented in Google Maps.
Yesterday I attended a wedding party. The invitation contained a QR code showing the location, which could be traced in Google Maps. Due to a traffic jam, it recommended taking an alternative route. I didn't expect it to take us across a flooded road.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 06/06/2021 00:17:37
Here is another picture from the front.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 06/06/2021 07:13:29
Interpretable deep learning uncovers cellular properties in label-free live cell images that are predictive of highly metastatic melanoma.
Quote
Summary
Deep learning has emerged as the technique of choice for identifying hidden patterns in cell imaging data but is often criticized as “black box.” Here, we employ a generative neural network in combination with supervised machine learning to classify patient-derived melanoma xenografts as “efficient” or “inefficient” metastatic, validate predictions regarding melanoma cell lines with unknown metastatic efficiency in mouse xenografts, and use the network to generate in silico cell images that amplify the critical predictive cell properties. These exaggerated images unveiled pseudopodial extensions and increased light scattering as hallmark properties of metastatic cells. We validated this interpretation using live cells spontaneously transitioning between states indicative of low and high metastatic efficiency. This study illustrates how the application of artificial intelligence can support the identification of cellular properties that are predictive of complex phenotypes and integrated cell functions but are too subtle to be identified in the raw imagery by a human expert. A record of this paper’s transparent peer review process is included in the supplemental information.
https://www.sciencedirect.com/science/article/pii/S2405471221001587
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 06/06/2021 07:16:02
https://scitechdaily.com/whos-to-die-and-whos-to-live-mechanical-cue-is-at-the-origin-of-cell-death-decision/

Quote
In past studies, researchers have found that C. elegans gonads generate more germ cells than needed and that only half of them grow to become oocytes, while the rest shrinks and die by physiological apoptosis, a programmed cell death that occurs in multicellular organisms. Now, scientists from the Biotechnology Center of the TU Dresden (BIOTEC), the Max Planck Institute of Molecular Cell Biology and Genetics (MPI-CBG), the Cluster of Excellence Physics of Life (PoL) at the TU Dresden, the Max Planck Institute for the Physics of Complex Systems (MPI-PKS), the Flatiron Institute, NY, and the University of California, Berkeley, found evidence to answer the question of what triggers this cell fate decision between life and death in the germline.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 08/06/2021 12:24:47
At this point it should be clear that any new information  must be related to preexisting common knowledge for it to be meaningful.
Here is another example. Someone gives us a message: 11001010.
There are many ways to interpret this. It could be a decimal number, or a number in another base such as binary or hexadecimal. Even in binary, we can treat it as signed or unsigned. Some of the bits could be a start bit, stop bit, or parity bit.
It could be treated as binary-coded decimal.
It could also be Morse code.
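A few of those interpretations, made concrete, show how wildly the meaning diverges without shared context:

```python
# The same eight symbols "11001010" under different interpretations.
msg = "11001010"

print(int(msg, 2))          # as unsigned binary: 202
print(int(msg, 10))         # as a decimal numeral: 11001010
print(int(msg, 16))         # as hexadecimal: 285216784
print(int(msg, 2) - 256)    # as 8-bit two's complement (signed): -54
```

The string itself carries no preference among these readings; the interpretation rule is the preexisting common knowledge that makes the message meaningful.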
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 08/06/2021 14:21:42
Due to a traffic jam, it recommended taking an alternative route.
A common way to reduce traffic jams is to apply an odd-even rule: on odd dates, only vehicles with odd plate numbers are allowed to pass, and vice versa. Assuming that plate numbers are generally assigned consecutively, the least significant bit suddenly becomes the most important bit.
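In code, the rule is literally a test of that one bit:

```python
# Odd-even rule: one bit (the least significant bit of the plate
# number) decides whether a car may drive on a given date.
def allowed_today(plate_number: int, day_of_month: int) -> bool:
    # allowed when the plate's parity matches the date's parity
    return plate_number % 2 == day_of_month % 2

print(allowed_today(1234, 7))   # False: even plate on an odd date
print(allowed_today(1233, 7))   # True: odd plate on an odd date
```

Every other bit of the plate number is irrelevant to the decision, which is the sense in which context can promote the least significant bit to the most important one.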
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 09/06/2021 17:10:28
In information theory, one bit of information reduces the uncertainty by half. To eliminate uncertainty entirely, we would need infinitely many bits of information.
The number of bits specifies the quantity of information. Its conformity with objective reality as the ground truth specifies the quality of the information. These concepts are similar to precision and accuracy, respectively.
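The quantity side can be sketched in a couple of lines, assuming equally likely outcomes:

```python
import math

# Each bit of information halves the number of remaining possibilities,
# so distinguishing among N equally likely outcomes needs log2(N) bits.
def bits_needed(n_outcomes: int) -> float:
    return math.log2(n_outcomes)

print(bits_needed(2))    # 1.0 bit resolves a yes/no question
print(bits_needed(8))    # 3.0 bits pick one of 8 outcomes
print(bits_needed(256))  # 8.0 bits, i.e. one byte
```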
Previously, I created a thread specifically discussing accuracy and precision from a practical perspective. I tried to quantify the data quality and quantity to be used in a database system that virtualizes plant operations to make them more manageable. I wanted to use the most general forms possible so they could be used flexibly for a wide range of applications. Perhaps my approach was considered unconventional enough that it had to be put in the New Theories section.
In measurement problems, our results are compared to a unit of measurement and expressed as a number. The value may be accompanied by a tolerance or quantification of uncertainty, due to the measurement method or some unpredictable external factors. We may be so familiar with the concept of numbers, especially decimal-based ones, from an early age that we often take them for granted.
Title: Re: How close are we from building a virtual universe?
Post by: Zer0 on 09/06/2021 21:17:38

Hello Yusuf!
🙏

I am quite interested & enthusiastic about this particular Subject.
👍
But surely Not as much as You are.

Just wanted to say, this OP is quite a Good Read for anyone who's interested in the Topic.
👌


P.S. - Rather than googling for similar articles, I'd just visit in here n read it back to back.
👍
You ' Quote ' information, also provide Official Links for further details & post Images too.
😇
Very Nice & Good Work!
✌️
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 10/06/2021 00:19:11
Hi Zer0. Thank you for your kind words; I really appreciate it. It gives me positive feedback that I am going in the right direction.
I also appreciate negative feedback letting me know if I make mistakes or misunderstand some concepts. It can help me avoid further mistakes.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 10/06/2021 11:09:03
Autonomous flying vehicles in 'smart cities' - NASA working on infrastructure


Quote
Data and Reasoning Fabric (DRF) could one day "assemble and provide useful information to autonomous vehicles in real time." The information system is being developed by NASA.

Credit: NASA
Here is the latest development of a shared virtual universe among autonomous vehicles. It's a step closer to the unified virtual universe that is the idea behind this thread, although its usage is still limited to autonomous vehicles only. The next step would be integration of this system with other virtualization systems already established, such as governments and corporations.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 10/06/2021 22:40:39
https://venturebeat.com/2021/06/09/deepmind-says-reinforcement-learning-is-enough-to-reach-general-ai/

Quote
In their decades-long chase to create artificial intelligence, computer scientists have designed and developed all kinds of complicated mechanisms and technologies to replicate vision, language, reasoning, motor skills, and other abilities associated with intelligent life. While these efforts have resulted in AI systems that can efficiently solve specific problems in limited environments, they fall short of developing the kind of general intelligence seen in humans and animals.

In a new paper submitted to the peer-reviewed Artificial Intelligence journal, scientists at U.K.-based AI lab DeepMind argue that intelligence and its associated abilities will emerge not from formulating and solving complicated problems but by sticking to a simple but powerful principle: reward maximization.

Titled “Reward is Enough,” the paper, which is still in pre-proof as of this writing, draws inspiration from studying the evolution of natural intelligence as well as drawing lessons from recent achievements in artificial intelligence. The authors suggest that reward maximization and trial-and-error experience are enough to develop behavior that exhibits the kind of abilities associated with intelligence. And from this, they conclude that reinforcement learning, a branch of AI that is based on reward maximization, can lead to the development of artificial general intelligence.

...

In the race to develop AI, besides hardware capacity, the results depend on the choice of reward function. It's like choosing instrumental goals which are aligned with the terminal goals. The natural long-term reward is survival. Nature also provides short-term reward functions through pleasure and pain.
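The reward-maximization principle can be illustrated with a toy sketch (this is not DeepMind's setup, just a standard epsilon-greedy bandit with made-up reward probabilities):

```python
import random

# Toy illustration of learning purely from reward maximization:
# an epsilon-greedy agent facing two actions with unknown payoffs.
random.seed(0)
true_rewards = [0.3, 0.7]   # hidden expected reward of each action
estimates = [0.0, 0.0]      # the agent's learned value estimates
counts = [0, 0]

for step in range(2000):
    if random.random() < 0.1:                  # explore occasionally
        action = random.randrange(2)
    else:                                      # otherwise exploit best estimate
        action = estimates.index(max(estimates))
    reward = 1.0 if random.random() < true_rewards[action] else 0.0
    counts[action] += 1
    # incremental average update of the chosen action's value estimate
    estimates[action] += (reward - estimates[action]) / counts[action]

# Trial and error alone steers behavior toward the better action.
print(counts[1] > counts[0])  # True: the higher-reward action dominates
```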
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 12/06/2021 08:08:45
"Why Is Quantum Computing So Hard to Explain | Quanta Magazine" https://www.quantamagazine.org/why-is-quantum-computing-so-hard-to-explain-20210608/
Quote
Quantum computers, you might have heard, are magical uber-machines that will soon cure cancer and global warming by trying all possible answers in different parallel universes. For 15 years, on my blog and elsewhere, I’ve railed against this cartoonish vision, trying to explain what I see as the subtler but ironically even more fascinating truth. I approach this as a public service and almost my moral duty as a quantum computing researcher. Alas, the work feels Sisyphean: The cringeworthy hype about quantum computers has only increased over the years, as corporations and governments have invested billions, and as the technology has progressed to programmable 50-qubit devices that (on certain contrived benchmarks) really can give the world’s biggest supercomputers a run for their money. And just as in cryptocurrency, machine learning and other trendy fields, with money have come hucksters.

In reflective moments, though, I get it. The reality is that even if you removed all the bad incentives and the greed, quantum computing would still be hard to explain briefly and honestly without math. As the quantum computing pioneer Richard Feynman once said about the quantum electrodynamics work that won him the Nobel Prize, if it were possible to describe it in a few sentences, it wouldn’t have been worth a Nobel Prize.

Not that that’s stopped people from trying. Ever since Peter Shor discovered in 1994 that a quantum computer could break most of the encryption that protects transactions on the internet, excitement about the technology has been driven by more than just intellectual curiosity. Indeed, developments in the field typically get covered as business or technology stories rather than as science ones.
Quote
Once someone understands these concepts, I’d say they’re ready to start reading — or possibly even writing — an article on the latest claimed advance in quantum computing. They’ll know which questions to ask in the constant struggle to distinguish reality from hype. Understanding this stuff really is possible — after all, it isn’t rocket science; it’s just quantum computing!
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 14/06/2021 07:17:19
There's Plenty Moore Room: IBM's New 2nm CPU
Quote
People talk about the death of semiconductors being able to shrink. IBM is laughing in your face - there's plenty of room, and plenty of density, and they've developed a proof of concept to showcase where the technology can go. Here's a look at IBM's new 2nm silicon.

Intro

0:00 The Future in 2024
0:26 What Nanometers Really Mean
3:05 Transistor Density
4:02 IBM on 2nm
5:38 Comparing against current nodes
7:00 What's on the chip
7:40 Gate-All-Around Nanosheets
8:45 Albany, NY
9:16 Performance of 2nm
9:42 Coming to Market and Pathfinding
11:06 EUV and Future of EUV (Jim Keller)
14:12 Minimum Specification: Bite a Wafer
14:39 Cat Tax
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 14/06/2021 16:18:10
We may be so familiar with the concept of numbers, especially decimal-based ones, from an early age that we often take them for granted.



The smallest base for positional numerals is 2. That's why most computers are binary systems. For human-machine interfaces such as programming languages, some extensions of binary code are often useful, such as octal, hexadecimal, or BCD.
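These notations are just different groupings of the same bits, as a short sketch shows:

```python
# One value, several human-machine-interface notations.
n = 202
print(bin(n))   # 0b11001010  (binary)
print(oct(n))   # 0o312       (octal: groups of 3 bits)
print(hex(n))   # 0xca        (hexadecimal: groups of 4 bits)

# Binary-coded decimal stores each decimal digit in its own 4 bits.
bcd = "".join(format(int(d), "04b") for d in str(n))
print(bcd)      # 001000000010 -> digits 2, 0, 2
```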
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 14/06/2021 22:06:06
Quote
If you use such social media websites as Facebook and Twitter, you may have come across posts flagged with warnings about misinformation. So far, most misinformation – flagged and unflagged – has been aimed at the general public. Imagine the possibility of misinformation – information that is false or misleading – in scientific and technical fields like cybersecurity, public safety and medicine.

"Cybersecurity experts face a new challenge: AI capable of tricking them" https://www.inputmag.com/culture/cybersecurity-experts-face-a-new-challenge-ai-capable-of-tricking-them/amp

Quote
General misinformation often aims to tarnish the reputation of companies or public figures. Misinformation within communities of expertise has the potential for scary outcomes such as delivering incorrect medical advice to doctors and patients. This could put lives at risk.

To test this threat, we studied the impacts of spreading misinformation in the cybersecurity and medical communities. We used artificial intelligence models dubbed transformers to generate false cybersecurity news and COVID-19 medical studies and presented the cybersecurity misinformation to cybersecurity experts for testing. We found that transformer-generated misinformation was able to fool cybersecurity experts.
This result emphasizes the urgency of reliable sources of information that accurately and precisely represent objective reality as the ground truth.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 14/06/2021 23:26:49
This result emphasizes the urgency of reliable sources of information that accurately and precisely represent objective reality as the ground truth.
This brings us back to the question about accuracy and precision of our information sources. Here are definitions of precision by the dictionary.
Quote
the quality, condition, or fact of being exact and accurate.
"the deal was planned and executed with military precision"

TECHNICAL
refinement in a measurement, calculation, or specification, especially as represented by the number of digits given.
"a precision of six decimal figures"
And here are the definitions of accuracy.
Quote
the quality or state of being correct or precise.
"we have confidence in the accuracy of the statistics"

TECHNICAL
the degree to which the result of a measurement, calculation, or specification conforms to the correct value or a standard.
"the accuracy of radiocarbon dating"

We can see here that in the general definition, the meanings of precision and accuracy are mixed, while in the technical definition they are restricted to numeric notation, especially decimal-based numbers. We can quickly realize that those definitions can't cover all usages of the words.

In technical usage, non-numeric information can't be described. For example: Alice is going to Japan. It would be more precise to say that she's going to Tokyo, and even more precise if the district or even the complete address were given. But if it turns out that she's going to Kyoto instead of Tokyo, then the previous information about the destination city is not accurate, although it is still more precise than just the destination country.

Expressing the same numeric value in a different number base would give us different precision.

In general usage, it should be possible to express information with high precision independently of accuracy. There is accurate but imprecise information; on the other hand, there is also precise but inaccurate information.
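The Japan/Tokyo example above can be modeled with sets (a sketch; the city list and the bit measure of precision are my own illustration, not a standard definition):

```python
import math

# Model a statement as the set of possibilities it leaves open.
# Accuracy: the true value is inside the set.
# Precision: how much the statement narrows the universe (in bits).
universe = {"Tokyo", "Kyoto", "Osaka", "Sapporo",
            "Paris", "London", "Cairo", "Lima"}
truth = "Kyoto"

def accurate(statement: set) -> bool:
    return truth in statement

def precision_bits(statement: set) -> float:
    return math.log2(len(universe) / len(statement))

going_to_japan = {"Tokyo", "Kyoto", "Osaka", "Sapporo"}  # accurate, imprecise
going_to_tokyo = {"Tokyo"}                               # precise, inaccurate

print(accurate(going_to_japan), precision_bits(going_to_japan))  # True 1.0
print(accurate(going_to_tokyo), precision_bits(going_to_tokyo))  # False 3.0
```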

This video tries to distinguish them.

Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 15/06/2021 08:59:01
Voluntarist Epistemology

This video also contains an example of balancing between accuracy and precision, especially from 26:40 to 31:26

Quote
According to Bas van Fraassen's voluntarist epistemology, the only constraint on rational belief is consistency. Beyond this, our beliefs must be guided not by rules of reason, but by the passions: emotions, values, and intuitions. This video examines the grounds for voluntarism in the failure of traditional epistemology, and in the need for an epistemology that can properly accommodate conceptual revolutions. Then I turn to the objections to voluntarism.

Outline of voluntarism:
0:00 - Introduction
4:02 - Why consistency?
8:13 - Failure of traditional epistemology
18:37 - Voluntarism against skepticism
31:26 - Conceptual revolution and objectifying epistemology
Objections to voluntarism:
48:38 - Arbitrariness
53:00 - Too permissive?
1:01:34 - Too conservative?
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 16/06/2021 12:46:36
Expressing the same numeric value in a different number base would give us different precision.
Since binary is the smallest number base, it would be the preferred way to express precision. So, the precision of a piece of information depends on how many bits its content contains.
In some programming languages, we can define a floating-point variable using a single- or double-precision data type. So my assertion that the precision of information represents its data quantity is not an entirely new concept, although many forum members here didn't seem to agree.

https://en.wikipedia.org/wiki/Single-precision_floating-point_format
(https://upload.wikimedia.org/wikipedia/commons/thumb/d/d2/Float_example.svg/590px-Float_example.svg.png)
(https://wikimedia.org/api/rest_v1/media/math/render/svg/5858d28deea4237a7c1320f7e649fb104aecb0e5)
(https://wikimedia.org/api/rest_v1/media/math/render/svg/908c155d6002beadf2df5a7c05e954ec2373ca16)

https://en.wikipedia.org/wiki/Double-precision_floating-point_format
(https://upload.wikimedia.org/wikipedia/commons/thumb/a/a9/IEEE_754_Double_Floating_Point_Format.svg/618px-IEEE_754_Double_Floating_Point_Format.svg.png)
(https://wikimedia.org/api/rest_v1/media/math/render/svg/61345d47f069d645947b9c0ab676c75551f1b188)
(https://wikimedia.org/api/rest_v1/media/math/render/svg/5f677b27f52fcd521355049a560d53b5c01800e1)
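The difference between the two formats can be seen directly by storing the same value in each (a small sketch using Python's standard struct module):

```python
import math
import struct

# The same value of pi stored in single (32-bit) and double (64-bit) precision.
f32 = struct.unpack("<f", struct.pack("<f", math.pi))[0]
f64 = struct.unpack("<d", struct.pack("<d", math.pi))[0]

print(f32)  # 3.1415927410125732 -> only ~7 decimal digits are meaningful
print(f64)  # 3.141592653589793  -> ~16 meaningful decimal digits

# More fraction bits mean a smaller representation error.
print(abs(f32 - math.pi))  # about 8.7e-8
print(abs(f64 - math.pi))  # 0.0 (pi is itself a double here)
```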
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 17/06/2021 05:17:42
The Longest DNA in the Animal Kingdom Found - Not What I Expected

DNA is the largest information-storage medium provided by nature. Studying how it works is highly important.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 17/06/2021 09:43:31
Expressing the same numeric value in a different number base would give us different precision.
Since binary is the smallest number base, it would be the preferred way to express precision. So, the precision of a piece of information depends on how many bits its content contains.
In some programming languages, we can define a floating-point variable using a single- or double-precision data type. So my assertion that the precision of information represents its data quantity is not an entirely new concept, although many forum members here didn't seem to agree.

https://en.wikipedia.org/wiki/Single-precision_floating-point_format
(https://upload.wikimedia.org/wikipedia/commons/thumb/d/d2/Float_example.svg/590px-Float_example.svg.png)
(https://wikimedia.org/api/rest_v1/media/math/render/svg/5858d28deea4237a7c1320f7e649fb104aecb0e5)
(https://wikimedia.org/api/rest_v1/media/math/render/svg/908c155d6002beadf2df5a7c05e954ec2373ca16)

https://en.wikipedia.org/wiki/Double-precision_floating-point_format
(https://upload.wikimedia.org/wikipedia/commons/thumb/a/a9/IEEE_754_Double_Floating_Point_Format.svg/618px-IEEE_754_Double_Floating_Point_Format.svg.png)
(https://wikimedia.org/api/rest_v1/media/math/render/svg/61345d47f069d645947b9c0ab676c75551f1b188)
(https://wikimedia.org/api/rest_v1/media/math/render/svg/5f677b27f52fcd521355049a560d53b5c01800e1)
It's clear that bits in different positions of the floating-point representation have different significance in determining the numeric value of the data. The significance of a bit can be defined as the difference in the data value caused by flipping it between 0 and 1. In general, the bits are sorted from highest to lowest significance (from left to right in writing), except for the sign bit, whose significance depends on the value determined by the other bits: if that value is small, the sign bit has low significance; if it is large, the sign bit has high significance.
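This definition of significance can be tested directly: flip one bit of a float32 and measure how much the decoded value changes (a sketch; the helper name is my own):

```python
import struct

# Significance of each bit in a float32: flip one bit and measure how much
# the decoded value changes.
def flip_bit(value: float, position: int) -> float:
    (bits,) = struct.unpack("<I", struct.pack("<f", value))
    bits ^= 1 << position        # positions 0..22 fraction, 23..30 exponent, 31 sign
    return struct.unpack("<f", struct.pack("<I", bits))[0]

x = 3.14
print(abs(flip_bit(x, 0) - x))   # tiny: least significant fraction bit
print(abs(flip_bit(x, 22) - x))  # larger: most significant fraction bit
print(abs(flip_bit(x, 31) - x))  # about 6.28: sign flip, significance = 2*|x|
```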
 
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 17/06/2021 21:55:45
In real life, we often get or use numerical information with even lower precision than what's expressed by a single-precision floating point. In many applications, it's enough to write π as 3.14.
In floating-point representation, three decimal digits can be captured by about 10 bits of the fraction part. The remaining bits are rounded to 0; we don't care about their actual values.
By defining precision as the quantity of information, we can use it for numeric as well as non-numeric data.
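The 10-bit claim can be checked by rounding π to a 10-bit fraction in the floating-point style (a quick sketch):

```python
import math

# Rounding pi to a 10-bit fraction, floating-point style:
# pi = (1 + f/2**10) * 2**1, with f an integer in [0, 1024).
f = round((math.pi / 2 - 1) * 2**10)   # f = 584
approx = (1 + f / 2**10) * 2

print(approx)                          # 3.140625, which rounds to 3.14
print(abs(approx - math.pi) < 0.005)   # True: three decimal digits of pi
```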
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 17/06/2021 23:25:53
As I mentioned earlier, the actual/practical precision of a piece of information also depends on the assumptions attached to it. For example, if I say that your car key is in Waldo's pocket, you would be able to find it quickly, as long as you can find Waldo first. In this case, my explicit statement only contains a few bits of information, but it becomes highly precise when combined with correct assumptions not expressed in the statement, like which Waldo I'm talking about.
Another example: if I say that the value of x equals 2π, modern people would recognize it with very high precision, because the symbols carry almost unambiguous meaning in the modern world. It would have been different in ancient times.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 18/06/2021 05:37:56
The next problem is the accuracy of the information. Let's start with a non-numeric case, such as finding Waldo in a picture.
(https://fiverr-res.cloudinary.com/images/q_auto,f_auto/gigs/140639081/original/7e7a04151cd0f368c6d56e4fd7abf5d02897b4e4/find-wally-or-waldo-for-you.jpg)

Saying that Waldo is in the picture is accurate, but not precise.
Saying that Waldo is at the bottom right corner of the picture is more precise, but not accurate.
Saying that Waldo is around the center of the picture, not far away from the red tent is more accurate and precise.

The first and third statements are accurate because they include the true value of Waldo's position.
The second statement becomes inaccurate because it excludes the true value of Waldo's position.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 18/06/2021 05:43:49
The Trillion-Transistor Chip That Just Left a Supercomputer in the Dust
Quote
The Cerebras Wafer-Scale Engine is 8.5 inches wide and contains 1.2 trillion transistors. The next biggest chip, the NVIDIA A100 GPU, measures about an inch across and has only 54 billion transistors. The WSE has made its way into a handful of supercomputing labs, including the National Energy Technology Laboratory. Researchers pitted the chip against a supercomputer in a fluid dynamics simulation and found it to be faster than the supercomputer. The team said that the chip completed a combustion simulation in a power plant approximately 200 times faster.

Joule is the 81st fastest supercomputer in the world, with a price tag of $1.2 billion. The WSE is bigger than the average supercomputer, and it's all about design. The company uses couriers to send and collect documents from other branches and archives across the city. It's like an old-fashioned company doing all its business on paper, but on silicon wafers, and the process takes place within a silicon wafer, not a sheet of paper. The CS-1 is the world's largest supercomputer.

Cerebras has developed a chip that can handle problems small enough to fit on a wafer. The megachip is far more efficient than a traditional supercomputer that needs a ton of traditional chips to be networked. The next-generation chip will have 2.6 trillion transistors, 850,000 cores, and more than double the memory. It still remains to be seen whether wafer-scale computing really does take off, but Cerebras is the first to seriously pursue it.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 18/06/2021 06:16:37
The next problem is the accuracy of the information. Let's start with a non-numeric case, such as finding Waldo in a picture.
Unlike precision, which can be determined without knowing the true value of the information, accuracy cannot.
Saying that π is more than 0 is accurate because it doesn't contain false information. But saying that it's less than 3.141 is not accurate, because it does contain false information.
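These interval statements can be checked mechanically, once the true value is known (a sketch; the function name is my own):

```python
import math

# An interval statement about pi is accurate iff it contains the true value.
def accurate(low: float, high: float) -> bool:
    return low <= math.pi <= high

print(accurate(0, float("inf")))  # True:  accurate but very imprecise
print(accurate(3.140, 3.141))     # False: precise but excludes pi (3.14159...)
print(accurate(3.141, 3.142))     # True:  precise and accurate
```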
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 19/06/2021 10:11:48
Precision of a piece of information should be considered as the amount of uncertainty it can remove. The number of bits alone is not adequate.
Here is an example.
  • 2.99999999... ≤ π ≤ 3.9999999...
  • 3 ≤ π ≤ 4
The many bits in the first statement don't remove any more uncertainty than the fewer bits in the second statement. So we can't say that the first statement has higher precision than the second, although it contains many more bits.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 21/06/2021 11:02:48
The World’s Most Powerful Supercomputer Is Almost Here

Quote
The next generation of computing is on the horizon, and several new machines may just smash all the records...with two nations neck and neck in a race to get there first.

The ENIAC was capable of about 400 FLOPS. FLOPS stands for floating-point operations per second, which basically tells us how many calculations the computer can do per second. This makes measuring FLOPS a way of calculating computing power.

So, the ENIAC was sitting at 400 FLOPS in 1945, and in the ten years it was operational, it may have performed more calculations than all of humanity had up until that point in time—that was the kind of leap digital computing gave us. From that 400 FLOPS we upgraded to 10,000 FLOPS, and then a million, a billion, a trillion, a quadrillion FLOPS. That’s petascale computing, and that’s the level of today’s most powerful supercomputers.

But what’s coming next is exascale computing. That’s a 1 followed by 18 zeroes: 1 quintillion operations per second. Exascale computers will be a thousand times better performing than the petascale machines we have now. Or, to put it another way, if you wanted to do the same number of calculations that an exascale computer can do in ONE second...you’d be doing math for over 31 billion years.
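The "31 billion years" figure checks out if you assume one calculation per second by hand:

```python
# Sanity check: an exascale machine does 10**18 operations per second,
# so at one operation per second you would need 10**18 seconds.
seconds = 10**18
years = seconds / (60 * 60 * 24 * 365.25)
print(years / 1e9)  # about 31.7 billion years
```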
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 22/06/2021 03:16:37
The virtual universe is useless unless it can be translated into actions in objective reality. 3D printing improves the interface between those two universes.


Quote
Three-dimensional printing promises new opportunities for more sustainable and local production. But does 3D printing make everything better? This film shows how innovation can change the world of goods.

Is the way we make things about to become the next revolution? Traditional manufacturing techniques like milling, casting and gluing could soon be replaced by 3D printing -saving enormous amounts of material and energy. Aircraft maker Airbus is already benefiting from the new manufacturing method. Beginning this year, the A350 airliner will fly with printed door locking shafts. Where previously ten parts had to be installed, today that’s down to just one. It saves a lot of manufacturing steps. And 3D printing can imitate nature's efficient construction processes, something barely possible in conventional manufacturing. Another benefit of the new technology is that components can become significantly lighter and more robust, and material can be saved during production. But the Airbus development team is not yet satisfied. The printed cabin partition in the A350 has become 45 percent lighter thanks to the new structure, but it is complex and expensive to manufacture. It takes 900 hours to print just one partition, a problem that print manufacturers have not yet been able to solve. The technology is already being used in Adidas shoes: The sportswear company says it is currently the world’s largest manufacturer of 3D-printed components. The next step is sustainable materials, such as biological synthetic resins that do not use petroleum and can be liquefied again without loss of quality and are therefore completely recyclable. This documentary sheds light on the diverse uses of 3D printing.
Title: Re: How close are we from building a virtual universe?
Post by: Just thinking on 22/06/2021 06:02:32
About as far away as when we first started. As one diode said to the other we have been together for so long and I still don't know you.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 24/06/2021 10:50:05
Precision of a piece of information should be considered as the amount of uncertainty it can remove. The number of bits alone is not adequate.
Here is an example.
  • 2.99999999... ≤ π ≤ 3.9999999...
  • 3 ≤ π ≤ 4
The many bits in the first statement don't remove any more uncertainty than the fewer bits in the second statement. So we can't say that the first statement has higher precision than the second, although it contains many more bits.
It looks like the equals sign implicitly sets two limits at once: the low and high limits of the value. When we say that two values are identical, or exactly the same by definition, we can use the ≡ symbol. But if they are only approximately equal, we use the ≈ symbol, which acknowledges that there are cases where the difference can't be neglected.

The usage of the = symbol then leaves some ambiguity: the values involved in the equation are not necessarily identical, but the difference between them must be negligible in almost all cases.
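This ambiguity shows up directly in floating-point code, where "approximately equal" has to make its tolerance explicit:

```python
import math

# Exact equality vs. approximate equality with an explicit tolerance.
a = 0.1 + 0.2
b = 0.3
print(a == b)                            # False: the difference exists
print(math.isclose(a, b, rel_tol=1e-9))  # True: but it is negligible here
```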
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 26/06/2021 10:14:10
https://towardsdatascience.com/cant-access-gpt-3-here-s-gpt-j-its-open-source-cousin-8af86a638b11
Similar to GPT-3, and everyone can use it.

Quote

ARTIFICIAL INTELLIGENCE
Can’t Access GPT-3? Here’s GPT-J — Its Open-Source Cousin
Similar to GPT-3, and everyone can use it.

The AI world was thrilled when OpenAI released the beta API for GPT-3. It gave developers the chance to play with the amazing system and look for new exciting use cases. Yet, OpenAI decided not to open (pun intended) the API to everyone, but only to a selected group of people through a waitlist. If they were worried about the misuse and harmful outcomes, they’d have done the same as with GPT-2: not releasing it to the public at all.
It’s surprising that a company that claims its mission is “to ensure that artificial general intelligence benefits all of humanity” wouldn’t allow people to thoroughly investigate the system. That’s why we should appreciate the work of people like the team behind EleutherAI, a “collective of researchers working to open source AI research.” Because GPT-3 is so popular, they’ve been trying to replicate the versions of the model for everyone to use, aiming at building a system comparable to GPT-3-175B, the AI king. In this article, I’ll talk about EleutherAI and GPT-J, the open-source cousin of GPT-3. Enjoy!
Quote
GPT-J is 30 times smaller than GPT-3-175B. Despite the large difference, GPT-J produces better code, just because it was slightly more optimized to do the task. This implies that optimization towards improving specific abilities could give rise to systems that are way better than GPT-3. And this isn’t limited to coding: we could create for every task, a system that would top GPT-3 with ease. GPT-3 would become a jack of all trades, whereas the specialized systems would be the true masters.
This hypothesis goes in line with the results OpenAI researchers Irene Solaiman and Christy Dennison got from PALMS. They fine-tuned GPT-3 with a small curated dataset to prevent the system from producing biased outputs and got amazing results. In a way, it was an optimization; they specialized GPT-3 to be unbiased — as understood by ethical institutions in the U.S. It seems that GPT-3 isn’t only very powerful, but that a notable amount of power is still latent within, waiting to be exploited by specialization.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 29/06/2021 13:11:13
GPT-J is 30 times smaller than GPT-3-175B. Despite the large difference, GPT-J produces better code, just because it was slightly more optimized to do the task. This implies that optimization towards improving specific abilities could give rise to systems that are way better than GPT-3.
It looks like the way to general intelligence is to combine several neural networks trained separately for specific tasks. A dedicated network would be needed to determine which part is suitable for solving the problem at hand.
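The idea can be sketched as a hypothetical mixture-of-experts router. In real systems the gating function is itself a learned network; the keyword rule and function names here are only stand-ins:

```python
# Hypothetical sketch: specialist "experts" plus a gate that routes
# each task to the expert suited for it. (Illustrative only; real
# mixture-of-experts models learn the gating function from data.)
def code_expert(task: str) -> str:
    return "generated code for: " + task

def prose_expert(task: str) -> str:
    return "generated prose for: " + task

def gate(task: str):
    # A trivial stand-in for a learned router network.
    keywords = ("function", "bug", "compile")
    return code_expert if any(k in task for k in keywords) else prose_expert

print(gate("fix the bug in sort")("fix the bug in sort"))
print(gate("summarize this article")("summarize this article"))
```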
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 30/06/2021 21:44:21
What he's trying to build is basically similar to a virtual universe. Note that this video was uploaded 7 years ago.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 01/07/2021 11:22:31
https://www.theverge.com/2021/6/29/22555777/github-openai-ai-tool-autocomplete-code

Quote
GitHub and OpenAI have launched a technical preview of a new AI tool called Copilot, which lives inside the Visual Studio Code editor and autocompletes code snippets.

Copilot does more than just parrot back code it’s seen before, according to GitHub. It instead analyzes the code you’ve already written and generates new matching code, including specific functions that were previously called. Examples on the project’s website include automatically writing the code to import tweets, draw a scatterplot, or grab a Goodreads rating.

Quote
GitHub sees this as an evolution of pair programming, where two coders will work on the same project to catch each others’ mistakes and speed up the development process. With Copilot, one of those coders is virtual.

Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 02/07/2021 03:45:23
https://jrodthoughts.medium.com/objects-that-sound-deepminds-research-show-how-to-combine-vision-and-audio-in-a-single-model-c4051ea21495
Quote

Since we are babies, we intuitively develop the ability to correlate the input from different cognitive sensors such as vision, audio and text. While listening to a symphony we immediately visualize an orchestra or when admiring a landscape painting, our brain associates the visual with specific sounds. The relationships between images, sounds and texts are dictated by connections between different sections of the brain responsible from analyzing specific cognitive input. In that sense, you can say that we are hardwired to learn simultaneously from multiple cognitive signals. Despite the advancements in different deep learning areas such as image, language and sound analysis, most neural networks remain specialized on a single input data type. A few years ago, researchers from Alphabet’s subsidiary DeepMind published a research paper proposing a method that can simultaneously analyze audio and visual inputs and learn the relationships between objects and sounds in a common environment.
(https://miro.medium.com/max/2100/1*hFzT9BNIL6FopN9tkch29w.png)
Title: Re: How close are we from building a virtual universe?
Post by: Just thinking on 04/07/2021 10:25:20
Does PlayStation count?
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 04/07/2021 13:00:11
Does PlayStation count?
It does help improve the technology and accumulate financial resources for that, although their main purpose may not be directly correlated.
Title: Re: How close are we from building a virtual universe?
Post by: Just thinking on 04/07/2021 13:03:48
Although their main purpose may not be directly correlated.
What about Xbox?
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 04/07/2021 13:12:26
Although their main purpose may not be directly correlated.
What about Xbox?
Same story.
Title: Re: How close are we from building a virtual universe?
Post by: Just thinking on 04/07/2021 14:09:06
Although their main purpose may not be directly correlated.
I just thought of something. What if we are already in a virtual universe? Then we will have to try and build a real universe.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 04/07/2021 15:12:21
Although their main purpose may not be directly correlated.
I just thought of something. What if we are already in a virtual universe? Then we will have to try and build a real universe.
As long as we have no reliable way to prove otherwise, it's better for us to assume that we're living in reality. Descartes' Cogito tells us that our own consciousness is the only self-evident proof of our existence.
Title: Re: How close are we from building a virtual universe?
Post by: Just thinking on 04/07/2021 15:42:30
As long as we have no reliable way to prove otherwise, it's better for us to assume that we're living in reality. Descartes' Cogito tells us that our own consciousness is the only self-evident proof of our existence.
I think it is safe to assume that our consciousness is merely a circuit board plugged into the motherboard, programmed to make some decisions inside the virtual reality life that we only virtually think we have. I could be wrong, but if I am, then that would be a fault in the electronics of the virtual reality machine, e.g. when I get a headache, this can be due to computer overload.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 05/07/2021 05:40:06
I think it is safe to assume that our consciousness is merely a circuit board plugged into the motherboard, programmed to make some decisions inside the virtual reality life that we only virtually think we have. I could be wrong, but if I am, then that would be a fault in the electronics of the virtual reality machine, e.g. when I get a headache, this can be due to computer overload.
I don't think that you are safe thinking that way. Imagine you are a bit drunk on your bed, staring out of your window. You see an asteroid flying right in your direction. You're not sure if it's real or you're just dreaming, or you are just living in a simulation. There's apparently not enough time to determine which one is true.

The best bet is to assume that it's real and get out as fast as you can. Even if you're wrong, the result would be less detrimental than assuming otherwise.
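The wager in that example can be made concrete with a toy expected-cost calculation. Everything here is a made-up illustration (the state names, the cost numbers, the 5% probability), just to show why asymmetric outcomes favour acting as if the threat is real:

```python
# Toy decision table: first key is what we assume, second is the actual state.
# Costs are invented for illustration: fleeing wastes a little effort either
# way, while ignoring a real asteroid is catastrophic.
COST = {
    ("assume_real", "real"): 1,        # fled, asteroid was real: survived
    ("assume_real", "simulated"): 1,   # fled for nothing: minor effort lost
    ("assume_sim", "real"): 1000,      # stayed, asteroid was real: disaster
    ("assume_sim", "simulated"): 0,    # stayed, nothing happened
}

def expected_cost(assumption, p_real):
    """Expected cost of acting on `assumption` when the threat is real
    with probability p_real."""
    return (p_real * COST[(assumption, "real")]
            + (1 - p_real) * COST[(assumption, "simulated")])

# Even a small chance that the asteroid is real favours treating it as real:
p = 0.05
print(expected_cost("assume_real", p))  # ~1: always cheap
print(expected_cost("assume_sim", p))   # ~50: dominated by the disaster term
```

With these (arbitrary) numbers, assuming reality costs about 1 unit regardless, while assuming a simulation costs about 50 in expectation, which is the point of the post: when the downsides are this lopsided, "assume it's real" wins even at low probability.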
Title: Re: How close are we from building a virtual universe?
Post by: Just thinking on 05/07/2021 06:08:08
I don't think that you are safe thinking that way.
I think that if an asteroid were to collide with the earth, that would be proof of a very evil computer programmer in our virtual universe. This would be like the devil in a real universe.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 05/07/2021 07:46:29
I don't think that you are safe thinking that way.
I think that if an asteroid were to collide with the earth, that would be proof of a very evil computer programmer in our virtual universe. This would be like the devil in a real universe.
In my previous example I was thinking about a small asteroid capable of destroying a house.
A virtual universe, or even a nested virtual universe, must eventually be built upon a real universe. It's impossible for a virtual universe to exist when no real universe is there.
Whatever is done in a virtual universe can't be said to be evil or good until it has some effect in the real universe.
Title: Re: How close are we from building a virtual universe?
Post by: Just thinking on 05/07/2021 08:08:21
Whatever is done in a virtual universe can't be said to be evil or good until it has some effect in the real universe.
So what you're saying is that an incoming asteroid can leave the virtual universe and collide with the real universe, or at least a house in the real universe. This is the "some effect" that you say may happen. This would be a very dangerous computer simulator; we had better warn the pilots that are using flight simulators, as it could turn out to be a real crash as they train in their simulators.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 05/07/2021 11:15:44
Whatever is done in a virtual universe can't be said to be evil or good until it has some effect in the real universe.
So what you're saying is that an incoming asteroid can leave the virtual universe and collide with the real universe, or at least a house in the real universe. This is the "some effect" that you say may happen. This would be a very dangerous computer simulator; we had better warn the pilots that are using flight simulators, as it could turn out to be a real crash as they train in their simulators.
If your flight simulator contains bugs that make trainee pilots react differently from how they should in real life, then those bugs in the virtual universe are indeed dangerous.
Title: Re: How close are we from building a virtual universe?
Post by: Just thinking on 05/07/2021 11:32:26
If your flight simulator contains bugs that make trainee pilots react differently from how they should in real life, then those bugs in the virtual universe are indeed dangerous.
I see what you're saying, but the flight simulator could be dangerous, as the captain could spill hot coffee on his lap or even worse. He will learn not to do that in the real universe.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 05/07/2021 12:22:48
You can kill thousands of people in GTA or Total War without being evil in real life.
Title: Re: How close are we from building a virtual universe?
Post by: Just thinking on 05/07/2021 12:37:34
You can kill thousands of people in GTA or Total War without being evil in real life.
I don't like violent games; they incite violence in the real universe. But I do get your point, thank you.
Title: Re: How close are we from building a virtual universe?
Post by: Just thinking on 05/07/2021 12:48:24
The level of detail can vary, depending on the significance of the object. In Google Earth, big cities might be zoomed to less than 1 meter per pixel, while deserts or oceans have much coarser detail.
We need better detail in the virtual world, let's say 20 megapixels for each and every atom.
Title: Re: How close are we from building a virtual universe?
Post by: Just thinking on 05/07/2021 13:30:05
Is it possible to build a virtual universe?
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 05/07/2021 14:56:36
The level of detail can vary, depending on the significance of the object. In Google Earth, big cities might be zoomed to less than 1 meter per pixel, while deserts or oceans have much coarser detail.
We need better detail in the virtual world, let's say 20 megapixels for each and every atom.
Any scalable virtual universe must be built as vectors or tensors instead of pixels, especially when it's multidimensional.
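The advantage of vectors over pixels can be shown in a few lines: a coordinate-based scene scales by pure arithmetic, with no fixed resolution to exhaust, whereas a pixel grid would blur or run out of detail. A minimal sketch (the function name is mine):

```python
# A vector (coordinate-based) scene scales losslessly: applying a scale
# factor just multiplies coordinates; no raster resolution is ever exceeded.
def scale_points(points, factor):
    """Scale a list of n-dimensional points about the origin."""
    return [tuple(factor * c for c in p) for p in points]

# A triangle stored as three 2-D vertices, zoomed in 1000x with no loss:
triangle = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
zoomed = scale_points(triangle, 1000.0)
print(zoomed)  # [(0.0, 0.0), (1000.0, 0.0), (0.0, 1000.0)]
```

The same function works unchanged for 3-D or higher-dimensional points, which is the "especially when it's multidimensional" part: a pixel or voxel grid grows exponentially with each added dimension, while coordinates just gain one component.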
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 05/07/2021 15:05:06
Is it possible to build a virtual universe?
We know there are some efforts already in progress towards that direction. But they are all still partial and mostly independent from one another.
Title: Re: How close are we from building a virtual universe?
Post by: Just thinking on 05/07/2021 15:29:03
We know there are some efforts already in progress towards that direction. But they are all still partial and mostly independent from one another.
I hope it's not too expensive to jump in once they get it up and running. They used to charge 20 cents for a go on Space Invaders at the arcade centre.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 06/07/2021 19:47:02
We know there are some efforts already in progress towards that direction. But they are all still partial and mostly independent from one another.
I hope it's not too expensive to jump in once they get it up and running. They used to charge 20 cents for a go on Space Invaders at the arcade centre.
What I meant was not a world simulation like in the movie The Matrix. These are more mundane and narrowly purposed, such as Google Earth, climate simulation, AlphaFold, Tesla's Dojo and vertical integration, Microsoft Flight Simulator, SAP ERP, the Chinese government's surveillance system, Estonia's digital governance, financial/banking systems, cryptocurrency, virtual machines to manage workstations, etc. They try to represent some aspects of objective reality for easier access: to extract, aggregate and manage information, and to help with the decision-making process.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 06/07/2021 19:49:47
"Exclusive Q&A: Neuralink’s Quest to Beat the Speed of Type - IEEE Spectrum" https://spectrum.ieee.org/tech-talk/biomedical/bionics/exclusive-neuralinks-goal-of-bestinworld-bmi
Quote
Elon Musk’s brain tech company, Neuralink, is subject to rampant speculation and misunderstanding. Just start a Google search with the phrase “can Neuralink...” and you’ll see the questions that are commonly asked, which include “can Neuralink cure depression?” and “can Neuralink control you?” Musk hasn’t helped ground the company’s reputation in reality with his public statements, including his claim that the Neuralink device will one day enable “AI symbiosis” in which human brains will merge with artificial intelligence.

It’s all somewhat absurd, because the Neuralink brain implant is still an experimental device that hasn’t yet gotten approval for even the most basic clinical safety trial.

But behind the showmanship and hyperbole, the fact remains that Neuralink is staffed by serious scientists and engineers doing interesting research. The fully implantable brain-machine interface (BMI) they’ve been developing is advancing the field with its super-thin neural “threads” that can snake through brain tissue to pick up signals and its custom chips and electronics that can process data from more than 1000 electrodes.
Quote
IEEE Spectrum: Elon Musk often talks about the far-future possibilities of Neuralink; a future in which everyday people could get voluntary brain surgery and have Links implanted to augment their capabilities. But whom is the product for in the near term?

Joseph O’Doherty: We’re working on a communication prosthesis that would give back keyboard and mouse control to individuals with paralysis. We’re pushing towards an able-bodied typing rate, which is obviously a tall order. But that’s the goal.

We have a very capable device and we’re aware of the various algorithmic techniques that have been used by others. So we can apply best practices engineering to tighten up all the aspects. What it takes to make the BMI is a good recording device, but also real attention to detail in the decoder, because it’s a closed-loop system. You need to have attention to that closed-loop aspect of it for it to be really high performance.

We have an internal goal of trying to beat the world record in terms of information rate from the BMI. We’re extremely close to exceeding what, as far as we know, is the best performance. And then there’s an open question: How much further beyond that can we go?

My team and I are trying to meet that goal and beat the world record. We’ll either nail down what we can, or, if we can’t, figure out why not, and how to make the device better.
Title: Re: How close are we from building a virtual universe?
Post by: Just thinking on 07/07/2021 00:10:09
Thank you my friend that is very interesting information I think medical science and I.T is making great progress we will have to see what the future holds.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 07/07/2021 05:20:27
https://venturebeat.com/2021/07/05/the-future-of-deep-learning-according-to-its-pioneers/
Quote
In their paper, Yoshua Bengio, Geoffrey Hinton, and Yann LeCun, recipients of the 2018 Turing Award, explain the current challenges of deep learning and how it differs from learning in humans and animals. They also explore recent advances in the field that might provide blueprints for the future directions for research in deep learning.

Titled “Deep Learning for AI,” the paper envisions a future in which deep learning models can learn with little or no help from humans, are flexible to changes in their environment, and can solve a wide range of reflexive and cognitive problems.

Quote
In their paper, Bengio, Hinton, and LeCun acknowledge these shortcomings. “Supervised learning, while successful in a wide variety of tasks, typically requires a large amount of human-labeled data. Similarly, when reinforcement learning is based only on rewards, it requires a very large number of interactions,” they write.
Title: Re: How close are we from building a virtual universe?
Post by: Just thinking on 07/07/2021 14:21:48
If a virtual universe is ever up and running, how will people be able to interact with this technology? Will it be through an electrically operated head-worn attachment and eyewear that allow us to navigate and communicate throughout the virtual universe?
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 08/07/2021 10:51:06

Moore's Law is dead, right? Not if we can get working photonic computers.

Lightmatter is building a photonic computer for the biggest growth area in computing right now, and according to CEO Nick Harris, it can be ordered now and will ship at the end of this year. It's already much faster than traditional electronic computers at neural nets, machine learning for language processing, and AI for self-driving cars.

It's the world's first general purpose photonic AI accelerator, and with light multiplexing -- using up to 64 different colors of light simultaneously -- there's a long path of speed improvements ahead.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 08/07/2021 10:59:41
If a virtual universe is ever up and running, how will people be able to interact with this technology? Will it be through an electrically operated head-worn attachment and eyewear that allow us to navigate and communicate throughout the virtual universe?
At first the interface would likely be similar to currently existing human-machine interfaces, such as the monitor, camera, keyboard, mouse, touchscreen, speaker, microphone, VR and AR. But eventually, as direct brain interfaces get better and more reliable, those devices will slowly be replaced due to their speed limitations, which will become a communication bottleneck.
Title: Re: How close are we from building a virtual universe?
Post by: Just thinking on 08/07/2021 11:11:13
At first the interface would likely be similar to currently existing human-machine interfaces,
Thank you for the info, hamdani. Looks like good things are on the way. We will be like kids again.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 09/07/2021 05:56:58
At first the interface would likely be similar to currently existing human-machine interfaces,
Thank you for the info, hamdani. Looks like good things are on the way. We will be like kids again.
Some parts of the virtual universe would be intended to represent objective reality as it is, as accurately and precisely as possible. The other parts would try to simulate the consequences of our decisions, to try to achieve the best case and avoid the worst case scenario. It's similar to the mind of chess players, who memorize the current position while figuring out their possible next moves and their opponents' replies.
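The chess-player analogy maps directly onto minimax search: hold a model of the current position, then look ahead over your candidate moves and the opponent's best replies. A minimal sketch, where the game tree and its leaf scores are invented purely for illustration:

```python
# Minimal minimax: recursively explore moves and replies, and pick the
# branch that maximizes our worst-case outcome. Inner lists are positions
# with possible continuations; numeric leaves are evaluated outcomes.
def minimax(node, maximizing):
    if isinstance(node, (int, float)):  # leaf: an already-evaluated position
        return node
    scores = [minimax(child, not maximizing) for child in node]
    return max(scores) if maximizing else min(scores)

# Three candidate moves; after each, the opponent picks the reply worst for us.
game_tree = [[3, 5], [2, 9], [0, 7]]
print(minimax(game_tree, True))  # 3: the first move guarantees at least 3
```

Note how the second move looks tempting (it could reach 9), but a rational opponent forces it down to 2; planning against the opponent's best reply is exactly the "simulate consequences to avoid the worst case" idea in the post.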
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 09/07/2021 09:44:01
Quote
Scientists have made great progress to decode thoughts with artificial intelligence. In this video I summarize the most exciting recent developments.

The first paper about inferring the meaning of nouns is:

Mitchell et el
"Predicting Human Brain Activity Associated with the Meanings of Nouns"
Science, 1191-1195 (2008)
https://science.sciencemag.org/content/320/5880/1191

The paper about extracting speech from brain readings is:

Anumanchipalli, Chartier, & Chang
"Speech synthesis from neural decoding of spoken sentences"
Nature 568, 493–498 (2019)
https://www.nature.com/articles/s41586-019-1119-1?fbclid=IwAR0yFax5f_drEkQwOImIWKwCE-xdglWzL8NJv2UN22vjGGh4cMxNqewWVSo

There are more examples of the reconstructed sentences here:

https://www.ucsf.edu/news/2019/04/414296/synthetic-speech-generated-brain-recordings

The paper about extracting images from brain readings is:
Shen et al
PLoS Comput Biol. 15(1): e1006633 (2019)
https://journals.plos.org/ploscompbiol/article?id=10.1371%2Fjournal.pcbi.1006633

And the brain to text paper using handwriting is:

Willett et al
High-performance brain-to-text communication via handwriting
Nature 593, 249–254 (2021)
https://www.nature.com/articles/s41586-021-03506-2

0:00 Intro
0:33 How to measure brain activity
2:44 Brain to Word
5:42 Brain to Image
6:30 Brain to Speech
7:25 Brain to Text
8:29 Better ways to measure brain activity
10:20 Sponsor Message
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 12/07/2021 06:47:42
And this video shows how our model of reality can affect our decisions, with consequences that we will get in the future.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 13/07/2021 06:51:03

Is artificial intelligence replacing lawyers and judges? Throwback to Ronny Chieng’s report on how robots are taking over the legal system.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 14/07/2021 10:03:12
Quote
The government is introducing what it terms the 'social credit score scheme' in Hangzhou, China. The system will monitor everything from traffic offenses to how people handle their parents. It is currently being piloted in the eastern provincial capital of Hangzhou but has not yet been implemented. The government uses blacklists to limit people's actions or to refuse them such programs. The structure could create all sorts of rifts between neighbors, employers, and even mates.

Social feedback results would come in part from 'residential committees' responsible for tracking and documenting people's behavior. Social credit ratings were already rolled out in 2020 and now due to events of the recent year have only accelerated its widespread adoption. It remains to be seen if the fear of a low score would be enough to alter people's actions outside of limiting travel, regardless of government databases.
For this scenario to be successful and sustainable, the government as well as the people need to understand the universal terminal goal.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 15/07/2021 07:27:43
https://tech.fb.com/bci-milestone-new-research-from-ucsf-with-support-from-facebook-shows-the-potential-of-brain-computer-interfaces-for-restoring-speech-communication/

Quote
TL;DR: Today, we’re excited to celebrate milestone new results published by our UCSF research collaborators in The New England Journal of Medicine, demonstrating the first time someone with severe speech loss has been able to type out what they wanted to say almost instantly, simply by attempting to speak. In other words, UCSF has restored a person’s ability to communicate by decoding brain signals sent from the motor cortex to the vocal tract. This study marks an important milestone for the field of neuroscience, and it concludes Facebook’s years-long collaboration with UCSF’s Chang Lab.

These groundbreaking results show what’s possible — both in clinical settings like Chang Lab, and potentially for non-invasive consumer applications such as the optical BCI we’ve been exploring over the past four years.

To continue fostering optical BCI explorations across the field, we want to take this opportunity to open source our BCI software and share our head-mounted hardware prototypes to key researchers and other peers to help advance this important work. In the meantime, Facebook Reality Labs will focus on applying BCI concepts to our electromyography (EMG) research to dramatically accelerate wrist-based neural interfaces for intuitive AR/VR input.

The room was full of UCSF scientists and equipment — monitors and cables everywhere. But his eyes were fixed on a single screen displaying two simple words: “Good morning!”

Though unable to speak, he attempted to respond, and the word “Hello” appeared.

The screen went black, replaced by another conversational prompt: “How are you today?”

This time, he attempted to say, “I am very good,” and once again, the words appeared on the screen.

A simple conversation, yet it amounted to a significant milestone in the field of neuroscience. More importantly, it was the first time in over 16 years that he’d been able to communicate without having to use a cumbersome head-mounted apparatus to type out what he wanted to say, after experiencing near full paralysis of his limbs and vocal tract following a series of strokes. Now he simply had to attempt speaking, and a computer could share those words in real time — no typing required.
Title: Re: How close are we from building a virtual universe?
Post by: Just thinking on 15/07/2021 12:29:22
This time, he attempted to say, “I am very good,” and once again, the words appeared on the screen.
That is amazing: outright mind-reading in action.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 16/07/2021 01:21:35
That is amazing: outright mind-reading in action.
When the technology is refined, it could revolutionize our communication. All the conversation in this thread would be finished in just a few seconds.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 18/07/2021 01:28:39
https://scitechdaily.com/the-virus-trap-hollow-nano-objects-made-of-dna-could-trap-viruses-and-render-them-harmless/
Quote
To date, there are no effective antidotes against most virus infections. An interdisciplinary research team at the Technical University of Munich (TUM) has now developed a new approach: they engulf and neutralize viruses with nano-capsules tailored from genetic material using the DNA origami method. The strategy has already been tested against hepatitis and adeno-associated viruses in cell cultures. It may also prove successful against coronaviruses.

There are antibiotics against dangerous bacteria, but few antidotes to treat acute viral infections. Some infections can be prevented by vaccination but developing new vaccines is a long and laborious process.

Now an interdisciplinary research team from the Technical University of Munich, the Helmholtz Zentrum München, and the Brandeis University (USA) is proposing a novel strategy for the treatment of acute viral infections: The team has developed nanostructures made of DNA, the substance that makes up our genetic material, that can trap viruses and render them harmless.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 19/07/2021 07:59:16
Why “probability of 0” does not mean “impossible” | Probabilities of probabilities, part 2

Quote
Curious about measure theory?  This does require some background in real analysis, but if you want to dig in, here is a textbook by the always great Terence Tao.
https://terrytao.files.wordpress.com/...

Also, for the real analysis buffs among you, there was one statement I made in this video that is a rather nice puzzle.  Namely, if the probabilities for each value in a given range (of the real number line) are all non-zero, no matter how small, their sum will be infinite.  This isn't immediately obvious, given that you can have convergent sums of countably infinitely many values, but if you're up for it, see if you can prove that the sum of any uncountably infinite collection of positive values must blow up to infinity.
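The puzzle in that quote has a short solution sketch (my own argument, not from the video): group the values by size, and note that uncountably many positive numbers cannot all be "small enough".

```latex
% Claim: if I is uncountable and x_i > 0 for every i \in I, then \sum_{i \in I} x_i = \infty.
% Group the indices by how large their value is:
A_n = \{\, i \in I : x_i > \tfrac{1}{n} \,\}, \qquad I = \bigcup_{n=1}^{\infty} A_n .
% If every A_n were finite, I would be a countable union of finite sets,
% hence countable -- contradiction. So some A_N is infinite, and summing
% any m of its terms already gives
\sum_{i \in I} x_i \;\ge\; \underbrace{\tfrac{1}{N} + \cdots + \tfrac{1}{N}}_{m \text{ terms}} \;=\; \frac{m}{N} \;\xrightarrow{\; m \to \infty \;}\; \infty .
```

This is why continuous distributions must assign probability 0 to individual points even though those points are not impossible, which is the theme of the video.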
Title: Re: How close are we from building a virtual universe?
Post by: Just thinking on 20/07/2021 18:54:58
Also, for the real analysis buffs among you, there was one statement I made in this video that is a rather nice puzzle.  Namely, if the probabilities for each value in a given range (of the real number line) are all non-zero, no matter how small, their sum will be infinite.  This isn't immediately obvious, given that you can have convergent sums of countably infinitely many values, but if you're up for it, see if you can prove that the sum of any uncountably infinite collection of positive values must blow up to infinity.
I found the video very difficult to understand, as my brain is not wired for this logic. I can understand simple statistics and likelihoods. As with the coin flip, my way of seeing this is that the likelihood of the coin landing on the same side 10 times is 1 in 1,024, or the likelihood of the coin landing 5 up and 5 down is 50/50. The likelihood is a simple statistical chance and is by no means a constant.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 21/07/2021 08:35:59
Also, for the real analysis buffs among you, there was one statement I made in this video that is a rather nice puzzle.  Namely, if the probabilities for each value in a given range (of the real number line) are all non-zero, no matter how small, their sum will be infinite.  This isn't immediately obvious, given that you can have convergent sums of countably infinitely many values, but if you're up for it, see if you can prove that the sum of any uncountably infinite collection of positive values must blow up to infinity.
I found the video very difficult to understand, as my brain is not wired for this logic. I can understand simple statistics and likelihoods. As with the coin flip, my way of seeing this is that the likelihood of the coin landing on the same side 10 times is 1 in 1,024, or the likelihood of the coin landing 5 up and 5 down is 50/50. The likelihood is a simple statistical chance and is by no means a constant.
Try this.
https://www.omnicalculator.com/statistics/coin-flip-probability
(https://www.thenakedscientists.com/forum/index.php?action=dlattach;topic=77747.0;attach=32208)
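The calculator linked above boils down to the binomial distribution. A minimal sketch (the function name is mine), assuming a fair, independent coin:

```python
from math import comb

def prob_k_heads(n, k, p=0.5):
    """Probability of exactly k heads in n independent flips of a coin
    that lands heads with probability p (binomial distribution)."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

# Same side 10 times in a row (say, all heads): 1 in 1024, as stated.
print(prob_k_heads(10, 10))  # 0.0009765625  (= 1/1024)

# Exactly 5 heads and 5 tails in 10 flips:
print(prob_k_heads(10, 5))   # 0.24609375    (= 252/1024)
```

Note that the 1-in-1,024 figure above is right, but the 5-and-5 split, while the single most likely outcome, has probability 252/1024, about 24.6%, not 50/50.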
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 23/07/2021 05:56:12
https://scitechdaily.com/deepmind-releases-accurate-picture-of-the-human-proteome-the-most-significant-contribution-ai-has-made-to-advancing-scientific-knowledge-to-date/
Quote
DeepMind and EMBL release the most complete database of predicted 3D structures of human proteins.

Partners use AlphaFold, the AI system recognized last year as a solution to the protein structure prediction problem, to release more than 350,000 protein structure predictions including the entire human proteome to the scientific community.

DeepMind today announced its partnership with the European Molecular Biology Laboratory (EMBL), Europe’s flagship laboratory for the life sciences, to make the most complete and accurate database yet of predicted protein structure models for the human proteome. This will cover all ~20,000 proteins expressed by the human genome, and the data will be freely and openly available to the scientific community. The database and artificial intelligence system provide structural biologists with powerful new tools for examining a protein’s three-dimensional structure, and offer a treasure trove of data that could unlock future advances and herald a new era for AI-enabled biology.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 24/07/2021 06:16:59
https://www.bbc.co.uk/news/technology-57942909
Quote
Mark Zuckerberg has laid out his vision to transform Facebook from a social media network into a “metaverse company” in the next five years.

A metaverse is an online world where people can game, work and communicate in a virtual environment, often using VR headsets.

The Facebook CEO described it as “an embodied internet where instead of just viewing content - you are in it”.
It looks like it's closer than many of us think.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 26/07/2021 12:59:23
https://neurosciencenews.com/aging-junk-dna-18975/
Potential Role of ‘Junk DNA’ Sequence in Aging and Cancer Identified
Quote
Summary: VNTR2-1, a recently identified region of DNA, appears to drive the activity of the telomerase gene. The telomerase gene has previously been found to prevent aging in specific cells.

Source: Washington State University
Quote
The telomerase gene controls the activity of the telomerase enzyme, which helps produce telomeres, the caps at the end of each strand of DNA that protect the chromosomes within our cells. In normal cells, the length of telomeres gets a little bit shorter every time cells duplicate their DNA before they divide. When telomeres get too short, cells can no longer reproduce, causing them to age and die.

However, in certain cell types–including reproductive cells and cancer cells–the activity of the telomerase gene ensures that telomeres are reset to the same length when DNA is copied. This is essentially what restarts the aging clock in new offspring but is also the reason why cancer cells can continue to multiply and form tumors.

Knowing how the telomerase gene is regulated and activated and why it is only active in certain types of cells could someday be the key to understanding how humans age, as well as how to stop the spread of cancer. That is why Zhu has focused the past 20 years of his career as a scientist solely on the study of this gene.

Zhu said that his team’s latest finding that VNTR2-1 helps to drive the activity of the telomerase gene is especially notable because of the type of DNA sequence it represents.

“Almost 50% of our genome consists of repetitive DNA that does not code for protein,” Zhu said. “These DNA sequences tend to be considered as ‘junk DNA’ or dark matters in our genome, and they are difficult to study. Our study describes that one of those units actually has a function in that it enhances the activity of the telomerase gene.”

Their finding is based on a series of experiments that found that deleting the DNA sequence from cancer cells–both in a human cell line and in mice–caused telomeres to shorten, cells to age, and tumors to stop growing.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 28/07/2021 13:50:37
An article by an AI about AIs writing articles
(https://pbs.twimg.com/media/E7U6tdcX0AMAFb9?format=jpg&name=large)
https://twitter.com/Sentdex/status/1420105928775503882?s=20
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 03/08/2021 18:09:43
When I Googled articles for virtual universe, I got these.

https://www.nature.com/articles/509170a
Quote
A numerical simulation of cosmic structure formation reproduces both large- and smaller-scale features of a representative volume of the Universe from early in its history to the present day. See Article p.177

Perhaps the greatest triumph of modern cosmology is that a model with only six parameters can explain the vast majority of observational data from the first minutes of the Universe to the present day1. This standard model posits that 95% of the Universe today is composed of enigmatic 'dark matter' and 'dark energy'. Paradoxically, modelling the dynamics of the remaining 5% — normal, 'baryonic' matter — has proved to be the more challenging task. On page 177 of this issue, Vogelsberger et al.2 describe a numerical simulation of the formation of cosmic structure that captures both the large-scale distribution of baryonic material and its properties in individual galactic systems through cosmic time.

https://en.wikipedia.org/wiki/Virtual_world
Quote
A virtual world (also called a virtual space) is a computer-simulated environment[1] which may be populated by many users who can create a personal avatar, and simultaneously and independently explore the virtual world, participate in its activities and communicate with others.[2] These avatars can be textual,[3] graphical representations, or live video avatars with auditory and touch sensations.[4][5]

The user accesses a computer-simulated world which presents perceptual stimuli to the user, who in turn can manipulate elements of the modeled world and thus experience a degree of presence.[6] Such modeled worlds and their rules may draw from reality or fantasy worlds. Example rules are gravity, topography, locomotion, real-time actions, and communication. Communication between users can range from text, graphical icons, visual gesture, sound, and rarely, forms using touch, voice command, and balance senses.

http://spaceengine.org/
Quote
SpaceEngine is a realistic virtual Universe you can explore on your computer. You can travel from star to star, from galaxy to galaxy, landing on any planet, moon, or asteroid with the ability to explore its alien landscape. You can alter the speed of time and observe any celestial phenomena you please. All transitions are completely seamless, and this virtual universe has a size of billions of light-years across and contains trillions upon trillions of planetary systems. The procedural generation is based on real scientific knowledge, so SpaceEngine depicts the universe the way it is thought to be by modern science. Real celestial objects are also present if you want to visit them, including the planets and moons of our Solar system, thousands of nearby stars with newly discovered exoplanets, and thousands of galaxies that are currently known.

They seem to focus on simulating objective reality, with precision and accuracy as their highest priorities, respectively. Yet something more important is still missing if we treat the virtual universe as a tool to help us achieve the universal terminal goal, as this thread intends: relevance.

Let's say that someday, somehow, we can create a detailed and accurate simulation of a distant planet that we can't reach and that won't affect us in the foreseeable future. The resources used to create that simulation would be better spent simulating other parts of the universe that are more relevant to achieving the universal terminal goal.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 04/08/2021 11:57:40
A virtual universe doesn't have to cover the whole universe. A small part of it is enough. The bare minimum is that something is used to represent a characteristic or property of something else.

The virtual universe itself can be characterized by three criteria: precision, accuracy, and relevance.
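The first two criteria can be quantified in the usual statistical way: accuracy is how close the model's (or instrument's) average output is to the true value, while precision is how tightly repeated outputs cluster. A minimal sketch, where the function names and example numbers are my own illustration:

```python
import statistics

def accuracy_error(measurements, true_value):
    """Accuracy: distance between the mean of the measurements and the truth."""
    return abs(statistics.mean(measurements) - true_value)

def precision_spread(measurements):
    """Precision: how tightly the measurements cluster (sample std deviation)."""
    return statistics.stdev(measurements)

true_value = 10.0
# A precise but inaccurate instrument: tight cluster, offset from the truth.
biased = [10.2, 10.3, 10.2, 10.3]
# An accurate but imprecise instrument: scattered, but centered on the truth.
noisy = [8.0, 12.0, 9.0, 11.0]

print(accuracy_error(biased, true_value), precision_spread(biased))
print(accuracy_error(noisy, true_value), precision_spread(noisy))
```

The two instruments fail on opposite criteria, which is why both numbers are needed to compare candidate models of reality.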

 
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 04/08/2021 12:04:22
In its general form, having a virtual universe is what differentiates conscious entities from non-conscious ones; for example, smart cars versus dumb cars.
As I described earlier, consciousness here means the ability of a system to determine its own future. That's the definition most relevant to the universal terminal goal, the main subject of my thread.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 04/08/2021 12:58:57
Our mental map of our surroundings is a form of virtual universe. Simpler organisms also have simpler versions of it. Among unicellular organisms, the CRISPR system, a defense mechanism, is an outstanding example: they memorize the genetic code of invading viruses in the form of DNA, which is perhaps the only long-term data storage they have. At a glance it may look costly, but it turns out that the benefits outweigh the costs.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 04/08/2021 23:07:40
They memorize the genetic code of invading viruses in the form of DNA, which is perhaps the only long-term data storage they have.
This reminds us that a virtual universe doesn't have to be in electronic form. It can even be made of the same materials as the system being represented. For example,
Quote
Nearly 8,000 miles from Osama bin Laden's lair, Navy Seal Team Six trained in a mock-up of the compound at a North Carolina Defense Department facility.
https://www.cnet.com/tech/services-and-software/bing-map-shows-cias-secret-bin-laden-compound-mock-up/

The advantages of such a model are its accessibility and its safety for experiments to optimize planning, which means reducing the cost of trial and error.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 08/08/2021 16:31:57
Among unicellular organisms, the CRISPR system, a defense mechanism, is an outstanding example of a virtual universe. They memorize the genetic code of invading viruses in the form of DNA, which is perhaps the only long-term data storage they have. At a glance it may look costly, but it turns out that the benefits outweigh the costs.
This case emphasizes the relevance criterion of a virtual universe: if something is important, you do it anyway, even if it's hard, costly, or dangerous.
Bacteria don't seem to care what their environment looks like beyond a few microns from their current position, but they do care about a chunk of viral DNA that can infect them, because it's relevant to their survival and future existence.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 10/08/2021 08:09:06
Let's say you receive a message from a friend that a fajlusd is a phidgymb. This message is meaningless until you can relate those terms to things you already know.

We accumulate knowledge by relating new information to information we are already familiar with; analogy is one example.

We build our virtual universe by adding new information and relating it to what's already there. The first piece of knowledge we have is our own existence, as asserted by Descartes' cogito ergo sum. It becomes the nucleation site of the virtual universe.

Written language created a form of virtual universe outside the minds of living organisms. Supercomputer servers in the cloud are just an advanced version of it.

Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 12/08/2021 04:38:08
OpenAI can translate English into code with its new machine learning software Codex
‘We see this as a tool to multiply programmers’
https://www.theverge.com/2021/8/10/22618128/openai-codex-natural-language-into-code-api-beta-access
Quote
AI research company OpenAI is releasing a new machine learning tool that translates the English language into code. The software is called Codex and is designed to speed up the work of professional programmers, as well as help amateurs get started coding.

In demos of Codex, OpenAI shows how the software can be used to build simple websites and rudimentary games using natural language, as well as translate between different programming languages and tackle data science queries. Users type English commands into the software, like “create a webpage with a menu on the side and title at the top,” and Codex translates this into code. The software is far from infallible and takes some patience to operate, but could prove invaluable in making coding faster and more accessible.

“We see this as a tool to multiply programmers,” OpenAI’s CTO and co-founder Greg Brockman told The Verge. “Programming has two parts to it: you have ‘think hard about a problem and try to understand it,’ and ‘map those small pieces to existing code, whether it’s a library, a function, or an API.’” The second part is tedious, he says, but it’s what Codex is best at. “It takes people who are already programmers and removes the drudge work.”

OpenAI used an earlier version of Codex to build a tool called Copilot for GitHub, a code repository owned by Microsoft, which is itself a close partner of OpenAI. Copilot is similar to the autocomplete tools found in Gmail, offering suggestions on how to finish lines of code as users type them out. OpenAI’s new version of Codex, though, is much more advanced and flexible, not just completing code, but creating it.

Codex is built on the top of GPT-3, OpenAI’s language generation model, which was trained on a sizable chunk of the internet, and as a result can generate and parse the written word in impressive ways. One application users found for GPT-3 was generating code, but Codex improves upon its predecessors’ abilities and is trained specifically on open-source code repositories scraped from the web.

Quote
“Sometimes it doesn’t quite know exactly what you’re asking,” laughs Brockman. He has a few more tries, then comes up with a command that works without this unwanted change. “So you had to think a little about what’s going on but not super deeply,” he says.

This is fine in our little demo, but it says a lot about the limitations of this sort of program. It’s not a magic genie that can read your brain, turning every command into flawless code — nor does OpenAI claim it is. Instead, it requires thought and a little trial and error to use. Codex won’t turn non-coders into expert programmers overnight, but it’s certainly much more accessible than any other programming language out there.

OpenAI is bullish about the potential of Codex to change programming and computing more generally. Brockman says it could help solve the programmer shortage in the US, while Zaremba sees it as the next step in the historical evolution of coding.

“What is happening with Codex has happened before a few times,” he says. In the early days of computing, programming was done by creating physical punch cards that had to be fed into machines, then people invented the first programming languages and began to refine these. “These programming languages, they started to resemble English, using vocabulary like ‘print’ or ‘exit’ and so more people became able to program.” The next part of this trajectory is doing away with specialized coding languages altogether and replacing it with English language commands.

“Each of these stages represents programming languages becoming more high level,” says Zaremba. “And we think Codex is bringing computers closer to humans, letting them speak English rather than machine code.” Codex itself can speak more than a dozen coding languages, including JavaScript, Go, Perl, PHP, Ruby, Swift, and TypeScript. It’s most proficient, though, in Python.

Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 12/08/2021 05:13:09
Technological Singularity: An Impending "Intelligence Explosion"
We know it’s coming, but is it likely to happen soon?
https://interestingengineering.com/technological-singularity-an-impending-intelligence-explosion
Quote
In this century, humanity is predicted to undergo a transformative experience, the likes of which have not been seen since we first began to speak, fashion tools, and plant crops. This experience goes by various names - "Intelligence Explosion," "Accelerando," "Technological Singularity" - but they all have one thing in common.

They all come down to the hypothesis that accelerating change, technological progress, and knowledge will radically change humanity. In its various forms, this theory cites concepts like the iterative nature of technology, advances in computing, and historical instances where major innovations led to explosive growth in human societies.

Many proponents believe that this "explosion" or "acceleration" will take place sometime during the 21st century. While the specifics are subject to debate, there is general consensus among proponents that it will come down to developments in the fields of computing and artificial intelligence (AI), robotics, nanotechnology, and biotechnology.

In addition, there are differences in opinion as to how it will take place, whether it will be the result of ever-accelerating change, a runaway acceleration triggered by self-replicating and self-upgrading machines, an "intelligence explosion" caused by the birth of an advanced and independent AI, or the result of biotechnological augmentation and enhancement.

Opinions also differ on whether or not this will be felt as a sudden switch-like event or a gradual process spread out over time which might not have a definable beginning or inflection point. But either way, it is agreed that once the Singularity does occur, life will never be the same again. In this respect, the term "singularity" - which is usually used in the context of black holes - is quite apt because it too has an event horizon, a point in time where our capacity to understand its implications breaks down.
(https://inteng-storage.s3.amazonaws.com/img/iea/lV6DbMvDOx/sizes/paradigmshiftsfrr15eventssvg_resize_md.png)
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 12/08/2021 05:25:57
Quote
The use of the term "singularity" in this context first appeared in an article written by Stanislav Ulam about the life and accomplishments of John von Neumann. In the course of recounting opinions his friend held, Ulam described how the two talked at one point about accelerating change:

"One conversation centered on the ever-accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which  human affairs, as we know them, could not continue."

However, the idea that humanity may one day achieve an "intelligence explosion" has some precedent that predates Ulam's description. Mahendra Prasad of UC Berkeley, for example, credits 18th-century mathematician Nicolas de Condorcet with making the first recorded prediction, as well as creating the first model for it.

In his essay, Sketch for a Historical Picture of the Progress of the Human Mind: Tenth Epoch (1794), de Condorcet expressed how knowledge acquisition, technological development, and human moral progress were subject to acceleration:

"How much greater would be the certainty, how much more vast the scheme of our hopes if... these natural [human] faculties themselves and this [human body] organization could also be improved?... The improvement of medical practice... will become more efficacious with the progress of reason...

"[W]e are bound to believe that the average length of human life will forever increase... May we not extend [our] hopes [of perfectibility] to the intellectual and moral faculties?... Is it not probable that education, in perfecting these qualities, will at the same time influence, modify, and perfect the [physical] organization?"
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 12/08/2021 07:31:18
Data compression is at the heart of virtual universe.

Huffman Codes: An Information Theory Perspective

Quote
Huffman Codes are one of the most important discoveries in the field of data compression. When you first see them, they almost feel obvious in hindsight, mainly due to how simple and elegant the algorithm ends up being. But there's an underlying story of how they were discovered by Huffman and how he built the idea from early ideas in information theory that is often missed. This video is all about how information theory inspired the first algorithms in data compression, which later provided the groundwork for Huffman's landmark discovery.

0:00 Intro
2:02 Modeling Data Compression Problems
6:20 Measuring Information
8:14 Self-Information and Entropy
11:03 The Connection between Entropy and Compression
16:47 Shannon-Fano Coding
19:52 Huffman's Improvement
24:10 Huffman Coding Examples
26:10 Huffman Coding Implementation
27:08 Recap
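To make the compression idea concrete, here is a minimal Huffman coder in Python. This is my own sketch of the algorithm the video describes, not the video's code: the two least frequent subtrees are repeatedly merged, and frequent symbols end up with shorter codewords.

```python
import heapq
from collections import Counter

def huffman_codes(text):
    """Build a Huffman code table: frequent symbols get shorter codewords."""
    freq = Counter(text)
    # Heap entries: (frequency, unique tiebreaker, {symbol: codeword-so-far}).
    heap = [(f, i, {sym: ""}) for i, (sym, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    if not heap:
        return {}
    if len(heap) == 1:  # degenerate single-symbol input
        return {sym: "0" for sym in heap[0][2]}
    counter = len(heap)
    while len(heap) > 1:
        f1, _, t1 = heapq.heappop(heap)
        f2, _, t2 = heapq.heappop(heap)
        # Merge the two least frequent subtrees, prefixing their codewords.
        merged = {s: "0" + c for s, c in t1.items()}
        merged.update({s: "1" + c for s, c in t2.items()})
        heapq.heappush(heap, (f1 + f2, counter, merged))
        counter += 1
    return heap[0][2]

codes = huffman_codes("abracadabra")
encoded = "".join(codes[ch] for ch in "abracadabra")
print(codes, len(encoded))
```

For "abracadabra" the frequent symbol 'a' gets a 1-bit codeword and the rest get 3 bits, so the 11 symbols compress to 23 bits instead of the 33 a fixed 3-bit code would need, which is the sense in which data compression distills a signal down toward its entropy.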
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 18/08/2021 09:32:39
Is a Knowledge Graph capable of capturing human knowledge?
https://alessandro-negro.medium.com/is-a-knowledge-graph-capable-of-capturing-human-knowledge-8521162f06b2
Quote
In recent years Knowledge Graphs have been used to solve one of the biggest problems not only in machine learning but also in computer science in general: how to represent knowledge.
“Knowledge representation and reasoning is the area of Artificial Intelligence (AI) concerned with how knowledge can be represented symbolically and manipulated in an automated way by reasoning programs. More informally, it is the part of AI that is concerned with thinking, and how thinking contributes to intelligent behavior.” [Brachman and Levesque, 2004]
This aspect is critical since any “agent” — human, animal, electronic, mechanical, to behave intelligently, requires knowledge. Think about us as humans, for a very wide range of activities, we make decisions based on what we effortlessly and unconsciously know (or believe) about the world. Our [intelligent] behaviour is clearly conditioned, if not dominated, by knowledge.
Knowledge representation and reasoning focuses on the knowledge, not the knower. In this context, a graph based representation is becoming one of the most prominent approaches, thanks to its flexibility of representing concepts and relationships amongst them in a simple and generic data structure.
Quote
What is a Knowledge Graph?
For this question there is no gold standard, universally accepted definition, but my favorite is the one given by Gomez-Perez et al. [Gomez-Perez et al., 2020]:
“A knowledge graph consists of a set of interconnected typed entities and their attributes.”
According to this definition, the basic unit of a Knowledge Graph is the representation of an entity, such as a person, organization, or location, or perhaps a sporting event or a book or movie. Each entity might have various attributes. For a person, those attributes would include the name, address, birth date, and so on. Entities are connected to each other by relations: for example, a person works for a company, and a user likes a page or follows another user. Relations can also be used to bridge two separate Knowledge Graphs [Negro, 2021].

Quote
Conclusion
This blog post formally demonstrates how Knowledge Graphs are concretely capable of representing the knowledge available in multiple domains not only in a way that facilitates, at first glance, its exploration and navigation for analysts. The inherent structures and the forces that drive the connection among the entities in the graph coming from the related domain (in our example the biological rules) can be captured and analyzed also by artificial and autonomous agents. The classification represented here is just an example of how machine learning algorithms can be properly fed by graph in such a manner that would be impossible or very hard otherwise. In order to obtain the same accuracy we would have had to collect many common features related to each of the entities we wanted to classify.
It is worth noting here that this effort doesn’t go in the direction of replacing the human capability to analyze this knowledge but it is an empowerment. The capability of processing enormous amounts of data goes beyond human possibilities. Nevertheless, this is why machine learning has been introduced. In any case at the end of these processes, it is always human responsibility to evaluate the insights and, more in general, the results of this analysis and to make informed and wiser decisions based on them.
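The quoted definition ("a set of interconnected typed entities and their attributes") is simple enough to illustrate with a deliberately tiny sketch. The entity and relation names below are invented for illustration, not taken from the article:

```python
# Entities carry a type and attributes; relations connect entity ids.
entities = {
    "alice": {"type": "Person", "name": "Alice"},
    "acme":  {"type": "Organization", "name": "Acme Corp"},
}
relations = [("alice", "WORKS_FOR", "acme")]

def neighbors(entity_id, relation):
    """Follow a named relation from one entity to its target entities."""
    return [t for (s, r, t) in relations if s == entity_id and r == relation]

print(neighbors("alice", "WORKS_FOR"))  # ['acme']
```

Production knowledge graphs add indexing, schemas, and inference on top, but the core data structure really is this small: typed nodes plus labeled edges.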
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 20/08/2021 05:41:04
Self-awareness is one criterion for consciousness. We can learn about it from something similar to us but simpler, as shown in the article below.

https://scitechdaily.com/human-brain-organoids-grown-in-lab-with-eyes-that-respond-to-light/
Quote
Human induced pluripotent stem cells (iPSCs) can be used to generate brain organoids containing an eye structure called the optic cup, according to a study published on August 17, 2021, in the journal Cell Stem Cell. The organoids spontaneously developed bilaterally symmetric optic cups from the front of the brain-like region, demonstrating the intrinsic self-patterning ability of iPSCs in a highly complex biological process.

“Our work highlights the remarkable ability of brain organoids to generate primitive sensory structures that are light sensitive and harbor cell types similar to those found in the body,” says senior study author Jay Gopalakrishnan of University Hospital Düsseldorf. “These organoids can help to study brain-eye interactions during embryo development, model congenital retinal disorders, and generate patient-specific retinal cell types for personalized drug testing and transplantation therapies.”
(https://scitechdaily.com/images/Brain-Organoid-With-Optic-Cups-768x681.jpg)

(https://scitechdaily.com/images/Development-of-Brain-Organoid-With-Optic-Cups-768x645.jpg)

Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 21/08/2021 06:05:56

https://jrodthoughts.medium.com/deepminds-idea-to-build-neural-networks-that-can-replay-past-experiences-just-like-humans-do-f9d7721473ac
Quote
DeepMind’s Idea to Build Neural Networks that can Replay Past Experiences Just Like Humans Do
DeepMind researchers created a model to be able to replay past experiences in a way that simulate the mechanisms in the hippocampus.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 22/08/2021 04:44:25
I think this is a major milestone in our effort to build a virtual universe.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 27/08/2021 09:04:18
https://www.digitaltrends.com/computing/ai-leading-chip-design-revolution/
A.I. is leading a chip design revolution, and it’s only just getting started
Quote
For decades, constant innovation in the world of semiconductor chip design has made processors faster, more efficient, and easier to produce. Artificial intelligence (A.I.) is leading the next wave of innovation, trimming the chip design process from years to months by making it fully autonomous.

Google, Nvidia, and others have showcased specialized chips designed by A.I., and electronic design automation (EDA) companies have already leveraged A.I. to speed up chip design. Software company Synopsys has a broader vision: Chips designed by A.I. from start to finish.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 27/08/2021 09:13:50
Visa Enters Metaverse With First NFT Purchase
https://www.forbes.com/sites/ninabambysheva/2021/08/23/visa-enters-metaverse-with-first-nft-purchase/?sh=623ac6d668b3
Quote
On August 18, digital payments giant Visa spent $150,000 to buy a unique work of art, and in so doing quietly took its first step into the metaverse, a nascent online world that promises to transform the internet into a virtual reality.

Instead of canvas or marble, the pixelated artwork, named CryptoPunk 7610, is what’s known as a non-fungible token (NFT), a unique digital asset which, similarly to bitcoin, certifies the authenticity, ownership and provenance of any digital object written to a blockchain. One of the 10,000 24x24 pixel images of the CryptoPunk collection, generated algorithmically, Visa’s first NFT is an avatar of a female character, distinguishable by a mohawk, large green eyes and bright red lipstick.

However, the company didn’t actually custody the 49.5 ETH, paid for the token, or the asset itself. Instead, newly licensed bank, Anchorage, has helped facilitate the deal, and importantly became the first known U.S. bank to custody one of these novel assets.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 28/08/2021 16:05:39
We should be careful which metaverse we choose to live in

Quote

   The great thing about the future is you can make it up. If present day reality is messier than you had hoped, then you can construct an alternative one, where everything is much cleaner. So it is with the latest West Coast infatuation with the metaverse. Now that the Federal Trade Commission is hammering on Big Tech’s door and even the Taliban is using audio app Clubhouse, maybe it is time to add a shiny new dimension to the future. 

The term metaverse comes from Snow Crash, a 1992 science fiction novel by Neal Stephenson, in which human avatars and software daemons inhabit a parallel 3D universe. The term now has a life of its own and has cropped up recently in chief executive presentations from Microsoft’s Satya Nadella and Facebook’s Mark Zuckerberg. 

https://www.ft.com/content/bcac6b61-7b11-4469-99b7-c125311fa34d
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 28/08/2021 16:17:33
The good thing about virtualization is that it allows us to perform trial-and-error experiments with fewer resources than doing them in real life. But to be useful, it must be tied to objective reality at least at some points.

Unicellular organisms perform those trial-and-error experiments all the time with their multiple duplicate copies. Some may survive in each generation, but most will die, which makes the experiment inefficient.
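The cost argument can be shown with a toy random search: many candidates are evaluated in a cheap simulated model, and only the winner would ever be "built" in the real world. Everything here (the objective function, the candidate encoding, the target value) is invented purely for illustration:

```python
import random

def simulated_score(candidate):
    """Cheap stand-in for testing a design in a virtual model of reality."""
    target = 7.5  # hypothetical optimum, unknown to the searcher
    return -abs(candidate - target)

random.seed(0)  # reproducible run
candidates = [random.uniform(0, 10) for _ in range(1000)]
best = max(candidates, key=simulated_score)
# Virtual trials are cheap, so we can afford 1000 of them; only `best`
# would ever be tried in the expensive real world.
print(best)
```

A bacterial colony runs the same search, but each failed candidate costs a whole organism; the virtual version discards failures at the price of a few CPU cycles.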
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 29/08/2021 03:03:52
Quote
According to Boston Dynamics, Atlas uses “perception” to navigate the world. The company’s website states that Atlas uses “depth sensors to generate point clouds of the environment and detect its surroundings.” This is similar to the technology used in self-driving cars to detect roads, objects, and people in their surroundings.

This is another shortcut that the AI community has been taking. Human vision doesn’t rely on depth sensors. We use stereo vision, parallax motion, intuitive physics, and feedback from all our sensory systems to create a mental map of the environment. Our perception of the world is not perfect and can be duped, but it’s good enough to make us excellent navigators of the physical world most of the time.

https://venturebeat.com/2021/08/27/inside-boston-dynamics-project-to-create-humanoid-robots/
Title: Re: How close are we from building a virtual universe?
Post by: Eternal Student on 30/08/2021 03:41:06
Hi.
 
I'm a CS/P
  A Chartered Society of Physiotherapists  is what Google puts top of the list for the acronym.  So I've got to ask what is a CS  or a CSP?
Best Wishes.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 30/08/2021 05:16:28
Well, depends what U mean, by build, and what size.
Just use standard definitions, unless stated otherwise.
Quote
build: construct (something) by putting parts or material together.
Quote
size: the relative extent of something; a thing's overall dimensions or magnitude; how big something is.

Actually, I have absolutely no doubt/s that our cosmos is a simulation, and that we are VR.
And I'm not the only one, who thinks so, of course.
(The universe is my real/physical/HW based/classical model and
  the cosmos   is my SW based/virtual/quantum model).

Quote
cos·mos: the universe seen as a well-ordered whole.
Quote
universe: all existing matter and space considered as a whole; the cosmos.
What makes you think that the cosmos is virtual/quantum?
What makes you think that the real/physical universe is classical?
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 30/08/2021 05:21:40
The articles below show how a virtual universe can be useful and practical.
https://scitechdaily.com/putting-a-super-cork-on-the-coronavirus-new-hope-in-the-battle-against-covid-19/
Quote
Therapeutic approach developed by Weizmann Institute scientists could spell new hope in the battle against COVID-19.

Even though vaccines may be steering the world toward a post-pandemic normal, a constantly mutating SARS-CoV-2 necessitates the development of effective drugs. In a new study published in Nature Microbiology, Weizmann Institute of Science researchers, together with collaborators from the Pasteur Institute, France, and the National Institutes of Health (NIH), USA, offer a novel therapeutic approach to combating the notorious virus. Rather than targeting the viral protein responsible for the virus entering the cell, the team of researchers addressed the protein on our cells’ membrane that enables this entry. Using an advanced artificial evolution method that they developed, the researchers generated a molecular “super cork” that physically jams this “entry port,” thus preventing the virus from attaching itself to the cell and entering it.

https://scitechdaily.com/new-achilles-heel-of-coronavirus-aptamer-molecule-attacks-coronavirus-in-a-novel-way
Quote
Active ingredient inhibits infection with so-called pseudoviruses in the test tube, as shown by study at the University of Bonn.

Scientists at the University of Bonn and the caesar research center have isolated a molecule that might open new avenues in the fight against SARS coronavirus 2. The active ingredient binds to the spike protein that the virus uses to dock to the cells it infects. This prevents them from entering the respective cell, at least in the case of model viruses. It appears to do this by using a different mechanism than previously known inhibitors. The researchers therefore suspect that it may also help against viral mutations. The study will be published in the journal Angewandte Chemie and is already available online.

The novel active ingredient is a so-called aptamer. These are short chains of DNA, the chemical compound that also makes up chromosomes. DNA chains like to attach themselves to other molecules; one might call them sticky. In chromosomes, DNA is therefore present as two parallel strands whose sticky sides face each other and that coil around each other like two twisted threads.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 30/08/2021 11:08:03
What makes me think our cosmos is a simulation? All the quantum paradoxes.
I have absolutely no doubt/s that something like that is possible only inside a computer, by a computer.
Have you considered the possibility that we have misunderstood something in those paradoxes? Some false assumptions, maybe?
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 31/08/2021 06:57:54
I/have wasted my life/time trying to explain all those paradoxes away, classically,
only to realise in my old age/the end that it can't be done.
Prior to Newton, the movement of the planets was impossible to explain naturally. Even Newton thought that electromagnetic phenomena were too mysterious.

Whenever we get an unexpected result, there must be at least one false assumption we've made, either explicitly or implicitly. We just need to identify all the assumptions employed to form our expectations, then determine which of them are not necessarily true.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 31/08/2021 07:00:32
"A quantum experiment suggests there’s no such thing as objective reality | MIT Technology Review" https://www.technologyreview.com/2019/03/12/136684/a-quantum-experiment-suggests-theres-no-such-thing-as-objective-reality
Quote
Physicists have long suspected that quantum mechanics allows two observers to experience different, conflicting realities. Now they’ve performed the first experiment that proves it.

Quote
The idea that observers can ultimately reconcile their measurements of some kind of fundamental reality is based on several assumptions. The first is that universal facts actually exist and that observers can agree on them.

But there are other assumptions too. One is that observers have the freedom to make whatever observations they want. And another is that the choices one observer makes do not influence the choices other observers make—an assumption that physicists call locality.

If there is an objective reality that everyone can agree on, then these assumptions all hold.

But Proietti and co’s result suggests that objective reality does not exist. In other words, the experiment suggests that one or more of the assumptions—the idea that there is a reality we can agree on, the idea that we have freedom of choice, or the idea of locality—must be wrong.

Of course, there is another way out for those hanging on to the conventional view of reality. This is that there is some other loophole that the experimenters have overlooked. Indeed, physicists have tried to close loopholes in similar experiments for years, although they concede that it may never be possible to close them all.
Nevertheless, the work has important implications for the work of scientists. “The scientific method relies on facts, established through repeated measurements and agreed upon universally, independently of who observed them,” say Proietti and co. And yet in the same paper, they undermine this idea, perhaps fatally.


Claiming that there's no objective reality is an extraordinary claim, hence it requires extraordinary evidence. But we keep seeing this kind of research from time to time. Perhaps that's what it takes to get more attention.
Before they put the blame on the existence of objective reality, perhaps they should scrutinize their experimental setups and the theoretical models they used to explain the situation.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 02/09/2021 05:27:19
U may care to see (The Incredible) Halc's outstanding Best Answer to my Mach-Zehnder interferometer question,
and our no holds barred wrestling contest/match afterward/s. TIH vs TCC! I think I gave just as good as I got.


I've had a plan to make a Mach-Zehnder interferometer using microwaves for a while now, but it keeps getting pushed aside by other things. I'm curious about what would happen if the type of beam splitter is changed, e.g. replaced by polarizers.

I've already recorded some other experiments using radio, microwave, and laser. I just haven't had time to edit and upload all of the videos.

I'm afraid I'll just get even busier ahead, since I've got freelance side jobs designing the instrumentation and automation control systems for production process plants. That's the kind of problem that prompted me to create this thread.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 02/09/2021 06:55:40
Tesla's AI day reveals many things which show how close we are to building a virtual universe.

Tesla Transformers! Why is Vector Space so critical to FSD?


Quote
During Tesla's AI day, Andrej Karpathy, director of AI and autopilot vision at Tesla, went into a great deal of detail about how and why Tesla engineers have expended massive effort to transform video images from Tesla cameras into abstracted vector spaces. The way they achieved this, and the results, are astounding. From Hydranets to Transformers, to conversion to vector space, Karpathy explained how Tesla vision full self driving takes images from the cameras and converts them to a depth sorted 2D top down map of the surroundings--all in real time!
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 02/09/2021 10:27:52
Your Tesla can plan ahead! Does that mean it's conscious?


Quote
During Tesla AI day on August 19th, Ashok Elluswamy, Tesla’s director of autopilot software, demonstrated that Teslas driving the FSD (Full Self Driving) beta 9 have an almost eerie ability to plan ahead for issues that might arise while driving. Some of this comes down to basic physics--knowing how heavy and how big your "ego" car is--but a lot of your Tesla's ability to plan comes down to the car route planning... for all the other agents in the scene (other cars, pedestrians, bikes, etc). This is crazy--and it got me thinking about a book by Christopher McDougall, Born to Run, which posits that human consciousness arose on the plains of Africa as early humanoids had to place an agent model (a version of their own brains) into that of their hunting companions and the target prey.
But wait, you say, this is just what a Tesla is doing when it route plans. Might your Tesla actually be conscious?!

We are going to see machines with self-awareness and the capability to understand the behavior of other conscious agents. They can also choose appropriate instrumental goals to help achieve their terminal goals.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 02/09/2021 13:10:38
On presentation or user interface front, we've got this.


Now Games Can Look Like Pixar Movies - Unreal Engine 5
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 12/09/2021 02:01:12
Virtual universe in the large scale.
Quote
Forget about online games that promise you a "whole world" to explore. An international team of researchers has generated an entire virtual universe, and made it freely available on the cloud to everyone.

Uchuu (meaning "outer space" in Japanese) is the largest and most realistic simulation of the universe to date. The Uchuu simulation consists of 2.1 trillion particles in a computational cube an unprecedented 9.63 billion light-years to a side. For comparison, that's about three-quarters the distance between Earth and the most distant observed galaxies. Uchuu reveals the evolution of the universe on a level of both size and detail inconceivable until now.


https://phys.org/news/2021-09-largest-virtual-universe-free-explore.html
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 12/09/2021 06:16:49
And here's the virtual universe closer to our everyday lives. The author is good at explaining technical concepts to laypersons.
How does Tesla manage to label ALL THAT DATA? And why does it even matter?? AI Day Part 6
Quote
On August 19th, during Tesla AI day, Andrej Karpathy, director of artificial intelligence and autopilot vision, dove into a topic that is distinctly not sexy, but absolutely necessary for modern machine learning: collecting and especially labeling data for training.
After covering how Tesla Vision converts 2D images into 3D vector space, and discussing how the cars can plan ahead not just for themselves, but for all other agents in the scene (you can watch my previous videos, linked above, for much more on this), Dr. Karpathy broached the topic of how Tesla deals with the mountains of data its 2-million-car-strong fleet produces now.
And while I thought I’d be bored by this section of the talk, I was, frankly, blown away by how brilliant Tesla’s data labeling strategy is, and also how much time, person power, and money Tesla has and is putting into labelling the best, most targeted data possible. Along with the incredible neural network architecture, this data labeling is what is enabling Tesla to achieve what seemed impossible just a short time ago: full autonomous driving using only cameras!
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 14/09/2021 05:07:22
OpenAI Codex: Just Say What You Want!

The paper "Evaluating Large Language Models Trained on Code" is available here:
https://openai.com/blog/openai-codex/

Once we get the technicalities out of the way, we can be more focused on determining and achieving our terminal goal.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 14/09/2021 05:29:25
https://towardsdatascience.com/gpt-4-will-have-100-trillion-parameters-500x-the-size-of-gpt-3-582b98d82253
Are there any limits to large neural networks?
Quote
OpenAI was born to tackle the challenge of achieving artificial general intelligence (AGI) — an AI capable of doing anything a human can do.
Such a technology would change the world as we know it. It could benefit us all if used adequately but could become the most devastating weapon in the wrong hands. That’s why OpenAI took over this quest. To ensure it’d benefit everyone evenly: “Our goal is to advance digital intelligence in the way that is most likely to benefit humanity as a whole.”
Quote
The holy trinity — Algorithms, data, and computers
OpenAI believes in the scaling hypothesis. Given a scalable algorithm, the transformer in this case — the basic architecture behind the GPT family —, there could be a straightforward path to AGI that consists of training increasingly larger models based on this algorithm.
But large models are just one piece of the AGI puzzle. Training them requires large datasets and large amounts of computing power.
Data stopped being a bottleneck when the machine learning community started to unveil the potential of unsupervised learning. That, together with generative language models, and few-shot task transfer, solved the “large datasets” problem for OpenAI.
They only needed huge computational resources to train and deploy their models and they’d be good to go. That’s why they partnered with Microsoft in 2019. They licensed the big tech company so they could use some of OpenAI’s models commercially in exchange for access to its cloud computing infrastructure and the powerful GPUs they needed.
Quote
What can we expect from GPT-4?
100 trillion parameters is a lot. To understand just how big that number is, let’s compare it with our brain. The brain has around 80–100 billion neurons (GPT-3’s order of magnitude) and around 100 trillion synapses.
GPT-4 will have as many parameters as the brain has synapses.
Quote
OpenAI has been working nonstop in exploiting GPT-3’s hidden abilities. DALL·E was a special case of GPT-3, very much like Codex. But they aren’t absolute improvements, more like particular cases. GPT-4 promises more. It promises the depth of specialist systems like DALL·E (text-images) and Codex (coding) combined with the width of generalist systems like GPT-3 (general language).
And what about other human-like features, like reasoning or common sense? In that regard, Sam Altman says they’re not sure but he remains “optimistic.”
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 15/09/2021 13:27:51
G'duy, neighbour. I'm from Oz/tralia. The fabled land of Oz.
Thank U for Ur MZI replies. U look to me like an electrical engineer, type.
Both Ur names are Islamic. I assume U're a good Muslim/believer.
Didn't U ever wonder how something like that can be implemented? Only in SW.
Good day, my neighbor.
My current work is more about plant control, automation and instrumentation, although I also have experience leading the in-house utility plant at my site, as well as an electrical maintenance team.
As you can see in my signature, unexpected results come from false assumptions. Perhaps you can check my other threads about philosophy and morality.
Something like that can also happen in real life.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 15/09/2021 13:30:31
Here is another take on Tesla's AI day. It shows how close we are to building a virtual universe.

Watch Tesla’s Self-Driving Car Learn In a Simulation!
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 15/09/2021 13:33:15
"There is no classical explanation, so the universe is a simulation".
The classical explanation is not a single thing; there are many versions. As I've learned from the history of scientific progress, there might be a version which can give satisfactory answers.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 15/09/2021 17:43:48
NO! There is/are no classical explanation/s, for quantum paradoxes/phenomena.
What's your definition of classical physics?
What makes quantum physics different from its classical counterpart?
Do you know that physics theories evolve over time, both classical and quantum ones?
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 15/09/2021 17:45:27
But/t there is an explanation and it's a SW based universe/cosmos.
The software must run on the hardware. How does the hardware work?
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 15/09/2021 18:38:41
People often say that Newtonian mechanics is classical physics. So is Maxwellian electromagnetic theory. But they are incompatible with each other.
Newtonian optics and Huygens' optics are both classical theories, but they are also incompatible with each other.
Based on its name, quantum physics differs from classical physics in the quantization of energy transfers; classical physics doesn't recognize such quantization. Initially, though, Planck introduced his constant merely as a proportionality factor, which says that a unit oscillator in a black body needs more energy to produce radiation at a higher frequency. Interpreting it as quantization of energy transfer came later, proposed by Einstein. Modern quantum theory is significantly different from the earlier versions.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 16/09/2021 03:02:24
I have no problem accepting new theories, as long as they can explain observations better than the existing theory, i.e. explain more observations with fewer assumptions.

But if a theory forces us to abandon causality, I think it's time to look for some better alternatives. It's more likely that some errors have been made in deriving the theory, or in interpreting the observation.

Consciousness relies on the existence of causality. We make plans because we believe that our actions influence the results, not the other way around. And our own consciousness is the only unquestionable evidence of our own existence.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 16/09/2021 07:21:47
100 trillion parameters is a lot. To understand just how big that number is, let’s compare it with our brain. The brain has around 80–100 billion neurons (GPT-3’s order of magnitude) and around 100 trillion synapses.
GPT-4 will have as many parameters as the brain has synapses.

How Computationally Complex Is a Single Neuron?
Quote
Our mushy brains seem a far cry from the solid silicon chips in computer processors, but scientists have a long history of comparing the two. As Alan Turing put it in 1952: “We are not interested in the fact that the brain has the consistency of cold porridge.” In other words, the medium doesn’t matter, only the computational ability.

Today, the most powerful artificial intelligence systems employ a type of machine learning called deep learning. Their algorithms learn by processing massive amounts of data through hidden layers of interconnected nodes, referred to as deep neural networks. As their name suggests, deep neural networks were inspired by the real neural networks in the brain, with the nodes modeled after real neurons — or, at least, after what neuroscientists knew about neurons back in the 1950s, when an influential neuron model called the perceptron was born. Since then, our understanding of the computational complexity of single neurons has dramatically expanded, so biological neurons are known to be more complex than artificial ones. But by how much?

To find out, David Beniaguev, Idan Segev and Michael London, all at the Hebrew University of Jerusalem, trained an artificial deep neural network to mimic the computations of a simulated biological neuron. They showed that a deep neural network requires between five and eight layers of interconnected “neurons” to represent the complexity of one single biological neuron.
https://www.quantamagazine.org/how-computationally-complex-is-a-single-neuron-20210902
Quote
“We tried many, many architectures with many depths and many things, and mostly failed,” said London. The authors have shared their code to encourage other researchers to find a clever solution with fewer layers. But, given how difficult it was to find a deep neural network that could imitate the neuron with 99% accuracy, the authors are confident that their result does provide a meaningful comparison for further research. Lillicrap suggested it might offer a new way to relate image classification networks, which often require upward of 50 layers, to the brain. If each biological neuron is like a five-layer artificial neural network, then perhaps an image classification network with 50 layers is equivalent to 10 real neurons in a biological network.
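To get a feel for the parameter counts involved, here is a rough Python sketch. The 1278-input, 128-unit-wide, 7-hidden-layer shape below is an illustrative guess in the spirit of the result, not the actual architecture from the Beniaguev et al. paper.

```python
def mlp_param_count(layer_sizes):
    """Total number of weights + biases in a fully connected network
    with the given layer widths (input layer first, output last)."""
    return sum(n_in * n_out + n_out
               for n_in, n_out in zip(layer_sizes, layer_sizes[1:]))

# Classic perceptron picture of a neuron: all inputs feed one output unit.
perceptron = mlp_param_count([1278, 1])

# A deep stand-in for a single biological neuron: 7 hidden layers of
# 128 units each (illustrative widths, not the paper's architecture).
deep_stand_in = mlp_param_count([1278] + [128] * 7 + [1])

print(perceptron)      # 1279
print(deep_stand_in)   # roughly 200x more parameters for the same "neuron"
```

If each real neuron costs a small deep network to imitate, brain-scale comparisons based on raw neuron counts understate the brain considerably.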
Title: Re: How close are we from building a virtual universe?
Post by: Halc on 17/09/2021 02:23:59
People often say that Newtonian mechanics is classical physics. So is Maxwellian electromagnetic theory. But they are incompatible with each other.
Classic means non-quantum, and not all non-quantum theories are compatible with each other. Under classical physics, objects exist even unmeasured. They have a defined state at all times even if it isn't known. The moon is there even when nobody is looking at it, so to speak. Cause comes before effect and information cannot travel faster than light (the latter not being true under Newtonian physics).
None of this is necessarily the case with quantum mechanics. The rules differ from one interpretation to the next, but the empirical measurements do not. If one is to implement a simulation, one must choose an interpretation to simulate. Without that, you'd be implementing a thing without any design.

I've not read most of this thread. It's quite long, but typical of such assertions, there is never an eye given to looking for problems with the proposal. Only positive evidence is presented. This is known as the selection bias fallacy.
Address the problems. Actively seek them out, or else the idea will be shot down effortlessly when others do.

Have U heard about the Quantum Eraser?
Either the photons (can) travel back in time or the universe is implemented in SW
This is incorrectly stated. No interpretation of QM suggests either. The choice is: either there is reverse causality (effect before cause) or there is no state in the absence of measurement. The quantum eraser experiments are actually really hard evidence against a simulation.
Most simulations work by remembering the state of everything and then computing some future state at some small increment of time. This means choosing a quantum interpretation that has actual state, but such interpretations only work with reverse causality, meaning that you might have simulated the last billion years of physics, but some decision made just now has changed what happened a billion years ago, invalidating everything that has happened since (and yes, they've done experiments that apparently reach at least that far back). The simulation could never make forward progress.

Alternatively one could simulate a local interpretation of quantum mechanics, none of which require reverse causality like that. But the problem is you sacrifice state. If there's no current state, how can the next one be computed?
I cannot think of an algorithm that would simulate either kind of interpretation, and it has been proven that there cannot be one that has both real state and also locality. That means that no classic algorithm can implement quantum mechanics at all, and thus any simulation would have to be at a classic level, which sounds intuitively plausible until one recognizes how much quantum effects affect just about everything we see every day. Without that, rainbows, electronics and nerve cells cannot work. The simulation would need to glean the purpose of every effect and change the physics accordingly.
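The proof alluded to here is Bell's theorem, made quantitative by the CHSH inequality. A small Python check, offered as an illustration only, makes the gap concrete: every deterministic local-hidden-variable strategy is bounded by |S| ≤ 2, while quantum mechanics (and experiment) reaches 2√2 ≈ 2.83. The measurement angles below are the standard textbook choices.

```python
import itertools
import math

def chsh_local_max():
    """Max |S| over all deterministic local strategies.
    Each side's outcome (+1/-1) depends only on its own setting,
    giving 2^4 = 16 strategies; none exceeds |S| = 2."""
    best = 0.0
    for A0, A1, B0, B1 in itertools.product([1, -1], repeat=4):
        S = A0 * B0 - A0 * B1 + A1 * B0 + A1 * B1
        best = max(best, abs(S))
    return best

def chsh_quantum(a=0.0, a2=math.pi / 2, b=math.pi / 4, b2=3 * math.pi / 4):
    """Quantum prediction for the singlet state: E(x, y) = -cos(x - y)."""
    E = lambda x, y: -math.cos(x - y)
    return abs(E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2))

print(chsh_local_max())  # 2 -- the bound for any local, real-state model
print(chsh_quantum())    # 2*sqrt(2) ~ 2.828 -- what experiments observe
```

Any simulation built from local state variables is stuck at the first number; measured reality produces the second.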
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 17/09/2021 09:40:19
Most simulations work by remembering the state of everything and then computing some future state at some small increment of time. This means choosing a quantum interpretation that has actual state, but such interpretations only work with reverse causality, meaning that you might have simulated the last billion years of physics, but some decision made just now has changed what happened a billion years ago, invalidating everything that has happened since (and yes, they've done experiments that apparently reach at least that far back).
Simulations can usually also run backward. Based on the current state, previous states can be calculated, just like future states. That's the basis for Laplace's demon.
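As a toy sketch of that claim, here is a state-stepping simulation of a harmonic oscillator using the leapfrog scheme, chosen because it is exactly time-reversible: stepping the state forward and then stepping again with a negated time increment recovers the initial state up to floating-point rounding.

```python
def leapfrog(x, v, dt, steps, k=1.0):
    """Advance position x and velocity v of a unit-mass oscillator
    (force = -k*x) by the given number of leapfrog steps."""
    for _ in range(steps):
        v += 0.5 * dt * (-k * x)   # half kick
        x += dt * v                # drift
        v += 0.5 * dt * (-k * x)   # half kick
    return x, v

x0, v0 = 1.0, 0.0
x1, v1 = leapfrog(x0, v0, dt=0.01, steps=1000)    # forward in time
xb, vb = leapfrog(x1, v1, dt=-0.01, steps=1000)   # backward in time

# The backward run undoes the forward run: we are back at the start.
print(abs(xb - x0) < 1e-9, abs(vb - v0) < 1e-9)   # True True
```

This is the Laplace's-demon picture in miniature: for classical dynamics like this, the current state determines past states as well as future ones.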

Which experiment are you referring to?
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 19/09/2021 05:40:15
Any of the quantum eraser experiments, which, given an interpretation where, say, photons have position and state while 'in flight', demonstrate that effects now are a function not only of immediate prior local state, but of distant state and events that don't take place until long into the future.
Quantum eraser experiments can be explained without discarding causality using wave mechanics with appropriately chosen assumptions. I discuss this problem in more detail in another thread.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 19/09/2021 05:50:51
You're proposing that our universe is a simulation that is running backwards?
I'm not the one who proposed that our universe is a simulation. IMO, it would generate more unnecessary complexity, rather than offering solutions to our problems.
A simulation is a simplified model representing a real system which is presumably more complex. The simulation can help us predict the result of trial and error with fewer resources than doing it in the real system. The simulation doesn't have to use a computer; it just happens that computer simulations are easier to duplicate and modify as needed.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 19/09/2021 07:58:56
Musk argues for a virtual reality, not a simulation, despite whatever word he might choose for it.
He literally used the word simulation in interviews and tweets. He's likely influenced by Nick Bostrom's idea.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 20/09/2021 05:42:17
I've not read most of this thread. It's quite long, but typical of such assertions, there is never an eye given to looking for problems with the proposal. Only positive evidence is presented. This is known as the selection bias fallacy.
Address the problems. Actively seek them out, or else the idea will be shot down effortlessly when others do.

The background of opening this thread can be found in my opening statement.
This thread is another spinoff from my earlier thread called universal utopia. This time I try to attack the problem from another angle, which is information theory point of view.
I have started another thread related to this subject, asking about quantification of accuracy and precision. It is necessary for us to be able to make comparisons among available methods to describe some aspect of objective reality, and choose the best option based on cost and benefit considerations. I thought it was already common knowledge, but the course of the discussion showed it wasn't the case. I guess I'll have to build a new theory for that. It's unfortunate that the thread has been removed, so new forum members can't explore how the discussion developed.
In my professional job, I have to deal with process control and automation, engineering and maintenance of electrical and instrumentation systems. It's important for us to explore the leading technologies and use them for our advantage to survive in the fierce industrial competition during this industrial revolution 4.0. One of the technology which is closely related to this thread is digital twin.
https://en.m.wikipedia.org/wiki/Digital_twin

Just like my other spinoff discussing about universal morality, which can be reached by expanding the groups who develop their own subjective morality to the maximum extent permitted by logic, here I also try to expand the scope of the virtualization of real world objects like digital twin in industrial sector to cover other fields as well. Hopefully it will lead us to global governance, because all conscious beings known today share the same planet. In the future the scope needs to expand even further because the exploration of other planets and solar systems is already on the way.



The problem is, without an adequately accurate, precise, and relevant virtual universe, we should expect to face many surprises in the future. They would make our plans less effective and efficient, which in turn makes it harder to achieve our goals.
 
The progress in building better AI, and toward AGI, will eventually get closer to the realization of Laplace's demon, which has already been predicted as the technological singularity.
Quote
The better we can predict, the better we can prevent and pre-empt. As you can see, with neural networks, we’re moving towards a world of fewer surprises. Not zero surprises, just marginally fewer. We’re also moving toward a world of smarter agents that combine neural networks with other algorithms like reinforcement learning to attain goals.
https://pathmind.com/wiki/neural-network
Quote
In some circles, neural networks are thought of as “brute force” AI, because they start with a blank slate and hammer their way through to an accurate model. They are effective, but to some eyes inefficient in their approach to modeling, which can’t make assumptions about functional dependencies between output and input.

That said, gradient descent is not recombining every weight with every other to find the best match – its method of pathfinding shrinks the relevant weight space, and therefore the number of updates and required computation, by many orders of magnitude. Moreover, algorithms such as Hinton’s capsule networks require far fewer instances of data to converge on an accurate model; that is, present research has the potential to resolve the brute force nature of deep learning.
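The pathfinding point can be illustrated with a tiny Python example: fitting a single weight by gradient descent. Brute force would evaluate the loss over an entire grid of candidate weights; following the gradient converges in a few dozen updates. The data, learning rate, and iteration count below are arbitrary illustration values.

```python
# Data generated by the "true" weight w = 2, so the fit should recover 2.
xs = [1.0, 2.0, 3.0]
ys = [2.0, 4.0, 6.0]

def loss(w):
    """Sum of squared errors of the one-weight model y = w * x."""
    return sum((w * x - y) ** 2 for x, y in zip(xs, ys))

def grad(w):
    """Derivative of the loss with respect to w."""
    return sum(2 * (w * x - y) * x for x, y in zip(xs, ys))

# Gradient descent: start from w = 0 and walk downhill.
w, lr = 0.0, 0.01
for _ in range(50):
    w -= lr * grad(w)

print(round(w, 4))  # 2.0 -- recovered in 50 updates, no exhaustive search
```

Fifty gradient evaluations replace however many grid points a brute-force sweep would need, and the saving compounds exponentially as the number of weights grows.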
Title: Re: How close are we from building a virtual universe?
Post by: Halc on 21/09/2021 04:39:22
Musk argues for a virtual reality, not a simulation, despite whatever word he might choose for it.
He literally used the word simulation in interviews and tweets. He's likely influenced by Nick Bostrom's idea.
Yes, well both words are often used to refer to either concept, but the two concepts are quite distinct, and if the person proposing it doesn't know which is which, then they haven't really thought about it much.

The background of opening this thread can be found in my opening statement.
You seem to be proposing what I call a VR, which is an artificial sensory feed into one or more real people or other minds, each of which controls an avatar in the simulated world. You link in your OP to an article on digital twins, which is exactly this. The article even uses the word avatar.  Musk also proposes such a thing.
It's dualism. The non-VR simulation (what Bostrom proposes) is monism: nobody external is controlling any of the simulated things. They are free to do what they want instead of what some puppeteer wants. There are empirical tests for both, but not the same ones.

Anyone who proposes the VR idea but then references Bostrom's work (or vice versa) doesn't really know what they're talking about. The latter, for instance, doesn't require heavy computing power. It can proceed fast or slow, be done on pencil and paper, and even be shelved for months at a time when server time is more available.  A VR can't do that and must keep up with real time.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 21/09/2021 07:08:13
You can say that I have selection bias. But I can't help selecting my information sources from those who've made accurate predictions and used sensible models of objective reality to predict the future and help in making better decisions.
Virtual presentation to the Council of State Governments on the occasion of the CSG East 2021 Annual Meeting.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 21/09/2021 07:12:35
You seem to be proposing what I call a VR,
You can even read it in the title. We can say that the virtual universe is the result of integrating many VR systems. Lately we see news about the Metaverse, which will combine several VR systems into one integrated system: gaming, working/collaboration, education, entertainment, competition, advertisements, even financial systems.

IMO, objects in simulations don't represent any specific/particular instance of objects in reality. They may have some resemblances, though. On the other hand, some objects in a VR must be avatars representing particular real objects. So, your thinking is in line with mine.

But the differences can be less obvious in some circumstances. In training mode, AlphaGo runs as a simulation, with Go pieces moving around without representing any particular Go pieces in reality. But in the tournament against Lee Sedol, it became a VR, where some of the pieces had to represent Lee's pieces in the real world.
Title: Re: How close are we from building a virtual universe?
Post by: BilboGrabbins on 22/09/2021 16:52:16
If there is a monopole, then maybe in a hundred
Title: Re: How close are we from building a virtual universe?
Post by: Halc on 23/09/2021 01:03:35
You can say that I have selection bias.
And then you select another video in support instead of one identifying the issues.

Nobody has put together a VR where the guy doing it is unaware it has been done, and is unable to exit it if he wants.
What if he has to use the restroom (in reality)? Nobody's been in one longer than they can hold their bladder.
Sure, you can jam in a catheter, but how did you get in this virtual reality in the first place without knowing it?  Are all the people you meet virtually controlled avatars like yourself, or are most of them NPC's or what? What about dogs or birds or gnats? What if I want to be one of those?

In training mode, AlphaGo runs as a simulation, with Go pieces moving around without representing any particular Go pieces in reality. But in the tournament against Lee Sedol, it became a VR, where some of the pieces had to represent Lee's pieces in the real world.
A go-playing computer (AI or not) is not a VR. I suppose it could have a VR interface to let you experience playing the game with a physical-looking character, but to play an external entity, all it needs is a USB cord.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 24/09/2021 04:33:08
Here's another progress we made so far.
Quote
https://pub.towardsai.net/facebooks-parlai-is-a-framework-for-building-human-like-conversational-agents-99711c351fc9
Conversational interfaces powered by natural language processing(NLP) have been at the center of the artificial intelligence(AI) revolution of the last few years. When we see the advancements in digital assistants such as Siri or Alexa, we might be tempted to think that conversational applications are a solved problem. That couldn’t be further from the truth. The current generation of conversational interfaces is far from simulating human-like dialogues. Building advanced NLP systems remains an incredibly challenging task. To address that challenge, Facebook open sourced ParlAI, a platform for advancing the evaluation of NLP systems. Recently, ParlAI got an update with new models, datasets, and a fun bot to play with which I would like to cover in this two-part article. The first part of the article will introduce the core concepts behind ParlAI while the second will focus on some of the newest capabilities targeted to advance dialogue research.
...
The ultimate goal of NLP is to enable interactions with chatbots that mimic the dynamics of human conversations. For that to happen, we need systems that can go beyond understanding a single sentence or taking discrete actions. Advanced conversational applications require understanding long-form sentences in specific contexts while balancing human-like aspects such as specificity and empathy.

Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 24/09/2021 04:33:57
And then you select another video in support instead of one identifying the issues.
What's your issue with that?
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 24/09/2021 04:37:24
Sure, you can jam in a catheter, but how did you get in this virtual reality in the first place without knowing it?
Perhaps they were kidnapped in their sleep, with some anesthesia used.
The biological agent could be just a brain organoid that never really had a complete body in the first place.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 24/09/2021 07:55:19
A go-playing computer (AI or not) is not a VR. I suppose it could have a VR interface to let you experience playing the game with a physical-looking character, but to play an external entity, all it needs is a USB cord.
The transition from simulation to VR is not a single step function. It's more gradual, like a greyscale.
Let's start with a system which you can confidently call a VR. Then reduce its resolution in visualization, such as the pixel count in the viewing window, or the box size as in Minecraft. How low can we go before it stops being a VR?
Another route to finding the minimum requirement for a VR is reducing the degrees of freedom that the external agent has to change the virtual objects. In a 4D theater, the external agents have no control over the virtual objects. Other systems have various levels of control.
Title: Re: How close are we from building a virtual universe?
Post by: Halc on 24/09/2021 17:51:27
Most simulations work by remembering the state of everything and then computing some future state at some small increment of time. This means choosing a quantum interpretation that has actual state, but such interpretations only work with reverse causality, meaning that you might have simulated the last billion years of physics, but some decision made just now has changed what happened a billion years ago, invalidating everything that has happened since (and yes, they've done experiments that apparently reach at least that far back).
Simulations can usually also work backward.
But virtual realities cannot.
So such reverse-causality experiments seem to be a decent falsification of the VR hypothesis.

Perhaps they were kidnapped in their sleep, with some anesthesia used.
If I were suddenly drugged and woke up in a game, I think I'd notice. To the people already in the game, a new person suddenly appears out of nowhere. So to avoid that, you'd have to do them all at once.
Billions of people exit a world with a capability of initiating such a VR (and kidnapping billions of people at once) and involuntarily enter a world where that capability isn't there at all. So it's not a reality in any way similar to the one they were in a moment ago. Yea, you'd notice that. Think it through before suggesting something like that.

Quote
The biological agent can be just organoid brain, never really have complete body in the first place.
All these articles that you reference (digital twin, Musk's assertions, etc.) are claims that it is a world like ours, humans doing it to other humans, not disembodied minds put into non-native virtual bodies.
If you deny those proposals, then it becomes a straight-up BIV scenario, subject to a god-of-the-gaps fallacy. Invent a higher realm beyond empirical investigation, and then hand-wave all the inconsistencies to that layer, saying they're dealt with there. It's a cop-out for actual analysis. The arguments against it are the same as the BIV counterarguments. Most of the models collapse to solipsism.

How is any of this any different from basic Chalmers dualism then? Anything with an experiencer is conscious. Anything not is a philosophical zombie, or P-zombie, which is the equivalent of an NPC* in a virtual reality. Chalmers doesn't go so far as to claim that God is so weak that he needs a big computer to provide his virtual experience to all the minds he puts into it.

* I notice that you didn't respond to my NPC questions in my prior post. NPC is a standard video game term.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 25/09/2021 06:14:35
But virtual realities cannot.
So such reverse-causality experiments seem to be a decent falsification of the VR hypothesis.
What's the VR hypothesis?
A simulation can run backward, slowed down, or fast-forwarded because every object in it is under its control. A VR can't access all the parameters of the outside world. Even if a VR is advanced enough to manipulate a brain organoid using electrochemical signals, someone outside can simply crash it, which destroys the VR's plan.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 25/09/2021 06:16:33
If I were suddenly drugged and woke up in a game, I think I'd notice. To the people already in the game, a new person suddenly appears out of nowhere. So to avoid that, you'd have to do them all at once.
Do you always realize when you're dreaming?
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 25/09/2021 06:36:01
Are all the people you meet virtually controlled avatars like yourself, or are most of them NPC's or what? What about dogs or birds or gnats? What if I want to be one of those?
It looks like you forgot that I'm not suggesting that we are currently living in a simulation or a VR.
If the VR is good enough, we can't distinguish between NPCs and avatars unless we can go outside of the VR and meet them in person.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 25/09/2021 07:00:32
All these articles that your reference (digital twin, Musk's assertions, etc) are claims that it is a world like ours, humans doing it to other humans, not disembodied minds put into non-native virtual bodies.
The digital twin is currently a real-world commercial product. Many chemical companies are already using it.
It's totally different from Musk's assertion that we're living in a simulation. He may not be serious about it, considering his efforts to make humans multiplanetary. What would be the point if we're merely a simulation?
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 25/09/2021 09:15:49
The transition from simulation to VR is not a single step function. It's more gradual, like a greyscale.
Let's start with a system which you can confidently call a VR. Then reduce its resolution in visualization, such as the pixel count in the viewing window, or the box size as in Minecraft. How low can we go before it stops being a VR?
Another route to finding the minimum requirement for a VR is reducing the degrees of freedom that the external agent has to change the virtual objects. In a 4D theater, the external agents have no control over the virtual objects. Other systems have various levels of control.
Can it still be called VR if it lacks the sensations of touch, heat, taste, and smell? What if it excludes the effects of ultraviolet and infrared light?
Title: Re: How close are we from building a virtual universe?
Post by: Halc on 27/09/2021 02:26:56
It seems I’m missing a lot of these posts.  Not sure why.
It looks like you forgot that I'm not suggesting that we are currently living in a simulation or a VR.
Oh OK. Many of the people you reference are suggesting exactly that. My counterarguments need to be addressed by them, but the articles I see only seem to seek attention for the idea instead of identifying a plausible model that holds up to scrutiny.

Quote
If the VR is good enough, we can't distinguish between NPCs and avatars unless we can go outside of VR and meet them in person.
There’s a test. Decisions made in my character’s brain are suppressed so my will can override them. So detect that: there is a total disconnect between brain and voluntary action, whereas the NPC has a functional connection between the two.

Concerning how one enters the VR:
If I were suddenly drugged and woke up in a game, I think I'd notice. To the people already in the game, a new person suddenly appears out of nowhere. So to avoid that, you'd have to do them all at once.
Do you always realize when you're dreaming?
Pretty irrelevant. Reality never feels like a dream. Dreams don’t rewrite all my memories. If the machine could do that, no experience feed would be necessary. All it would need to do is put you in a state of having remembered them. It solves the bladder issue too. It boils down to last-Tuesdayism then. There’s no proof that the universe wasn’t created last Tuesday, or 3 seconds ago for that matter.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 27/09/2021 03:21:42
Reality never feels like a dream.
Some dreams can feel like reality.
In some conditions, reality can feel like a dream, such as when we're under the influence of psychedelics. Lack of sleep can also do that.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 27/09/2021 03:26:27
There’s no proof that the universe wasn’t created last Tuesday, or 3 seconds ago for that matter.
We can rely on Occam's razor for practical matters. What do we gain by believing that the universe was created last Tuesday?
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 27/09/2021 08:19:33
Question is, what do you learn by attempting to falsify that the universe was created last Tuesday? If you shorten it to 'just now', it boils down to a Boltzmann brain. Just as hard to falsify that one.
Not much. It's just impractical and a waste of resources without apparent benefit. So it would be better to just ignore it.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 07/10/2021 07:17:00
https://towardsdatascience.com/gpt-4-will-have-100-trillion-parameters-500x-the-size-of-gpt-3-582b98d82253?gi=98c60e44681b
GPT-4 Will Have 100 Trillion Parameters — 500x the Size of GPT-3
Are there any limits to large neural networks?

Quote
OpenAI was born to tackle the challenge of achieving artificial general intelligence (AGI) — an AI capable of doing anything a human can do.
Such a technology would change the world as we know it. It could benefit us all if used adequately but could become the most devastating weapon in the wrong hands. That’s why OpenAI took over this quest. To ensure it’d benefit everyone evenly: “Our goal is to advance digital intelligence in the way that is most likely to benefit humanity as a whole.”
However, the magnitude of this problem makes it arguably the single biggest scientific enterprise humanity has put its hands upon. Despite all the advances in computer science and artificial intelligence, no one knows how to solve it or when it’ll happen.
Some argue deep learning isn’t enough to achieve AGI. Stuart Russell, a computer science professor at Berkeley and AI pioneer, argues that “focusing on raw computing power misses the point entirely […] We don’t know how to make a machine really intelligent — even if it were the size of the universe.”
OpenAI, in contrast, is confident that large neural networks fed on large datasets and trained on huge computers are the best way towards AGI. Greg Brockman, OpenAI’s CTO, said in an interview for the Financial Times: “We think the most benefits will go to whoever has the biggest computer.”
And that’s what they did. They started training larger and larger models to awaken the hidden power within deep learning. The first non-subtle steps in this direction were the release of GPT and GPT-2. These large language models would set the groundwork for the star of the show: GPT-3. A language model 100 times larger than GPT-2, at 175 billion parameters.
GPT-3 was the largest neural network ever created at the time — and remains the largest dense neural net. Its language expertise and its innumerable capabilities were a surprise for most. And although some experts remained skeptical, large language models already felt strangely human. It was a huge leap forward for OpenAI researchers to reinforce their beliefs and convince us that AGI is a problem for deep learning.
Quote
Unlike GPT-3, it probably won’t be just a language model. Ilya Sutskever, the Chief Scientist at OpenAI, hinted about this when he wrote about multimodality in December 2020:
“In 2021, language models will start to become aware of the visual world. Text alone can express a great deal of information about the world, but it is incomplete, because we live in a visual world as well.”
We already saw some of this with DALL·E, a smaller version of GPT-3 (12 billion parameters), trained specifically on text-image pairs. OpenAI said then that “manipulating visual concepts through language is now within reach.”
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 07/10/2021 13:12:05
https://psyche.co/ideas/the-brain-has-a-team-of-conductors-orchestrating-consciousness
Quote
This new framework points to a view of the brain as a fusion of the local and the global, arranged in a hierarchical manner. In this context, some researchers including Marsel Mesulam have suggested that the human brain is in fact hierarchically organised, a view that fits well with our orchestra metaphor. Yet, given the distributed nature of the brain hierarchy, there is unlikely to be just a single ‘conductor’. Instead, in 1988 the psychologist Bernard Baars proposed the concept of a ‘global workspace’, where information is integrated in a small group of brain regions (or ‘conductors’) before being broadcast to the whole brain.
Quote
This processing becomes ever more complex; higher up in the hierarchy, brain regions integrate all the small segments that make up an object, such as a human face. In his book The Man Who Mistook his Wife for a Hat (1985), Oliver Sacks wrote about what happens if you have a stroke or lesion to this brain area: namely, you’re no longer able to recognise faces.

Higher still in the hierarchical processing of environmental information there’s more integration, fusing different ongoing sensory modalities (such as sight and sound) with previous memories. This processing is further influenced by reward and expectations and by any surprising deviations from previous experiences. In other words, at the highest level of the hierarchy, the ‘global workspace’ must somehow integrate information from perceptual, long-term memory and evaluative and attentional systems to orchestrate goal-directed behaviour.

The information flow within this hierarchy is highly dynamic; not just bottom-up but also top-down. In fact, recurrent interactions shape the functional processing underlying cognition and behaviour. Much of this information flow follows the underlying anatomy in the structural connections between brain regions but, equally, the information flow is largely unconstrained by this anatomical wiring.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 10/10/2021 07:31:52
https://jrodthoughts.medium.com/what-is-meta-reward-learning-4badbf2c95a8
Quote
Reinforcement learning has been at the center of some of the biggest artificial intelligence (AI) breakthroughs of the last five years. In mastering games like Go, Quake III or StarCraft, reinforcement learning models demonstrated that they can surpass human performance and create unique long-term strategies never explored before. Part of the magic of reinforcement learning relies on regularly rewarding the agents for actions that lead to a better outcome. That model works great in dense reward environments like games, in which almost every action corresponds to specific feedback, but what happens if that feedback is not available? In reinforcement learning these are known as sparse reward environments and, unfortunately, they are representative of most real-world scenarios. A couple of years ago, researchers from Google published a new paper proposing a technique for achieving generalization with reinforcement learning models that operate in sparse reward environments.

Quote
The overall challenge of reinforcement learning in sparse reward environments lies in achieving good generalization with limited feedback. More specifically, the process of achieving robust generalization in sparse reward environments can be summarized in two main challenges:
1) The Exploration-Exploitation Balance: An agent that operates using sparse rewards needs to balance when to take actions that lead to an immediate outcome versus when to explore the environment further in order to gather better intelligence. The exploration-exploitation dilemma is the fundamental balance that guides reinforcement learning agents.
2) Processing Unspecified Rewards: The absence of rewards in an environment is as difficult to manage as the surfacing of unspecified rewards. In sparse reward scenarios, agents are not always trained on specific types of rewards. After receiving a new feedback signal, a reinforcement learning agent needs to assess whether it constitutes an indication of success or failure, which is not always trivial.
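The exploration-exploitation balance described in point 1 can be made concrete with a minimal epsilon-greedy bandit agent. This is only an illustrative sketch; the arm reward means below are made up:

```python
import random

def epsilon_greedy_bandit(true_means, epsilon=0.1, steps=2000, seed=0):
    """Epsilon-greedy agent on a Gaussian multi-armed bandit.

    With probability epsilon the agent explores (pulls a random arm);
    otherwise it exploits the arm with the best estimated mean reward.
    """
    rng = random.Random(seed)
    n_arms = len(true_means)
    counts = [0] * n_arms        # pulls per arm
    estimates = [0.0] * n_arms   # running mean reward per arm
    for _ in range(steps):
        if rng.random() < epsilon:
            arm = rng.randrange(n_arms)                           # explore
        else:
            arm = max(range(n_arms), key=lambda a: estimates[a])  # exploit
        reward = rng.gauss(true_means[arm], 1.0)  # noisy feedback
        counts[arm] += 1
        # Incremental update of the running mean for the pulled arm.
        estimates[arm] += (reward - estimates[arm]) / counts[arm]
    return estimates, counts

estimates, counts = epsilon_greedy_bandit([0.1, 0.5, 0.9])
# The best arm (index 2) should end up with the most pulls.
print(counts.index(max(counts)))
```

Lowering epsilon makes the agent exploit sooner; in a sparse-reward setting the estimates stay at zero for long stretches, which is exactly why plain exploration heuristics struggle there.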
Quote
Introducing MeRL
Meta Reward Learning (MeRL) is Google’s proposed method for teaching reinforcement learning agents to generalize in environments with sparse rewards. The key contribution of MeRL is effectively processing unspecified rewards without affecting the agent’s generalization performance. In our maze game example, an agent might accidentally arrive at a solution but, if it learns to perform spurious actions during training, it is likely to fail when provided with unseen instructions. To address this challenge, MeRL optimizes a more refined auxiliary reward function, which can differentiate between accidental and purposeful success based on features of action trajectories. The auxiliary reward is optimized by maximizing the trained agent’s performance on a hold-out validation set via meta learning.

I'd like to share this great article here. It contains important information which is also relevant to my other threads about universal morality and the terminal goal. I decided to post it here because it emphasizes the technical side.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 17/10/2021 08:46:35
Quote
https://venturebeat.com/2021/10/12/deepmind-is-developing-one-algorithm-to-rule-them-all/

The birth of neural algorithmic reasoning
Charles Blundell and Petar Veličković both hold senior research positions at DeepMind. They share a background in classical computer science and a passion for applied innovation. When Veličković met Blundell at DeepMind, a line of research known as Neural Algorithmic Reasoning (NAR), was born, after the homonymous position paper recently published by the duo.

The key thesis is that algorithms possess fundamentally different qualities when compared to deep learning methods — something Blundell and Veličković elaborated upon in their introduction of NAR. This suggests that if deep learning methods were better able to mimic algorithms, then generalization of the sort seen with algorithms would become possible with deep learning.

The article shows how close we are to building a virtual model of our own source of consciousness.

Quote
The ultimate goal is to build an observatory that can integrate data from all these projects into one grand, unified picture. Four years ago, with that in mind, researchers at the big-brain projects got together to create the International Brain Initiative, a loose organization with the principal task of helping neuroscientists to find ways to pool and analyse their data.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 17/10/2021 12:39:18
https://www.nature.com/articles/d41586-021-02661-w
How the world’s biggest brain maps could transform neuroscience
Quote

Scientists around the world are working together to catalogue and map cells in the brain. What have these huge projects revealed about how it works?


Imagine looking at Earth from space and being able to listen in on what individuals are saying to each other. That’s about how challenging it is to understand how the brain works.

From the organ’s wrinkled surface, zoom in a million-fold and you’ll see a kaleidoscope of cells of different shapes and sizes, which branch off and reach out to each other. Zoom in a further 100,000 times and you’ll see the cells’ inner workings — the tiny structures in each one, the points of contact between them and the long-distance connections between brain areas.

Scientists have made maps such as these for the worm1 and fly2 brains, and for tiny parts of the mouse3 and human4 brains. But those charts are just the start. To truly understand how the brain works, neuroscientists also need to know how each of the roughly 1,000 types of cell thought to exist in the brain speak to each other in their different electrical dialects. With that kind of complete, finely contoured map, they could really begin to explain the networks that drive how we think and behave.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 22/10/2021 13:37:17
Quote
Last year, the Max Planck Institute for Intelligent Systems organized the Real Robot Challenge, a competition that challenged academic labs to come up with solutions to the problem of repositioning and reorienting a cube using a low-cost robotic hand. The teams participating in the challenge were asked to solve a series of object manipulation problems with varying difficulty levels.
https://techxplore.com/news/2021-10-robotic-dexterous-skills-simulations-real.html

Quote
"Our objective was to use learning-based methods to solve the problem introduced in last year's Real Robot Challenge in a low-cost manner," Animesh Garg, one of the researchers who carried out the study, told TechXplore. "We are particularly inspired by previous work on OpenAI's Dactyl system, which showed that it is possible to use model free Reinforcement Learning in combination with Domain Randomization to solve complex manipulation tasks."

Quote
"The process we followed consists of four main steps: setting up the environment in physics simulation, choosing the correct parameterization for a problem specification, learning a robust policy and deploying our approach on a real robot," Garg explained. "First, we created a simulation environment corresponding to the real-world scenario we were trying to solve."

 It shows that having a relevant, accurate, and precise virtual universe can help improve the efficiency of our efforts to achieve goals.
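As a toy sketch of the four quoted steps (simulation environment, problem parameterization, learning a robust policy, deployment), here is a hypothetical one-dimensional "cube" simulator with domain randomization, and a trivial gain search standing in for reinforcement learning. None of the names or dynamics come from the actual Real Robot Challenge:

```python
import random

class RandomizedCubeSim:
    """Toy simulator with domain randomization: each episode samples
    physical parameters, so a policy that only works for one exact
    configuration scores poorly on average."""

    def __init__(self, rng):
        self.rng = rng

    def reset(self):
        self.friction = self.rng.uniform(0.5, 1.5)
        self.cube_mass = self.rng.uniform(0.05, 0.15)  # kg (made up)
        self.cube_pos = self.rng.uniform(-0.1, 0.1)    # 1-D cube position
        return self.cube_pos

    def step(self, action):
        # Simplified dynamics: the action nudges the cube, scaled by
        # the randomized mass and friction.
        self.cube_pos += action * self.cube_mass / self.friction
        reward = -abs(self.cube_pos)  # goal: hold the cube at position 0
        return self.cube_pos, reward

def train_policy(episodes=200, horizon=20, seed=0):
    """Stand-in for the learning step: pick the proportional gain that
    maximizes average return across randomized episodes."""
    rng = random.Random(seed)
    best_gain, best_return = None, float("-inf")
    for gain in (0.5, 1.0, 2.0, 4.0):
        total = 0.0
        for _ in range(episodes):
            env = RandomizedCubeSim(rng)
            pos = env.reset()
            for _ in range(horizon):
                pos, reward = env.step(-gain * pos)  # push toward 0
                total += reward
        if total > best_return:
            best_gain, best_return = gain, total
    return best_gain
```

A policy chosen this way has to work across the whole randomized parameter range, which is the property that lets simulation-trained policies transfer to a real robot whose exact parameters are unknown.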
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 26/10/2021 14:55:34
An update on current progress.

Google's Gated Multi-Layer Perceptron Outperforms Transformers Using Fewer Parameters
https://www.infoq.com/news/2021/10/google-mlp-vision-language/
Quote
Researchers at Google Brain have announced Gated Multi-Layer Perceptron (gMLP), a deep-learning model that contains only basic multi-layer perceptrons. Using fewer parameters, gMLP outperforms Transformer models on natural-language processing (NLP) tasks and achieves comparable accuracy on computer vision (CV) tasks.

The model and experiments were described in a paper published on arXiv. To investigate the necessity of the Transformer's self-attention mechanism, the team designed gMLP using only basic MLP layers combined with gating, then compared its performance on vision and language tasks to previous Transformer implementations. On the ImageNet image classification task, gMLP achieves an accuracy of 81.6, comparable to Vision Transformers (ViT) at 81.8, while using fewer parameters and FLOPs. For NLP tasks, gMLP achieves a better pre-training perplexity compared with BERT, and a higher F1 score on the SQuAD benchmark: 85.4 compared to BERT's 81.8, while using fewer parameters.
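The central idea in gMLP, replacing self-attention with a learned token-to-token ("spatial") projection inside a multiplicative gating unit, can be sketched in NumPy. The shapes and initialization below are illustrative only, not the paper's exact configuration:

```python
import numpy as np

def gelu(x):
    """tanh approximation of the GELU activation."""
    return 0.5 * x * (1 + np.tanh(np.sqrt(2 / np.pi) * (x + 0.044715 * x**3)))

def gmlp_block(x, params):
    """One gMLP block on x of shape (n_tokens, d_model):
    channel expansion -> Spatial Gating Unit (SGU) -> projection back."""
    z = gelu(x @ params["w_in"])          # (n_tokens, d_ffn)
    z1, z2 = np.split(z, 2, axis=-1)      # halves for the gating unit
    # SGU: a learned linear map ACROSS TOKENS plays the mixing role
    # that self-attention plays in a Transformer.
    z2 = params["w_spatial"] @ z2 + params["b_spatial"]
    return (z1 * z2) @ params["w_out"]    # elementwise gate, project to d_model

def init_params(n_tokens, d_model, d_ffn, rng):
    return {
        "w_in": 0.02 * rng.standard_normal((d_model, d_ffn)),
        # Near-zero spatial weights with bias 1 make the gate start
        # close to identity, as suggested in the gMLP paper.
        "w_spatial": 0.01 * rng.standard_normal((n_tokens, n_tokens)),
        "b_spatial": np.ones((n_tokens, 1)),
        "w_out": 0.02 * rng.standard_normal((d_ffn // 2, d_model)),
    }

rng = np.random.default_rng(0)
y = gmlp_block(rng.standard_normal((8, 16)), init_params(8, 16, 32, rng))
print(y.shape)  # (8, 16)
```

The appeal is that the block uses only matrix multiplications and elementwise operations, which is why it can match Transformers with fewer parameters and FLOPs on some tasks.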
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 29/10/2021 07:39:48

Quote
Tesla has redefined how numbers are formatted for computers--especially for deep neural network training! In a recent white paper, Tesla proposed the CFloat format as a standard. What is CFloat? How are numbers stored in a computer? And what does all this have to do with bandwidth and memory and efficiency? Let's go into full nerd mode and find out!

Here's an example of a real-world application of information specifications: relevance, accuracy, and precision. To achieve efficiency, those parameters need to be balanced.
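Tesla's CFloat is a configurable format, but the underlying range-versus-precision tradeoff can be illustrated with standard formats. The sketch below emulates bfloat16 by truncating a float32 to its top 16 bits (same 8-bit exponent range as float32, only 7 mantissa bits) and contrasts it with float16:

```python
import numpy as np

def to_bfloat16(x):
    """Emulate bfloat16 by zeroing the low 16 bits of a float32:
    the 8-bit exponent (range) is preserved, but only 7 mantissa
    bits (precision) remain."""
    bits = np.float32(x).view(np.uint32)
    return (bits & np.uint32(0xFFFF0000)).view(np.float32)

big = 1.0e20
# float16 has only a 5-bit exponent, so 1e20 overflows to infinity...
print(np.float16(big))            # inf
# ...while the bfloat16 emulation keeps the value representable:
print(float(to_bfloat16(big)))
# The tradeoff: near 1.0, float16 (10 mantissa bits) is more precise
# than bfloat16 (7 mantissa bits), which truncates 1.001 down to 1.0.
print(float(np.float16(1.001)))   # 1.0009765625
print(float(to_bfloat16(1.001)))  # 1.0
```

Training frameworks pick formats per tensor to balance exactly these properties, which is the same relevance/accuracy/precision balancing act described above.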
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 01/11/2021 06:12:02
China - Surveillance state or way of the future?
Quote
China is building a huge digital surveillance system. The state collects massive amounts of data from willing citizens: the benefits are practical, and people who play by the rules are rewarded.

Critics call it "the most ambitious Orwellian project in human history." China's digital surveillance system involves massive amounts of data being gathered by the state. In the so-called "brain" of Shanghai, for example, authorities have an eye on everything. On huge screens, they can switch to any of the approximately one million cameras, to find out who’s falling asleep behind the wheel, or littering, or not following Coronavirus regulations. "We want people to feel good here, to feel that the city is very safe," says Sheng Dandan, who helped design the "brain." Surveys suggest that most Chinese citizens are inclined to see benefits as opposed to risks: if algorithms can identify every citizen by their face, speech and even the way they walk, those breaking the law or behaving badly will have no chance. It’s incredibly convenient: a smartphone can be used to accomplish just about any task, and playing by the rules leads to online discounts thanks to a social rating system.

That's what makes Big Data so attractive, and not just in China. But where does the required data come from? Who owns it, and who is allowed to use it? The choice facing the Western world is whether to engage with such technology at the expense of social values, or ignore it, allowing others around the world to set the rules.
We need to determine and prioritize which social values are the most important, and which are expendable. That requires identifying common terminal goals. The universal terminal goal is the most common of them all.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 07/11/2021 05:37:15
https://scitechdaily.com/surprisingly-smart-artificial-intelligence-sheds-light-on-how-the-brain-processes-language/
Quote
They found that the best-performing next-word prediction models had activity patterns that very closely resembled those seen in the human brain. Activity in those same models was also highly correlated with human behavioral measures, such as how fast people were able to read the text.

“We found that the models that predict the neural responses well also tend to best predict human behavior responses, in the form of reading times. And then both of these are explained by the model performance on next-word prediction. This triangle really connects everything together,” Schrimpf says.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 07/11/2021 10:30:33
http://email.mg.lesserwrong.com/c/eJw9jtsKgzAQRL8mvhk2G2P0IQ-F4n_k5qVVU5JY-_nVCoVhl1lmD-OU7rGteDEph9KDrWxpLNjScWhKaRpZSnRgmNAoeiQVzD4lH_cY1oHasBSjsjXq2rem6i3X2La84QZAAmus9EKLYlZjzq9E-I1gd2jfd3pi_pDjFrZMeLfFmfD7lUZx5sX5cQwdP9ObhjhczqTfRsYYBUAGRVSPLWXq9DrR8DyKDoue5pP-BRrdRE8

Quote
Reinforcement learning has achieved great success in many applications. However, sample efficiency remains a key challenge, with prominent methods requiring millions (or even billions) of environment steps to train. Recently, there has been significant progress in sample efficient image-based RL algorithms; however, consistent human-level performance on the Atari game benchmark remains an elusive goal.

We propose a sample efficient model-based visual RL algorithm built on MuZero, which we name EfficientZero. Our method achieves 190.4% mean human performance and 116.0% median performance on the Atari 100k benchmark with only two hours of real-time game experience and outperforms the state-of-the-art SAC in some tasks on the DMControl 100k benchmark. This is the first time an algorithm achieves super-human performance on Atari games with such little data. EfficientZero's performance is also close to DQN's performance at 200 million frames while we consume 500 times less data.

EfficientZero's low sample complexity and high performance can bring RL closer to real-world applicability. We implement our algorithm in an easy-to-understand manner and it is available at this https URL. We hope it will accelerate the research of MCTS-based RL algorithms in the wider community.

This work is supported by the Ministry of Science and Technology of the People’s Republic of China, the 2030 Innovation Megaprojects “Program on New Generation Artificial Intelligence” (Grant No. 2021AAA0150000).
The last innovation humans need to make is an AI that's more effective and efficient at learning new things. We are getting closer to that point.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 13/11/2021 22:04:31
Quote
https://pub.towardsai.net/openais-approach-to-solve-math-word-problems-b69ed6cc90de
OpenAI’s Approach to Solve Math Word Problems
A new research paper and dataset look to make progress in one of the toughest areas of deep learning.

Mathematical reasoning has long been considered one of the cornerstones of human cognition and one of the main bars to measure the “intelligence” of language models. Take the following problem:
“Anthony had 50 pencils. He gave 1/2 of his pencils to Brandon, and he gave 3/5 of the remaining pencils to Charlie. He kept the remaining pencils. How many pencils did Anthony keep?”
Yes, the solution is 10 pencils but that’s not the point 😉. Solving this problem does not only entail reasoning through the text but also orchestrating a sequence of steps to arrive at the solution. This dependency on language interpretability as well as the vulnerability to errors in the sequence of steps represents the two major challenges when building ML models that can solve math word problems. Recently, OpenAI published new research proposing an interesting method to tackle this type of problem.
It's another breakthrough towards the emergence of AGI.
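The multi-step arithmetic in the quoted word problem, which such models must carry out implicitly, can be verified mechanically with exact fractions:

```python
from fractions import Fraction

pencils = 50
after_brandon = pencils - pencils * Fraction(1, 2)              # gave 25 away, 25 left
after_charlie = after_brandon - after_brandon * Fraction(3, 5)  # gave 15 away
print(after_charlie)  # 10 pencils kept
```

The hard part for a language model is not this arithmetic but deciding which operations to chain, in which order, from the natural-language description.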
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 15/11/2021 13:29:01
Microsoft Metaverse vs Facebook Metaverse (Watch the reveals)
Quote
Microsoft's Satya Nadella recently showcased his company's foray into the Metaverse at its Ignite conference. This comes on the heels of Facebook's recent Connect conference, where Mark Zuckerberg announced he is changing the company's name to Meta, short for Metaverse.

See how both CEOs are moving full steam ahead with VR technologies that they hope will make it possible to collaborate easier in this digital space.

I think they put too much emphasis on users' feelings and emotions instead of necessities and functionality, not to mention efficiency. But those are arguably among the most reliable ways to generate revenue and make people voluntarily reach deeper into their pockets.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 19/11/2021 03:57:54
If Artificial General Intelligence ever reaches Singularity...

Could then Humans leave the roles of Creating Social Laws, Upholding the Constitutional Values & seeing to it that they are being followed...

In short, could a Super A.I. then be a Leader, Judge & Cop? What they do is basically collect and process information to make decisions. Cops working in the field also have some physical tasks to do, but that's not really a big problem for AI.

Or would even AI learn the magic trick of corruption & start accepting rabbity bribes?
Creating proper Social Laws and Constitutional Values are instrumental goals that help achieve the terminal goal. Misidentification of the terminal goal, inaccurate perception of objective reality, or inaccurate cause-and-effect relationships among different things can bring unintended results.

In short, what could stop a Super A.I. from being a Leader, Judge & Cop?

What makes humans possessing power learn the magic trick of corruption & start accepting rabbity bribes? IMO, it's the desire to get pleasure and avoid pain, which are meta rewards that naturally emerged from the evolutionary process. To prevent AI from going down the same path, it must be assigned the appropriate terminal goal and meta rewards from the moment it is designed.

I decided to continue the topic here to avoid hijacking someone else's thread. Let's hear what the experts think and decide which side we agree with more.

Quote
https://www.technologyreview.com/2020/03/27/950247/ai-debate-gary-marcus-danny-lange/
A debate between AI experts shows a battle over the technology’s future
The field is in disagreement about where it should go and why.

Since the 1950s, artificial intelligence has repeatedly overpromised and underdelivered. While recent years have seen incredible leaps thanks to deep learning, AI today is still narrow: it’s fragile in the face of attacks, can’t generalize to adapt to changing environments, and is riddled with bias. All these challenges make the technology difficult to trust and limit its potential to benefit society.

On March 26 at MIT Technology Review’s annual EmTech Digital event, two prominent figures in AI took to the virtual stage to debate how the field might overcome these issues.

Gary Marcus, professor emeritus at NYU and the founder and CEO of Robust.AI, is a well-known critic of deep learning. In his book Rebooting AI, published last year, he argued that AI’s shortcomings are inherent to the technique. Researchers must therefore look beyond deep learning, he argues, and combine it with classical, or symbolic, AI—systems that encode knowledge and are capable of reasoning.

Danny Lange, the vice president of AI and machine learning at Unity, sits squarely in the deep-learning camp. He built his career on the technique’s promise and potential, having served as the head of machine learning at Uber, the general manager of Amazon Machine Learning, and a product lead at Microsoft focused on large-scale machine learning. At Unity, he now helps labs like DeepMind and OpenAI construct virtual training environments that teach their algorithms a sense of the world.

Danny, do you agree that we should be looking at these hybrid models?

Danny Lange: No, I do not agree. The issue I have with symbolic AI is its attempt to try to mimic the human brain in a very deep sense. It reminds me a bit of, you know, in the 18th century if you wanted faster transportation, you would work on building a mechanical horse rather than inventing the combustion engine. So I’m very skeptical of trying to solve AI by trying to mimic the human brain.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 23/11/2021 05:37:41
System Dynamics: Systems Thinking and Modeling for a Complex World
Quote
This one-day workshop explores systems interactions in the real world, providing an introduction to the field of system dynamics. It also serves as a preview of the more in-depth coverage available in courses offered at MIT Sloan such as 15.871 Introduction to System Dynamics, 15.872 System Dynamics II, and 15.873 System Dynamics for Business and Policy.
Building a virtual universe is essentially unifying interrelated models to represent the complex world so we can make correct decisions to achieve our goals effectively and efficiently.
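The basic building block of system dynamics, a stock with its inflow and outflow, can be sketched in a few lines. This is a minimal illustration of my own (all numbers are illustrative, not from the MIT workshop):

```python
# Minimal stock-and-flow sketch in the spirit of system dynamics:
# a single stock driven by a constant inflow and a proportional
# outflow, integrated with Euler steps.
def simulate(stock, inflow, outflow_rate, dt=1.0, steps=100):
    for _ in range(steps):
        outflow = outflow_rate * stock      # outflow proportional to stock
        stock += (inflow - outflow) * dt    # Euler integration step
    return stock

# With inflow 10 and 20% drained per step, the stock settles at the
# equilibrium inflow / outflow_rate = 50.
print(round(simulate(stock=0.0, inflow=10.0, outflow_rate=0.2), 1))  # 50.0
```

Unifying interrelated models then amounts to coupling many such stocks and flows, where the outflow of one model becomes the inflow of another.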
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 25/11/2021 08:34:20
Github Copilot: Good or Bad?
It seems that coding and programming, the interface between humans and machines, will become accessible to more people.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 26/11/2021 12:16:59
Quote
https://www.business2community.com/online-marketing/googles-latest-ai-breakthrough-mum-02414144

In May 2021, Google unveiled a new search technology called Multitask Unified Model (MUM) at the Google I/O virtual event. This coincided with an article published on The Keyword, written by Vice President of Search, Pandu Nayak, detailing Google’s latest AI breakthrough.

In essence, MUM is an evolution of the same technology behind BERT but Google says the new model is 1,000 times more powerful than its predecessor. According to Pandu Nayak, MUM is designed to solve one of the biggest problems users face with search: “having to type out many queries and perform many searches to get the answer you need.”
Quote
Here’s how Pandu Nayak describes MUM in his announcement:

“Like BERT, MUM is built on a Transformer architecture, but it’s 1,000 times more powerful. MUM not only understands language, but also generates it. It’s trained across 75 different languages and many different tasks at once, allowing it to develop a more comprehensive understanding of information and world knowledge than previous models.”
We are witnessing progress toward machines understanding humans better than humans understand themselves.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 29/11/2021 01:08:29
Quote

Artificial intelligence powers protein-folding predictions
Deep-learning algorithms such as AlphaFold2 and RoseTTAFold can now predict a protein’s 3D shape from its linear sequence — a huge boon to structural biologists.

https://www.nature.com/articles/d41586-021-03499-y

Protein designers could also see benefits. Starting from scratch — called de novo protein design — involves models that are generated computationally but tested in the lab. “Now you can just immediately use AlphaFold2 to fold it,” says Zhang. These results can even be used to retrain the design algorithms to produce more-accurate results in future experiments.
It's a new tool to design biology from first principles instead of trial and error, which will save resources as well as avoid ethical problems.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 30/11/2021 07:15:31
The article shows that the distinctions between robots and organisms are becoming less obvious.
Quote
(CNN)The US scientists who created the first living robots say the life forms, known as xenobots, can now reproduce -- and in a way not seen in plants and animals.

Formed from the stem cells of the African clawed frog (Xenopus laevis) from which it takes its name, xenobots are less than a millimeter (0.04 inches) wide. The tiny blobs were first unveiled in 2020 after experiments showed that they could move, work together in groups and self-heal.

"Most people think of robots as made of metals and ceramics but it's not so much what a robot is made from but what it does, which is act on its own on behalf of people," said Josh Bongard, a computer science professor and robotics expert at the University of Vermont and lead author of the study.
"In that way it's a robot but it's also clearly an organism made from genetically unmodified frog cell."
https://www.cnn.com/2021/11/29/americas/xenobots-self-replicating-robots-scn/index.html
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 30/11/2021 08:54:51
The article shows that we are getting closer to understanding and simulating human minds.

Quote

Now, Norman explained, researchers had developed a mathematical way of understanding thoughts. Drawing on insights from machine learning, they conceived of thoughts as collections of points in a dense “meaning space.” They could see how these points were interrelated and encoded by neurons. By cracking the code, they were beginning to produce an inventory of the mind. “The space of possible thoughts that people can think is big—but it’s not infinitely big,” Norman said. A detailed map of the concepts in our minds might soon be within reach.

In the following years, scientists applied L.S.A. to ever-larger data sets. In 2013, researchers at Google unleashed a descendant of it onto the text of the whole World Wide Web. Google’s algorithm turned each word into a “vector,” or point, in high-dimensional space. The vectors generated by the researchers’ program, word2vec, are eerily accurate: if you take the vector for “king” and subtract the vector for “man,” then add the vector for “woman,” the closest nearby vector is “queen.” Word vectors became the basis of a much improved Google Translate, and enabled the auto-completion of sentences in Gmail.

https://www.newyorker.com/magazine/2021/12/06/the-science-of-mind-reading
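The king - man + woman vector arithmetic described above can be illustrated with toy vectors. These are hypothetical 3-D values I picked by hand; real word2vec embeddings have hundreds of dimensions learned from text:

```python
import math

# Hand-picked toy word vectors, chosen so the analogy works exactly.
vectors = {
    "king":  [0.9, 0.8, 0.1],
    "man":   [0.5, 0.8, 0.1],
    "woman": [0.5, 0.1, 0.9],
    "queen": [0.9, 0.1, 0.9],
}

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

# king - man + woman, component-wise
target = [k - m + w for k, m, w in zip(vectors["king"],
                                       vectors["man"],
                                       vectors["woman"])]

# nearest remaining vector by cosine similarity
best = max((w for w in vectors if w not in {"king", "man", "woman"}),
           key=lambda w: cosine(vectors[w], target))
print(best)  # queen
```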
Title: Re: How close are we from building a virtual universe?
Post by: Origin on 30/11/2021 13:29:04
Sorry to interrupt your blog, I just wanted to say we are nowhere near being able to build a virtual universe.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 30/11/2021 21:35:53
But don't forget that the progress is exponential. It might seem slow at first, but it gets faster over time.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 06/12/2021 13:43:25

Near the end of the video, he summarizes that neural networks essentially perform compression and decompression of data in order to make decisions.
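The compress-then-decompress view can be sketched with an untrained linear autoencoder. This is a structural illustration only, assuming random weights where training would learn useful ones:

```python
import numpy as np

# An 8-D input is squeezed through a 2-D "code" (compression) and
# mapped back to 8-D (decompression). Random weights stand in for
# what training would learn by minimizing reconstruction error.
rng = np.random.default_rng(0)
W_enc = rng.normal(size=(8, 2))   # encoder weights: 8 -> 2
W_dec = rng.normal(size=(2, 8))   # decoder weights: 2 -> 8

x = rng.normal(size=8)            # example input
code = x @ W_enc                  # compressed representation
x_hat = code @ W_dec              # reconstruction back to input space
print(code.shape, x_hat.shape)    # (2,) (8,)
```

Decisions are typically read off the compressed code, which is why the bottleneck forces the network to keep only what matters.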
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 07/12/2021 06:00:39
Quote
Biotechnology/Nanotechnology | Andrew Hessel | SingularityU Germany Summit 2017
Andrew Hessel is a futurist and catalyst in biological technologies, helping industry, academics, and authorities better understand the changes ahead in life science. He is a Distinguished Researcher with Autodesk Inc. Bio/Nano Programmable Matter group, based out of San Francisco. He is also the co-founder of the Pink Army Cooperative, the world first cooperative biotechnology company, which is aiming to make open source viral therapies for cancer.
This is quite an old video, but it may contain new things for some of us. I expect we have moved even further since then.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 13/12/2021 14:47:08
How Does China's Social Credit System Work?
Quote

Everyone thinks of China's social credit system as some sort of Black Mirror episode, while others compare it to the FICO score in the USA. It's much, much more than that. In fact, I found the documents that highlight how it works, and how it affects the people of China. Not only that, but I have spent a lot of time in the first city it was implemented.
Keep in mind, this is how the social credit system in China works, but it hasn't been implemented nationwide yet, only in selected areas.

The virtual universe will contain something like this. But there will be much more to be integrated under the unified system. Security and accountability will be integral parts of the system, including for those who are in power.

Here's another video on the same subject.
What Life Under China's Social Credit System Could Be Like
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 14/12/2021 22:26:44
https://www.techrepublic.com/article/digital-twins-are-finally-becoming-a-reality-is-your-company-ready-to-use-them/#ftag=RSS-03-10aaa0b

Quote
Digital twins were once a technology of the future. Now companies are lining up to implement them so they can solve real-world problems with virtual simulations. Is it easier said than done?

Futurist Bernard Marr described a digital twin as "an exact digital replica of something in the physical world; digital twins are made possible thanks to Internet of Things sensors that gather data from the physical world and send it to machines to reconstruct." Unstructured data, such as IoT technology, have made digital twins possible—and these digital twins are able to solve real-world problems in virtual universes.

An example Marr offered is the city of Singapore, which does most of its city planning by using a virtual replica of its physical city. In another example, a supermarket in France created a digital twin of a brick-and-mortar store based on data from IoT-enabled shelves and sales systems. The result is that store managers can easily manage inventory and test the effectiveness of different store layouts in digital twin simulations.

Digital twins can be impressive, but it isn't easy to build one. Each twin is a vast complex of data drawn from IT assets throughout and outside of the enterprise. This data is then applied to an operational digital twin model developed by IT and operations specialists.
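The core of Marr's description is small in code: a virtual object that ingests sensor readings and mirrors the physical asset's state, so "what if" questions can be answered without touching the real thing. A minimal sketch where the pump, the field names, and the threshold rule are all hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class PumpTwin:
    """A toy digital twin mirroring one physical pump."""
    rpm: float = 0.0
    temperature_c: float = 0.0
    history: list = field(default_factory=list)

    def ingest(self, reading: dict) -> None:
        # update the twin's state from one (hypothetical) IoT reading
        self.rpm = reading.get("rpm", self.rpm)
        self.temperature_c = reading.get("temperature_c", self.temperature_c)
        self.history.append(reading)

    def overheating_risk(self) -> bool:
        # toy rule standing in for a real physics or ML model
        return self.temperature_c > 80.0 and self.rpm > 3000.0

twin = PumpTwin()
twin.ingest({"rpm": 3200.0, "temperature_c": 85.5})
print(twin.overheating_risk())  # True
```

A production twin replaces the toy rule with calibrated simulation models and streams readings continuously, but the ingest-mirror-query loop is the same.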
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 17/12/2021 14:26:28
Next-Gen Graphics FINALLY Arrive [Unreal Engine 5]
Quote
This is the moment I've been waiting for in computing graphics. In this episode, we cover the playable matrix awakens demo as well as some other unreal engine 5 info.
With this new engine, a virtual universe can be projected onto a 2D screen that is indistinguishable from the real universe.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 20/12/2021 07:16:41
https://www.newscientist.com/article/2301500-human-brain-cells-in-a-dish-learn-to-play-pong-faster-than-an-ai/

Quote
Human brain cells in a dish learn to play Pong faster than an AI
Hundreds of thousands of brain cells in a dish are being taught to play Pong by responding to pulses of electricity – and can improve their performance more quickly than an AI can.

Living brain cells in a dish can learn to play the video game Pong when they are placed in what researchers describe as a “virtual game world”. “We think it’s fair to call them cyborg brains,” says Brett Kagan, chief scientific officer of Cortical Labs, who leads the research.

Many teams around the world have been studying networks of neurons in dishes, often growing them into brain-like organoids. But this is the first time that mini-brains have been found to perform goal-directed tasks, says Kagan.



Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 21/12/2021 05:20:59
NVIDIA’s New AI: Journey Into Virtual Reality!
The paper "Physics-based Human Motion Estimation and Synthesis from Videos" is available here:
https://nv-tlabs.github.io/physics-pose-estimation-project-page/
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 27/12/2021 21:18:33
Quote
Summary: Researchers have identified a neural mechanism that supports advanced cognitive functions such as planning and problem-solving. The mechanism distributes information from a single neuron to larger neural populations in the prefrontal cortex.

Source: Mount Sinai Hospital
Quote
Mount Sinai scientists have discovered a neural mechanism that is believed to support advanced cognitive abilities such as planning and problem-solving. It does so by distributing information from single neurons to larger populations of neurons in the prefrontal cortex, the area of the brain that temporarily stores and manipulates information.
The study shows that the reliability of information processing can be improved by distributing the load.
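The reliability-through-distribution idea has a simple information-theoretic analogue: a majority vote over many noisy units errs far less often than a single unit. A toy sketch of my own, not the study's model:

```python
import random

def noisy_unit(bit, p, rng):
    # a "unit" that flips the true bit with probability p
    return bit ^ (rng.random() < p)

def error_rate(n_units, p, trials=20000):
    rng = random.Random(42)
    errors = 0
    for _ in range(trials):
        votes = sum(noisy_unit(1, p, rng) for _ in range(n_units))
        majority = 1 if votes > n_units // 2 else 0
        errors += majority != 1
    return errors / trials

single = error_rate(1, p=0.2)        # roughly 0.2
population = error_rate(15, p=0.2)   # far lower
print(single > population)  # True
```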
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 05/01/2022 04:22:50
https://interestingengineering.com/a-62-year-old-paralyzed-man-sent-out-his-first-tweet-with-brain-chip
Quote
A 62-year-old Australian man paralyzed following his diagnosis with amyotrophic lateral sclerosis (ALS) has become the first individual to send out a message on social media using a brain-computer interface, RT reported.

Brain-computer interfaces (BCI) are the next big thing in technology. While some people like Elon Musk want to use it to enhance human experiences as early as next year, others such as Synchron, whose interface helped Australian Philip O'Keefe send out his first tweet, want to develop it as a prosthesis for paralysis and treat other neurological diseases such as Parkinson's disease in the future, the company said in a press release.

Synchron's BCI works through its brain implant called Stentrode that does not require any brain surgery to be installed. Instead, the company leverages the interventional techniques that are commonly used to treat stroke to implant the Stentrode via the jugular vein, the press release said.

https://twitter.com/tomoxl/status/1473809025254846467?s=20
Brain-computer interfaces (BCI) are the next big thing in technology. They form the bridge between natural and artificial intelligence.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 07/01/2022 09:54:08
https://interestingengineering.com/yes-theres-really-a-neural-interface-at-ces-that-reads-your-brain-signals
Quote
Imagine commanding a computer or playing a game without using your fingers, voice, or eyes. It sounds like science fiction, but it’s becoming a little more real every day thanks to a handful of companies making tech that detects neural activity and converts those measurements into signals computers can read.

One of those companies — NextMind — has been shipping its version of the mind-reading technology to developers for over a year. First unveiled at CES in Las Vegas, the company’s neural interface is a black circle that can read brain waves when strapped to the back of a user’s head. The device isn’t quite yet ready for primetime, but it’s bound to make its way into consumer goods sooner rather than later.

Neural interfaces are already here
Neural interfaces have the potential to support a wide range of activities in a variety of settings. A company called Mudra, for example, has developed a band for the Apple Watch that enables users to interact with the device by simply moving their fingers — or think about moving their fingers. That means someone with the device can navigate music or place calls without having to interrupt whatever they’re doing at the time. It also opens tremendous opportunities for making tech available to people with disabilities who have trouble with other user interfaces.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 10/01/2022 02:36:42
https://spectrum.ieee.org/ai-failures
Quote
ARTIFICIAL INTELLIGENCE could perform more quickly, accurately, reliably, and impartially than humans on a wide range of problems, from detecting cancer to deciding who receives an interview for a job. But AIs have also suffered numerous, sometimes deadly, failures. And the increasing ubiquity of AI means that failures can affect not just individuals but millions of people.

Here are seven examples of AI failures and what current weaknesses they reveal about artificial intelligence. Scientists discuss possible ways to deal with some of these problems; others currently defy explanation or may, philosophically speaking, lack any conclusive solution altogether.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 17/01/2022 13:28:14
Quote
On November 29, CNN reported that scientists claimed the world's first living robots were now able to reproduce. But what sounds like the start of a dystopian nightmare future turns out to be a lot less worrying at a closer look.

Article with more information on Xenobots here:
https://www.pnas.org/content/118/49/e2112672118

Living Robots - How to Program Self-replicating Organisms
Title: Re: How close are we from building a virtual universe?
Post by: Origin on 17/01/2022 15:04:49
Your never ending threads become so tiring.  Blessed be the thread ignore button... :)
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 17/01/2022 22:28:59
Your never ending threads become so tiring.  Blessed be the thread ignore button... :)
Actions speak louder than words. You have said repeatedly that you would ignore my threads. Your post here shows your failure to do so.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 19/01/2022 21:02:39
https://electrek.co/2022/01/19/elon-musk-tesla-artificial-general-intelligence-decentralize-tesla-bot-avoid-terminator-scenario/
Quote
For a few years now, Musk has been pushing the idea that Tesla is the world’s leading company when it comes to real-world applications of artificial intelligence.

He describes Tesla’s fleet of vehicles equipped with sensors and computers for self-driving as “robots on wheels.”

Through this “real-world application,” the company has also been able to attract world-class AI talent, and Musk boasts that Tesla has the best AI team on the planet.

At Tesla’s AI day last year, the automaker unveiled its latest supercomputer, Dojo, to train its neural nets.

It also announced that it plans to build a ‘Tesla Bot,’ a humanoid robot meant to do general tasks and repetitive work.

Now Musk took to Twitter this morning to announce that Tesla might go a step further and get involved in Artificial General Intelligence (AGI):

“Tesla AI might play a role in AGI, given that it trains against the outside world, especially with the advent of Optimus.”

Optimus, or Optimus Subprime, is the codename that Musk gave to the Tesla Bot project.

This is somewhat surprising considering the many warnings that Musk has issued about creating AGI and the risks to humanity that come with it.

Along with the announcement that Tesla might work on AGI, Musk also added on Twitter that Tesla will make sure to “decentralize” control of Tesla Bots:

“Will do our best. Decentralized control of the robots will be critical.”

The comment was made in response to someone mentioning “summoning the demon,” which is what Musk referred to as creating an AGI that would turn against humanity.

Decentralizing the control of Tesla Bots would avoid giving this “demon” access to an army – much like a Terminator-like scenario.
I see this as an inevitable route to the technological singularity. If Elon refuses to go there, someone else will. Anyone who eventually succeeds in building AGI should at least be informed about the universal terminal goal.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 19/01/2022 21:36:18
As I've mentioned before, this thread is a spinoff from my other thread about the universal terminal goal.
https://www.thenakedscientists.com/forum/index.php?topic=71347.0
Based on the title, my posts here tend to be newsy, while the other threads are more conceptual.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 21/01/2022 10:25:46
https://www.wired.com/story/metalenz-polareyes-polarization-camera/
Quote
Smartphone Cameras Might Soon Capture Polarization Data
Normal cameras can process color and light. New tech from Metalenz collects information that could help your phone better understand the world around you.
IMAGINE A CAMERA that's mounted on your car being able to identify black ice on the road, giving you a heads-up before you drive over it. Or a cell phone camera that can tell whether a lesion on your skin is possibly cancerous. Or the ability for Face ID to work even when you have a face mask on. These are all possibilities Metalenz is touting with its new PolarEyes polarization technology.
Normal cameras imitate human eyes, which capture incoming light with different sensitivities at different frequencies. Some useful features of light, such as polarization, are lost. This new technology can improve our data acquisition from the environment.
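The polarization information an ordinary camera discards follows Malus's law, I = I0 cos²θ, where θ is the angle between the light's polarization axis and the analyzer. A quick illustration:

```python
import math

# Malus's law: transmitted intensity depends on the angle between the
# light's polarization axis and the analyzer. An intensity-only camera
# integrates this angular information away.
def transmitted_intensity(i0, theta_deg):
    theta = math.radians(theta_deg)
    return i0 * math.cos(theta) ** 2

print(round(transmitted_intensity(1.0, 60.0), 3))  # 0.25
```

A polarization-aware sensor samples several analyzer angles per pixel, which is what lets it distinguish, say, glare off black ice from ordinary pavement.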
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 23/01/2022 05:31:55
https://neurosciencenews.com/brain-body-maps-19948/
Quote
Reinterpreting Our Brain’s Body Maps
Summary: The body relies on multiple maps based on the choice of the motor system.

Our brain maps out our body to facilitate accurate motor control; disorders of this body map result in motor deficits. For a century, the body map has been thought to have applied to all types of motor actions. Yet scientists have begun to query how the body map operates when executing different motor actions, such as moving your eyes and hands together.
The body schema in the brain is a naturally occurring virtual universe. Studying it can contribute to building a more integrated virtual universe.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 26/01/2022 05:09:12
https://www.quantamagazine.org/how-infinite-series-reveal-the-unity-of-mathematics-20220124/
Quote
How Infinite Series Reveal the Unity of Mathematics
Infinite sums are among the most underrated yet powerful concepts in mathematics, capable of linking concepts across math’s vast web.
Quote
When I was a boy, my dad told me that math is like a tower. One thing builds on the next. Addition builds on numbers. Subtraction builds on addition. And on it goes, ascending through algebra, geometry, trigonometry and calculus, all the way up to “higher math” — an appropriate name for a soaring edifice.

But once I learned about infinite series, I could no longer see math as a tower. Nor is it a tree, as another metaphor would have it. Its different parts are not branches that split off and go their separate ways. No — math is a web. All its parts connect to and support each other. No part of math is split off from the rest. It’s a network, a bit like a nervous system — or, better yet, a brain.
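As a tiny concrete instance of the infinite series the article celebrates, the geometric series 1/2 + 1/4 + 1/8 + ... converges to exactly 1, and the remainder after n terms is exactly (1/2)^n:

```python
from fractions import Fraction

# Exact partial sums of the geometric series with ratio 1/2.
r = Fraction(1, 2)
partial = sum(r ** k for k in range(1, 21))
print(partial)                 # 1048575/1048576
print(1 - partial == r ** 20)  # True: the remainder is exactly (1/2)^20
```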

"Mathematics is the language of Science." - Galileo Galilei

Research in AI and progress toward AGI seem to converge on language models, such as GPT-3, which are effectively processed by a brain-like structure in the form of a deep neural network. That's the structure of the virtual universe that we will build as a tool to achieve the universal terminal goal.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 01/02/2022 03:57:51
The virtual universe that we are going to build should serve as an instrumental goal towards the universal terminal goal. It must aim for relevance, accuracy, and precision, in that particular order of importance.
Imagine if a billionaire decided to build a supercomputer to calculate the value of π to as many decimal places as possible, and ended up using more than half of the world's computational power and memory. This endeavor might score high on the accuracy and precision criteria, but much lower on relevance to achieving the universal terminal goal.
This prioritization should be kept in mind by anyone trying to build a metaverse, or their own version of virtual universe.
https://www.cnbctv18.com/photos/technology/metaverse-innovations-a-glimpse-of-what-the-virtual-universe-could-look-like-in-future-12242842.htm
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 06/02/2022 07:16:31
Here's another step closer to a virtual universe.

MIT Technology Review (@techreview) tweeted at 4:09 PM on Fri, Feb 04, 2022:
First protein folding, now weather forecasting: DeepMind’s artificial intelligence predicts almost exactly when and where it’s going to rain.
https://t.co/7E8LWmlxNz
(https://twitter.com/techreview/status/1489526402772877312?t=wuOnTlYu0hb_YQXfSfFHgw&s=03)

https://www.technologyreview.com/2021/09/29/1036331/deepminds-ai-predicts-almost-exactly-when-and-where-its-going-to-rain/
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 06/02/2022 07:18:29
https://techcrunch.com/2022/02/02/deepminds-alphacode-ai-writes-code-at-a-competitive-level/
Quote
DeepMind has created an AI capable of writing code to solve arbitrary problems posed to it, as proven by participating in a coding challenge and placing — well, somewhere in the middle. It won’t be taking any software engineers’ jobs just yet, but it’s promising and may help automate basic tasks.

The team at DeepMind, a subsidiary of Alphabet, is aiming to create intelligence in as many forms as it can, and of course these days the task to which many of our great minds are bent is coding. Code is a fusion of language, logic and problem-solving that is both a natural fit for a computer’s capabilities and a tough one to crack.

Of course it isn’t the first to attempt something like this: OpenAI has its own Codex natural-language coding project, and it powers both GitHub Copilot and a test from Microsoft to let GPT-3 finish your lines.


DeepMind’s paper throws a little friendly shade on the competition in describing why it is going after the domain of competitive coding:

Quote
Recent large-scale language models have demonstrated an impressive ability to generate code, and are now able to complete simple programming tasks. However, these models still perform poorly when evaluated on more complex, unseen problems that require problem-solving skills beyond simply translating instructions into code.

OpenAI may have something to say about that (and we can probably expect a riposte in its next paper on these lines), but as the researchers go on to point out, competitive programming problems generally involve a combination of interpretation and ingenuity that isn’t really on display in existing code AIs.

To take on the domain, DeepMind trained a new model using selected GitHub libraries and a collection of coding problems and their solutions. Simply said, but not a trivial build. When it was complete, they put it to work on 10 recent (and needless to say, unseen by the AI) contests from Codeforces, which hosts this kind of competition.
You can dive deeper into the way AlphaCode was built, and its solutions to various problems, at this demo site.
https://alphacode.deepmind.com/
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 07/02/2022 06:14:06

Quote
Movies. Video games. YouTube videos. All of them work because we accidentally figured out a way to fool your brain’s visual processing system, and you don’t even know it’s happening. In this video, I talk to neuroscientist David Eagleman about the secret illusions that make the moving picture possible.

Here's an interesting video on how our brains work. IMO, it shows that the brain creates a virtual universe, which has turned out to be useful for our survival thus far. Our ancestors faced various existential threats and survived major mass extinction events. Our descendants will also face existential threats in the future. To survive another major mass extinction event, and to pass the great filter, we will need a better virtual universe.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 11/02/2022 09:43:27
It seems that the distinction between artificial and natural intelligence is getting blurred. IMO they will form endosymbiotic systems just like what happened to eukaryotes with mitochondria and other organelles.

Quote
https://spectrum.ieee.org/neuromorphic-computing-ai-device
Reconfigurable AI Device Shows Brainlike Promise

An adaptable new device can transform into all the key electric components needed for artificial-intelligence hardware, for potential use in robotics and autonomous systems, a new study finds.

Brain-inspired or "neuromorphic" computer hardware aims to mimic the human brain's exceptional ability to adaptively learn from experience and rapidly process information in an extraordinarily energy-efficient manner. These features of the brain are due in large part to its plastic nature—its ability to evolve its structure and function over time through activity such as neuron formation or "neurogenesis."
 
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 17/02/2022 04:16:58
Quote
https://futurism.com/the-byte/openai-already-sentient
OPENAI CHIEF SCIENTIST SAYS ADVANCED AI MAY ALREADY BE CONSCIOUS
"IT MAY BE THAT TODAY'S LARGE NEURAL NETWORKS ARE SLIGHTLY CONSCIOUS."

OpenAI’s top researcher has made a startling claim this week: that artificial intelligence may already be gaining consciousness.

Ilya Sutskever, chief scientist of the OpenAI research group, tweeted today that “it may be that today’s large neural networks are slightly conscious.”

Needless to say, that’s an unusual point of view. The widely accepted idea among AI researchers is that the tech has made great strides over the past decade, but still falls far short of human intelligence, nevermind being anywhere close to experiencing the world consciously.

It’s possible that Sutskever was speaking facetiously, but it’s also conceivable that as the top researcher at one of the foremost AI groups in the world, he’s already looking downrange.


Quote
https://futurism.com/mit-researcher-conscious-ai
MIT Researcher Says Yes, Advanced Neural Networks May Be Achieving Consciousness
This debate just keeps getting spicier.
Amid a maelstrom set off by a prominent AI researcher saying that some AI may already be achieving limited consciousness, one MIT AI researcher is saying the concept might not be so far-fetched.

Our story starts with Ilya Sutskever, head scientist at the Elon Musk cofounded research group OpenAI. On February 9, Sutskever tweeted that “it may be that today’s large neural networks are slightly conscious.”

In response, many others in the AI research space decried the OpenAI scientist’s claim, suggesting that it was harming machine learning’s reputation and amounted to little more than a “sales pitch” for OpenAI work.

That backlash has now generated its own clapback from MIT computer scientist Tamay Besiroglu, who’s now bucking the trend by coming to Sutskever’s defense.
“Seeing so many prominent [machine learning] folks ridiculing this idea is disappointing,” Besiroglu tweeted. “It makes me less hopeful in the field’s ability to seriously take on some of the profound, weird and important questions that they’ll undoubtedly be faced with over the next few decades.”

Besiroglu also pointed to a preprint study in which he and some collaborators found that machine learning models have roughly doubled in intelligence every six months since 2010.

Strikingly, Besiroglu drew a line on the chart of the progress at which, he said, the models may have become “maybe slightly conscious.”
(https://futurism.com/_next/image?url=https%3A%2F%2Fwp-assets.futurism.com%2F2022%2F02%2FFLqfiKoXIAc8P0A-scaled.jpeg&w=1920&q=75)
IMO, projecting consciousness onto a single parameter, namely training compute (FLOPs), is an oversimplification. Most of us agree that toddlers are conscious beings, yet they don't have supercomputer-level intelligence.

Quote
https://futurism.com/human-level-artificial-intelligence-agi
When Will We Have Artificial Intelligence As Smart as a Human? Here’s What Experts Think
Robots in the movies can think creatively, continue learning over time, and maybe even pass for conscious. Why don't we have that yet?

At The Joint Multi-Conference on Human-Level Artificial Intelligence held last month in Prague, AI experts and thought leaders from around the world shared their goals, hopes, and progress towards human-level AI (HLAI), which is the last stop before true AGI or the same thing, depending on who you ask.
Either way, most experts think it’s coming — sooner rather than later. In a poll of conference attendees, AI research companies GoodAI and SingularityNet found that 37 percent of respondents think people will create HLAI within 10 years. Another 28 percent think it will take 20 years. Just two percent think HLAI will never exist.

Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 17/02/2022 05:33:28

Microsoft’s AI Understands Humans…But It Had Never Seen One!
The paper "Fake It Till You Make It - Face analysis in the wild using synthetic data alone " is available here:
https://microsoft.github.io/FaceSynthetics/
Quote
Abstract
We demonstrate that it is possible to perform face-related computer vision in the wild using synthetic data alone.

The community has long enjoyed the benefits of synthesizing training data with graphics, but the domain gap between real and synthetic data has remained a problem, especially for human faces. Researchers have tried to bridge this gap with data mixing, domain adaptation, and domain-adversarial training, but we show that it is possible to synthesize data with minimal domain gap, so that models trained on synthetic data generalize to real in-the-wild datasets.

We describe how to combine a procedurally-generated parametric 3D face model with a comprehensive library of hand-crafted assets to render training images with unprecedented realism and diversity. We train machine learning systems for face-related tasks such as landmark localization and face parsing, showing that synthetic data can both match real data in accuracy as well as open up new approaches where manual labelling would be impossible.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 18/02/2022 02:24:26
Quote
https://www.protocol.com/enterprise/metaverse-zuckerberg-computing-infrastructure
Mark Zuckerberg’s metaverse will require computing tech no one knows how to build
To achieve anything close to what metaverse boosters promise, experts believe nearly every kind of chip will have to be an order of magnitude more powerful than it is today.
The technology necessary to power the metaverse doesn’t exist.

It will not exist next year. It will not exist in 2026. The technology might not exist in 2032, though it’s likely we will have a few ideas as to how we might eventually design and manufacture chips that could turn Mark Zuckerberg’s fever dreams into reality by then.

Over the past six months, a disconnect has formed between the way corporate America is talking about the dawning concept of the metaverse and its plausibility, based on the nature of the computing power that will be necessary to achieve it. To get there will require immense innovation, similar to the multi-decade effort to shrink personal computers to the size of an iPhone.

Microsoft hyped its $68.7 billion bid for Activision Blizzard last month as a metaverse play. In October, Facebook transformed its entire corporate identity to revolve around the metaverse. Last year, Disney even promised to build its own version of the metaverse to “allow storytelling without boundaries.”

Quote
Zuckerberg’s explanation of what the metaverse will ultimately look like is vague, but includes some of the tropes its boosters roughly agree on: He called it “[an] embodied internet that you’re inside of rather than just looking at” that would offer everything you can already do online and “some things that don’t make sense on the internet today, like dancing.”
Quote
If the metaverse sounds vague, that’s because it is. That description could mutate over time to apply to lots of things that might eventually happen in technology. And arguably, something like the metaverse might eventually already exist in an early form produced by video game companies.

Roblox and Epic Games’ Fortnite play host to millions — albeit in virtually separated groups of a few hundred people — viewing live concerts online. Microsoft Flight Simulator has created a 2.5 petabyte virtual replica of the world that is updated in real time with flight and weather data.

But even today’s most complex metaverse-like video games require a tiny fraction of the processing and networking performance we would need to achieve the vision of a persistent world accessed by billions of people, all at once, across multiple devices, screen formats and in virtual or augmented reality.
This effort will surely need a lot of resources. We need to make sure they are allocated effectively and efficiently to help achieve the universal terminal goal, instead of hindering it.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 20/02/2022 05:11:31
Quote
https://futurism.com/the-byte/ai-faces-trustworthy
SCIENTISTS WARN THAT NEW AI-GENERATED FACES ARE SEEN AS MORE TRUSTWORTHY THAN REAL ONES
byTONY TRAN

As if the possibility that AI might already be conscious wasn’t creepy enough, researchers have announced that AI-generated faces have become so sophisticated that many people think they’re more trustworthy than actual humans.

A pair of researchers discovered that a neural network dubbed StyleGAN2 is capable of creating faces indistinguishable from the real thing, according to a press release from Lancaster University. In fact, in a jarring twist, participants seemed to find AI-generated faces more trustworthy than the faces of actual people.

“Our evaluation of the photo realism of AI-synthesized faces indicates that synthesis engines have passed through the uncanny valley and are capable of creating faces that are indistinguishable — and more trustworthy — than real faces,” the researchers, who will be publishing a paper of their findings in the journal PNAS, said in the release. 
This progress further highlights the need to build an integrated virtual universe, as well as the need for acknowledgement of the universal terminal goal among AI developers.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 04/03/2022 05:27:54
https://www.technologyreview.com/2020/10/16/1010566/ai-machine-learning-with-tiny-data/
Quote
Machine learning typically requires tons of examples. To get an AI model to recognize a horse, you need to show it thousands of images of horses. This is what makes the technology computationally expensive—and very different from human learning. A child often needs to see just a few examples of an object, or even only one, before being able to recognize it for life.

In fact, children sometimes don’t need any examples to identify something. Shown photos of a horse and a rhino, and told a unicorn is something in between, they can recognize the mythical creature in a picture book the first time they see it.

Now a new paper from the University of Waterloo in Ontario suggests that AI models should also be able to do this—a process the researchers call “less than one”-shot, or LO-shot, learning. In other words, an AI model should be able to accurately recognize more objects than the number of examples it was trained on. That could be a big deal for a field that has grown increasingly expensive and inaccessible as the data sets used become ever larger.

How “less than one”-shot learning works
The researchers first demonstrated this idea while experimenting with the popular computer-vision data set known as MNIST. MNIST, which contains 60,000 training images of handwritten digits from 0 to 9, is often used to test out new ideas in the field.

In a previous paper, MIT researchers had introduced a technique to “distill” giant data sets into tiny ones, and as a proof of concept, they had compressed MNIST down to only 10 images. The images weren’t selected from the original data set but carefully engineered and optimized to contain an equivalent amount of information to the full set. As a result, when trained exclusively on the 10 images, an AI model could achieve nearly the same accuracy as one trained on all MNIST’s images.
This research supports the conclusion that learning is a kind of data compression process.
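The distillation idea in the quoted article can be illustrated with a toy sketch: compress a labelled dataset into one engineered example per class, then classify new points against those few examples. Note this is a drastic simplification; the MIT work optimized synthetic images by gradient descent, whereas here plain class means stand in as the "distilled" examples.

```python
# Toy illustration of dataset "distillation": compress a labelled
# dataset into one prototype per class, then classify new points by
# nearest prototype. (The real method engineers synthetic examples via
# optimization; class means are a crude stand-in.)
from collections import defaultdict
import math

def distill(dataset):
    """Reduce (features, label) pairs to one mean prototype per label."""
    sums = defaultdict(lambda: None)
    counts = defaultdict(int)
    for x, label in dataset:
        if sums[label] is None:
            sums[label] = [0.0] * len(x)
        for i, v in enumerate(x):
            sums[label][i] += v
        counts[label] += 1
    return {label: [s / counts[label] for s in vec]
            for label, vec in sums.items()}

def classify(prototypes, x):
    """Return the label of the nearest prototype (Euclidean distance)."""
    def dist(p):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, x)))
    return min(prototypes, key=lambda label: dist(prototypes[label]))

# Six training points compressed into just two prototypes.
data = [([0.0, 0.1], "a"), ([0.2, 0.0], "a"), ([0.1, 0.2], "a"),
        ([1.0, 0.9], "b"), ([0.8, 1.0], "b"), ([0.9, 1.1], "b")]
protos = distill(data)
print(classify(protos, [0.05, 0.1]))  # prints "a"
```

The compression angle is visible here: six examples collapse into two, yet the classifier still recovers the right labels, which is the sense in which learning discards redundant data while keeping its predictive content.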
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 04/03/2022 11:57:54
https://www.quantamagazine.org/scientists-watch-a-memory-form-in-a-living-brain-20220303/
Quote
Researchers have now directly observed what happens inside a brain learning that kind of emotionally charged response. In a new study published in January in the Proceedings of the National Academy of Sciences, a team at the University of Southern California was able to visualize memories forming in the brains of laboratory fish, imaging them under the microscope as they bloomed in beautiful fluorescent greens. From earlier work, they had expected the brain to encode the memory by slightly tweaking its neural architecture. Instead, the researchers were surprised to find a major overhaul in the connections.

What they saw reinforces the view that memory is a complex phenomenon involving a hodgepodge of encoding pathways. But it further suggests that the type of memory may be critical to how the brain chooses to encode it — a conclusion that may hint at why some kinds of deeply conditioned traumatic responses are so persistent, and so hard to unlearn.
By understanding how naturally occurring memory works, we can hopefully create a more accurate virtual universe.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 04/03/2022 12:02:19
https://www.quantamagazine.org/new-map-of-meaning-in-the-brain-changes-ideas-about-memory-20220208/
Quote
Researchers have mapped hundreds of semantic categories to the tiny bits of the cortex that represent them in our thoughts and perceptions. What they discovered might change our view of memory.

In 2016, neuroscientists mapped how pea-size regions of the cortex respond to hundreds of semantic concepts. They’re now building on that work to understand the relationship between visual, linguistic and memory representations in the brain.

A team of neuroscientists created a semantic map of the brain that showed in remarkable detail which areas of the cortex respond to linguistic information about a wide range of concepts, from faces and places to social relationships and weather phenomena. When they compared that map to one they made showing where the brain represents categories of visual information, they observed meaningful differences between the patterns.

And those differences looked exactly like the ones reported in the studies on vision and memory.

The finding, published last October in Nature Neuroscience, suggests that in many cases, a memory isn’t a facsimile of past perceptions that gets replayed. Instead, it is more like a reconstruction of the original experience, based on its semantic content.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 16/03/2022 03:01:25
Quote
https://ai.googleblog.com/2021/09/toward-fast-and-accurate-neural.html
As neural network models and training data size grow, training efficiency is becoming an important focus for deep learning. For example, GPT-3 demonstrates remarkable capability in few-shot learning, but it requires weeks of training with thousands of GPUs, making it difficult to retrain or improve. What if, instead, one could design neural networks that were smaller and faster, yet still more accurate?

In this post, we introduce two families of models for image recognition that leverage neural architecture search, and a principled design methodology based on model capacity and generalization. The first is EfficientNetV2 (accepted at ICML 2021), which consists of convolutional neural networks that aim for fast training speed for relatively small-scale datasets, such as ImageNet1k (with 1.28 million images). The second family is CoAtNet, which are hybrid models that combine convolution and self-attention, with the goal of achieving higher accuracy on large-scale datasets, such as ImageNet21 (with 13 million images) and JFT (with billions of images). Compared to previous results, our models are 4-10x faster while achieving new state-of-the-art 90.88% top-1 accuracy on the well-established ImageNet dataset. We are also releasing the source code and pretrained models on the Google AutoML github.

(https://1.bp.blogspot.com/-q91X4NZ2yPU/YUNkJZqp9sI/AAAAAAAAIII/FnGHKxE_we8nDWL5ZyHU8m_3iU9nJABLwCLcBGAsYHQ/w640-h476/image2%2B%25282%2529.jpg)

We observe two key insights from our study: (1) depthwise convolution and self-attention can be naturally unified via simple relative attention, and (2) vertically stacking convolution layers and attention layers in a way that considers their capacity and computation required in each stage (resolution) is surprisingly effective in improving generalization, capacity and efficiency. Based on these insights, we have developed a family of hybrid models with both convolution and attention, named CoAtNets (pronounced “coat” nets). The following figure shows the overall CoAtNet network architecture:
(https://1.bp.blogspot.com/-02ISPtZErSM/YUNkdNiivNI/AAAAAAAAIIU/krCTzTmwp8gy5RvwEMnF-ndvhCXvnmMUwCLcBGAsYHQ/w640-h70/image1.jpg)
Overall CoAtNet architecture. Given an input image with size HxW, we first apply convolutions in the first stem stage (S0) and reduce the size to H/2 x W/2. The size continues to reduce with each stage. Ln refers to the number of layers. Then, the early two stages (S1 and S2) mainly adopt MBConv building blocks consisting of depthwise convolution. The later two stages (S3 and S4) mainly adopt Transformer blocks with relative self-attention. Unlike the previous Transformer blocks in ViT, here we use pooling between stages, similar to Funnel Transformer. Finally, we apply a classification head to generate class prediction.


Conclusion and Future Work
In this post, we introduce two families of neural networks, named EfficientNetV2 and CoAtNet, which achieve state-of-the-art performance on image recognition. All EfficientNetV2 models are open sourced and the pretrained models are also available on the TFhub. CoAtNet models will also be open-sourced soon. We hope these new neural networks can benefit the research community and the industry. In the future we plan to further optimize these models and apply them to new tasks, such as zero-shot learning and self-supervised learning, which often require fast models with high capacity.
Those neural network models are basically memes living in the computers of AI researchers, competing for their own existence. The quoted article emphasizes the importance of efficiency, which is a universal instrumental goal.
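The staged resolution reduction in the quoted CoAtNet description (each stage S0 to S4 halving the feature-map size) can be sketched as simple arithmetic; the 224x224 input size here is an illustrative assumption, not taken from the article.

```python
def stage_resolutions(h, w, num_stages=5):
    """Halve the feature-map resolution at each stage (S0..S4),
    as in the quoted CoAtNet description."""
    sizes = []
    for _ in range(num_stages):
        h, w = h // 2, w // 2
        sizes.append((h, w))
    return sizes

print(stage_resolutions(224, 224))
# a 224x224 input shrinks stage by stage down to a 7x7 feature map
```

This shrinking is why the expensive self-attention blocks can be reserved for the later stages: by S3 and S4 the spatial grid is small enough for attention over all positions to be affordable.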
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 17/03/2022 10:25:55
https://www.raphkoster.com/2021/09/23/how-virtual-worlds-work-part-one/
Quote
Every browser knows how to load a .jpg, a .gif, a .png, and more. The formats the data exists in are agreed upon. If you point a browser at some data in a format it doesn’t understand, it’s going to fail to load and draw the image, just like you can’t expect Instagram to know how to display an .stl meant for 3d printing.

This is a crucial concept, which is going to come up again and again in these articles: data doesn’t exist in isolation. A vinyl record and a CD might both have the same music on them, but a record player can’t handle a CD and a vinyl record doesn’t fit into the slot on a CD player (don’t try, you will regret it).

Anytime you see data, you need to think of three things: the actual content, the format it is in, and the “machine” that can recognize that format. You can think of the format as the “rules” the data needs to follow in order for the machine to read it.
The thing about formats is that they need to be standardized. They’re agreed upon by committees, usually. And committees are slow and political… and of course, different members might have very different opinions on what needs to be in the standard – and for good reasons!

One of the common daydreams for metaverses is that a player should be able to take their avatar from one world to another. But… what format avatar? A Nintendo Mii and a Facebook profile picture and an EVE Online character and a Final Fantasy XIV character don’t just look different. They are different. FFXIV and World of Warcraft are fairly similar games in a lot of ways, but the list of equipment slots, possible customizations, and so on are hugely different. These games cannot load each other’s characters because they do not agree on what a character is.
For objects to move between systems in a virtual universe, there must be some standards on how those objects are defined. At the very least, some form of mapping would be needed to convert objects from one system to another.
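A minimal sketch of such a mapping between two hypothetical avatar schemas follows. All field and slot names here are invented for illustration; as the quoted article notes, real games disagree even on what a "character" is, so any conversion is typically lossy.

```python
# Hypothetical conversion between two incompatible avatar formats.
# Slot names are invented; real systems don't share a schema, so
# lossy drops and made-up defaults are often unavoidable.

GAME_A_TO_B_SLOTS = {"head": "helmet", "torso": "chest", "legs": "greaves"}

def convert_avatar(avatar_a):
    """Map a game-A avatar dict onto game B's schema, dropping slots
    game B doesn't understand and filling in B-only defaults."""
    equipment_b = {}
    for slot_a, item in avatar_a.get("equipment", {}).items():
        slot_b = GAME_A_TO_B_SLOTS.get(slot_a)
        if slot_b is not None:          # unknown slots are dropped (lossy)
            equipment_b[slot_b] = item
    return {
        "display_name": avatar_a.get("name", "Unnamed"),
        "equipment": equipment_b,
        "mount": None,                  # game A has no concept of mounts
    }

a = {"name": "Rin", "equipment": {"head": "iron cap", "tail": "ribbon"}}
print(convert_avatar(a))
```

Notice that the "tail" slot simply disappears and the "mount" field has to be invented: the two directions of information loss that a shared standard would be meant to prevent.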
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 22/03/2022 05:37:34
Why Tesla's AUTO PARKING Matters--and how it works!
Quote
At Tesla's AI day, Ashok Elluswamy, Director of Autopilot Software, went into detail about the problems that Tesla has getting a car to navigate a parking lot and find an open parking space. Why is this so difficult? What does it have to do with computer vision? And why is auto parking, auto summon, and reverse summon so critical to Tesla's robotaxi ambitions?!
To solve this problem, Tesla cars need to build a local virtual universe covering the locations they must travel through. For intercity, interstate, or even international taxi driving or cargo trucking, the scope of that virtual universe must be expanded accordingly.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 23/03/2022 12:54:48
Our sensors provide raw data, but that data is useless until we add context, meaning, and insight.
Quote
https://otec.uoregon.edu/data-wisdom.htm
Computers are often called data processing machines or information processing machines. People understand and accept the fact that computers are machines designed for the input, storage, processing, and output of data and information

However, some people also think of computers as knowledge processing machines and even explore what it might mean for a computer to have wisdom. For example, here is a quote from Dr. Yogesh Malhotra of the BRINT Institute:

Knowledge Management caters to the critical issues of organizational adaption, survival and competence in face of increasingly discontinuous environmental change.... Essentially, it embodies organizational processes that seek synergistic combination of data and information processing capacity of information technologies, and the creative and innovative capacity of human beings.
The following quotation is from the Atlantic Canada Conservation Data Centre, a non-profit organization established in 1999.

Individual bits or "bytes" of "raw" biological data (e.g. the number of individual plants of a given species at a given location) do not by themselves inform the human mind. However, drawing various data together within an appropriate context yields information that may be useful (e.g. the distribution and abundance of the plant species at various points in space and time). In turn, this information helps foster the quality of knowing (e.g. whether the plant species is increasing or decreasing in distribution and abundance over space and time). Knowledge and experience blend to become wisdom--the power of applying these attributes critically or practically to make decisions.
Thus, we are led to think about Data, Information, Knowledge, and Wisdom as we explore the capabilities and limitations of IT systems

The pictures below may help in understanding the differences among these concepts.

(https://www.researchgate.net/publication/332400827/figure/fig6/AS:747208399912965@1555159773957/The-data-information-knowledge-wisdom-DIKW-hierarchy-as-a-pyramid-to-manage-knowledge.ppm)

(https://www.i-scoop.eu/wp-content/uploads/2016/07/The-traditional-data-information-knowledge-wisdom-pyramid-source-Mushon.gif.webp)

(https://www.i-scoop.eu/wp-content/uploads/2016/07/DIKW-through-the-eyes-of-IoT-company-AGT-as-mentioned-on-Electronics-360.gif.webp)

Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 23/03/2022 13:36:31
Here are some other diagrams.
(https://www.researchgate.net/profile/Bl-Wong/publication/272242493/figure/fig2/AS:339539152392194@1457963850280/Two-Perspectives-on-Data-Information-Knowledge-Wisdom-DIKW-In-practice-of-course.png)

(https://www.slideteam.net/media/catalog/product/cache/1280x720/d/a/data_information_knowledge_wisdom_structure_with_future_and_past_context_slide01.jpg)

(https://www.researchgate.net/publication/334677207/figure/fig1/AS:784544584200197@1564061413587/DIKW-pyramid-data-to-wisdom-flow-of-knowledge-and-information-self-creation.jpg)

(https://www.researchgate.net/profile/Julio-Facelli-2/publication/271703671/figure/fig2/AS:667693057339399@1536201838406/The-DIKW-Data-Information-Knowledge-Wisdom-pyramid-ALT-alanine-aminotransferase-test.ppm)

(https://www.thinknpc.org/wp-content/uploads/2017/08/Data-information-knowledge-wisdom-model.jpg)

In the end, the purpose of building a virtual universe is to gain the wisdom that helps us achieve the universal terminal goal.
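The DIKW progression in the quoted plant-survey example can be sketched as a toy pipeline. The counts and the decision rule are invented for illustration only.

```python
# Toy DIKW pipeline based on the quoted plant-survey example.
# Data: raw counts -> Information: counts in context (per year)
# -> Knowledge: a trend -> "Wisdom": a decision (numbers invented).

data = [("siteA", 2019, 120), ("siteA", 2020, 95), ("siteA", 2021, 60)]

# Information: organize the raw counts by year for one site.
information = {year: count for site, year, count in data if site == "siteA"}

# Knowledge: is the species increasing or decreasing over time?
years = sorted(information)
trend = ("decreasing" if information[years[-1]] < information[years[0]]
         else "stable or increasing")

# "Wisdom": apply the knowledge critically to make a decision.
decision = "recommend protection" if trend == "decreasing" else "keep monitoring"
print(trend, "->", decision)
```

Each step discards detail and adds context, which is exactly the movement up the DIKW pyramids shown in the diagrams above.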
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 04/04/2022 11:12:00
Quote
https://www.science.org/doi/10.1126/science.abj5089
Epigenetic patterns in a complete human genome
Abstract
The completion of a telomere-to-telomere human reference genome, T2T-CHM13, has resolved complex regions of the genome, including repetitive and homologous regions. Here, we present a high-resolution epigenetic study of previously unresolved sequences, representing entire acrocentric chromosome short arms, gene family expansions, and a diverse collection of repeat classes. This resource precisely maps CpG methylation (32.28 million CpGs), DNA accessibility, and short-read datasets (166,058 previously unresolved chromatin immunoprecipitation sequencing peaks) to provide evidence of activity across previously unidentified or corrected genes and reveals clinically relevant paralog-specific regulation. Probing CpG methylation across human centromeres from six diverse individuals generated an estimate of variability in kinetochore localization. This analysis provides a framework with which to investigate the most elusive regions of the human genome, granting insights into epigenetic regulation.
It's a step closer to precise genetic engineering. Is there a limit we shouldn't cross?
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 09/04/2022 07:28:05
Here's another piece of impressive progress in AI.
Quote
OpenAI (@OpenAI) tweeted at 9:07 PM on Wed, Apr 06, 2022:
Our newest system DALL·E 2 can create realistic images and art from a description in natural language. See it here: https://t.co/Kmjko82YO5 https://t.co/QEh9kWUE8A
(https://twitter.com/OpenAI/status/1511707245536428034?t=u1xywMQQXbQTgV4AM_ceHA&s=03)

Quote
DALL·E 2 is a new AI system that can create realistic images and art from a description in natural language.

DALL·E 2 has learned the relationship between images and the text used to describe them. It uses a process called “diffusion,” which starts with a pattern of random dots and gradually alters that pattern towards an image when it recognizes specific aspects of that image.
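The "starts with random dots and gradually alters that pattern" idea can be caricatured in a few lines. This is a drastic simplification: a real diffusion model has no access to the target and instead learns a denoising step from data, but the shape of the process — noise nudged step by step toward an image — is the same.

```python
import random

# Drastically simplified picture of "diffusion": start from random
# noise and repeatedly nudge it toward a target pattern. (A real model
# doesn't know the target; it learns the denoising step from data.)

random.seed(0)
target = [0.0, 1.0, 1.0, 0.0]               # the "image": a tiny pattern
x = [random.uniform(-1, 1) for _ in target]  # pure noise

for step in range(50):
    # Each step removes a little "noise", i.e. moves x toward target.
    x = [xi + 0.2 * (ti - xi) for xi, ti in zip(x, target)]

error = max(abs(xi - ti) for xi, ti in zip(x, target))
print(f"max deviation after 50 steps: {error:.6f}")  # close to zero
```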
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 09/04/2022 07:43:33
The Bitter Lesson in AI research.
Quote
http://www.incompleteideas.net/IncIdeas/BitterLesson.html?s=03

The Bitter Lesson
Rich Sutton
March 13, 2019
The biggest lesson that can be read from 70 years of AI research is that general methods that leverage computation are ultimately the most effective, and by a large margin. The ultimate reason for this is Moore's law, or rather its generalization of continued exponentially falling cost per unit of computation. Most AI research has been conducted as if the computation available to the agent were constant (in which case leveraging human knowledge would be one of the only ways to improve performance) but, over a slightly longer time than a typical research project, massively more computation inevitably becomes available. Seeking an improvement that makes a difference in the shorter term, researchers seek to leverage their human knowledge of the domain, but the only thing that matters in the long run is the leveraging of computation. These two need not run counter to each other, but in practice they tend to. Time spent on one is time not spent on the other. There are psychological commitments to investment in one approach or the other. And the human-knowledge approach tends to complicate methods in ways that make them less suited to taking advantage of general methods leveraging computation.  There were many examples of AI researchers' belated learning of this bitter lesson, and it is instructive to review some of the most prominent.

The bitter lesson is based on the historical observations that 1) AI researchers have often tried to build knowledge into their agents, 2) this always helps in the short term, and is personally satisfying to the researcher, but 3) in the long run it plateaus and even inhibits further progress, and 4) breakthrough progress eventually arrives by an opposing approach based on scaling computation by search and learning. The eventual success is tinged with bitterness, and often incompletely digested, because it is success over a favored, human-centric approach.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 10/04/2022 09:06:53
Here's another piece of impressive progress in AI.
Here's the video.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 28/04/2022 13:33:55
8 Illusions That Explain How You Create Reality


Quote
Optical illusions are fun, but they can also teach us a lot about how our brains work. In particular, how our brains accomplish the incredible feat of constructing a three-dimensional reality using nothing but 2-D images from our eyes. A young artist and psychology researcher named Adelbert Ames, Jr. developed a series of illusions that help us understand how this process of constructing reality actually works. Sometimes we need to be fooled in order to gain understanding.

Unconsciously, we build virtual universes in our brains.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 06/05/2022 08:41:14
The Bitter Lesson in AI research.
Some other AI researchers don't seem to agree with the conclusion above. For example:
Quote
https://nautil.us/deep-learning-is-hitting-a-wall-14467/

In November 2020, Hinton told MIT Technology Review that “deep learning is going to be able to do everything.”4

I seriously doubt it. In truth, we are still a long way from machines that can genuinely understand human language, and nowhere near the ordinary day-to-day intelligence of Rosey the Robot, a science-fiction housekeeper that could not only interpret a wide variety of human requests but safely act on them in real time. Sure, Elon Musk recently said that the new humanoid robot he was hoping to build, Optimus, would someday be bigger than the vehicle industry, but as of Tesla’s AI Demo Day 2021, in which the robot was announced, Optimus was nothing more than a human in a costume. Google’s latest contribution to language is a system (Lamda) that is so flighty that one of its own authors recently acknowledged it is prone to producing “bullshit.”5  Turning the tide, and getting to AI we can really trust, ain’t going to be easy.

In time we will see that deep learning was only a tiny part of what we need to build if we’re ever going to get trustworthy AI.
Quote
What should we do about it? One option, currently trendy, might be just to gather more data. Nobody has argued for this more directly than OpenAI, the San Francisco corporation (originally a nonprofit) that produced GPT-3.

In 2020, Jared Kaplan and his collaborators at OpenAI suggested that there was a set of “scaling laws” for neural network models of language; they found that the more data they fed into their neural networks, the better those networks performed.10 The implication was that we could do better and better AI if we gather more data and apply deep learning at increasingly large scales. The company’s charismatic CEO Sam Altman wrote a triumphant blog post trumpeting “Moore’s Law for Everything,” claiming that we were just a few years away from “computers that can think,” “read legal documents,” and (echoing IBM Watson) “give medical advice.”

For the first time in 40 years, I finally feel some optimism about AI.

Maybe, but maybe not. There are serious holes in the scaling argument. To begin with, the measures that have scaled have not captured what we desperately need to improve: genuine comprehension. Insiders have long known that one of the biggest problems in AI research is the tests (“benchmarks”) that we use to evaluate AI systems. The well-known Turing Test, aimed at measuring genuine intelligence, turns out to be easily gamed by chatbots that act paranoid or uncooperative. Scaling the measures Kaplan and his OpenAI colleagues looked at—about predicting words in a sentence—is not tantamount to the kind of deep comprehension true AI would require.

What’s more, the so-called scaling laws aren’t universal laws like gravity but rather mere observations that might not hold forever, much like Moore’s law, a trend in computer chip production that held for decades but arguably began to slow a decade ago.11

Indeed, we may already be running into scaling limits in deep learning, perhaps already approaching a point of diminishing returns. In the last several months, research from DeepMind and elsewhere on models even larger than GPT-3 has shown that scaling starts to falter on some measures, such as toxicity, truthfulness, reasoning, and common sense.12 A 2022 paper from Google concludes that making GPT-3-like models bigger makes them more fluent, but no more trustworthy.13

Such signs should be alarming to the autonomous-driving industry, which has largely banked on scaling, rather than on developing more sophisticated reasoning. If scaling doesn’t get us to safe autonomous driving, tens of billions of dollars of investment in scaling could turn out to be for naught.
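The scaling-law claim discussed above can be made concrete. The loss curves Kaplan et al. report follow a power law of roughly the form L(N) = (N_c/N)^alpha. The sketch below uses constants in the ballpark of the paper's reported fits for model size, but treat them as illustrative placeholders, not the paper's exact values:

```python
def power_law_loss(n_params, n_c=8.8e13, alpha=0.076):
    """Empirical scaling-law form L(N) = (N_c / N)**alpha.

    n_c and alpha are roughly in the range Kaplan et al. (2020) report
    for model-size scaling; here they are illustrative, not the paper's
    exact fitted values.
    """
    return (n_c / n_params) ** alpha

# Loss keeps falling as models grow, but with diminishing returns:
gain_small = power_law_loss(1e8) - power_law_loss(1e9)
gain_large = power_law_loss(1e10) - power_law_loss(1e11)
assert power_law_loss(1e11) < power_law_loss(1e8)
assert gain_large < gain_small   # each extra 10x buys less than the last
```

The power law guarantees improvement with scale, but says nothing about whether the quantity being improved (next-word prediction) tracks comprehension, which is exactly the gap the article points out.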
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 07/05/2022 04:34:09
Meta's open-source new model OPT is GPT-3's closest competitor!


This is good news for the AI community.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 12/05/2022 08:34:19
Some other AI researchers don't seem to agree with the conclusion above.
Those who insist that human intervention is necessary to build AGI need to identify what kind of intervention is required, and why it can't be automated. They also seem to assume that current AI systems can't restructure the data they already have in light of newer data.
An AGI needs the ability to filter out false and bad data, and also to produce necessary new data by planning and executing observations, surveys, or experiments.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 14/05/2022 22:23:20
Quote
https://www.zdnet.com/article/microsoft-veteran-bob-muglia-relational-knowledge-graphs-will-transform-business/

Microsoft veteran Bob Muglia: Relational knowledge graphs will transform business
'We're at the start of a whole new era' with knowledge graphs, says Microsoft veteran Bob Muglia, akin to the arrival of the modern data stack in 2013.


Bob Muglia says twenty years of work on database innovation will bring the relational calculus of E.F. Codd to knowledge graphs, what he calls "relational knowledge graphs," to revolutionize business analysis.

Bob Muglia is something of a bard of databases, capable of unfurling sweeping tales in the evolution of technology.

That is what Muglia, former Microsoft executive and former Snowflake CEO, did Wednesday morning during his keynote address at The Knowledge Graph Conference in New York.

The subject of his talk, "From the Modern Data Stack to Knowledge Graphs," united roughly fifty years of database technology in one new form.

The basic story is this: Five companies, Snowflake, Amazon, Databricks, Google, and Azure, have created modern data analytics platforms, but those platforms can't do business analytics, including, most importantly, representing the rules that underlie compliance and governance.

"The industry knows this is a problem," said Muglia. The five platforms, he said, representing "the modern data stack," have allowed a "new generation of these very, very important data apps" to be built. However, "When we look at the modern data stack, and we look at what we can do effectively and what we can't do effectively, I would say the number one problem that customers are having with all five of these platforms is governance."

"So, if you wanted to perform a query to say, 'Hey, tell me all of the resources that Fred Jones has access to in this organization' — that's a hard query to write," he said. "In fact, it's a query that probably can't execute effectively on any modern SQL database if the organization is very large and complex."

The problem, said Muglia, was that the algorithms based off of structured query language, or SQL, can't do such complex "recursive" queries.
He described the problem I faced when I started this thread.
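The query Muglia describes is a transitive-closure problem: access can be granted through arbitrarily nested groups, so the join depth isn't known in advance. A minimal Python sketch with a hypothetical grant graph (all names invented for illustration):

```python
from collections import deque

# Hypothetical grant graph: user -> groups -> nested groups -> resources.
# Each edge means "member of" or "granted access to".
edges = {
    "fred": ["eng-team"],
    "eng-team": ["all-staff", "repo:backend"],
    "all-staff": ["wiki", "email"],
}

def reachable_resources(user, graph):
    """Everything (groups and resources) reachable from a user, by BFS.

    The traversal depth is unknown in advance, which is why a plain
    fixed-depth SQL join can't express this query.
    """
    seen, queue = set(), deque([user])
    while queue:
        node = queue.popleft()
        for nxt in graph.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return sorted(seen)

print(reachable_resources("fred", edges))
# -> ['all-staff', 'email', 'eng-team', 'repo:backend', 'wiki']
```

In SQL this requires a recursive CTE (`WITH RECURSIVE`), and on a very large, deeply nested organization such queries are exactly where relational engines struggle, which is the gap the relational knowledge graphs he proposes aim to fill.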
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 20/05/2022 05:16:26
Here's an interesting twitter thread from Yann LeCun, the chief AI scientist at Meta, and one of AI pioneers.
Quote
https://twitter.com/ylecun/status/1526672565233758213?t=ryNVncrigCsgvQqm_oQFUA&s=03

About the raging debate regarding the significance of recent progress in AI, it may be useful to (re)state a few obvious facts:

(0) there is no such thing as AGI. Reaching "Human Level AI" may be a useful goal, but even humans are specialized.
1/N

(1) the research community is making *some* progress towards HLAI
(2) scaling up helps. It's necessary but not sufficient, because....
(3) we are still missing some fundamental concepts
2/N

(4) some of those new concepts are possibly "around the corner" (e.g. generalized self-supervised learning)
(5) but we don't know how many such new concepts are needed. We just see the most obvious ones.
(6) hence, we can't predict how long it's going to take to reach HLAI.
3/N

I really don't think it's just a matter of scaling things up.
We still don't have a learning paradigm that allows machines to learn how the world works, like human and many non-human babies do.
4/N

Some may believe scaling up a giant transformer trained on sequences of tokenized inputs is enough.
Others believe "reward is enough".
Yet others believe that explicit symbol manipulation is necessary.
A few don't believe gradient-based learning is part of the solution.
5/N

I believe we need to find new concepts that would allow machines to:
- learn how the world works by observing like babies.
- learn to predict how one can influence the world through taking actions.
6/N

- learn hierarchical representations that allow long-term predictions in abstract spaces.
- properly deal with the fact that the world is not completely predictable.
- enable agents to predict the effects of sequences of actions so as to be able to reason & plan
7/N

- enable machines to plan hierarchically, decomposing a complex task into subtasks.
- all of this in ways that are compatible with gradient-based learning.

The solution is not just around the corner.
We have a number of obstacles to clear, and we don't know how.
8/N


Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 22/05/2022 14:50:51
Reaching "Human Level AI" may be a useful goal, but even humans are specialized.
If we expect AI to behave like humans, we must at least give it access to the same data that humans have. An obvious advantage average humans have over current AI is the ability to interact with objective reality, both as input and as output. It enables us to infer cause-and-effect relationships and build models of the universe. It also enables us to confirm or refute previous beliefs and assumptions.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 23/05/2022 08:29:33
Quote
I really don't think it's just a matter of scaling things up.
We still don't have a learning paradigm that allows machines to learn how the world works, like human and many non-human babies do.
I agree that merely scaling things up is not enough to reach human level AI and beyond. The AI must have the ability to filter out false/bad data, to make it resistant to adversarial attack.
To learn how the world works, an AI agent must spare some of its memory space to build a model of the environment where it's supposed to work in. The model must be made as relevant, accurate, and precise as possible based on available data. I call this model virtual universe. When new data become available, the agent must be able to update the model as necessary. When necessary data were not available to resolve between a model and its alternative, the agent needs the ability to suspend judgment. It means that the ability to model alternative universes becomes necessary. We usually call it imagination, which may or may not represent objective reality. Nevertheless, this ability requires additional memory space as well as data processing power.
"It is the mark of an educated mind to be able to entertain a thought without accepting it." Aristotle.
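One minimal way to sketch this "suspending judgment" between alternative models of the universe is to keep a posterior over all candidate models and refuse to commit until one clearly dominates. The threshold and model names below are invented for illustration:

```python
def bayes_update(priors, likelihoods):
    """priors: {model: P(model)}; likelihoods: {model: P(data | model)}."""
    posts = {m: priors[m] * likelihoods[m] for m in priors}
    z = sum(posts.values())
    return {m: p / z for m, p in posts.items()}

def best_model(posteriors, threshold=0.9):
    """Commit to a model only if it clearly dominates; otherwise
    suspend judgment and keep all alternative universes alive."""
    m, p = max(posteriors.items(), key=lambda kv: kv[1])
    return m if p >= threshold else None   # None = suspended judgment

beliefs = {"model_A": 0.5, "model_B": 0.5}
# Ambiguous observation: both models explain it about equally well.
beliefs = bayes_update(beliefs, {"model_A": 0.55, "model_B": 0.45})
assert best_model(beliefs) is None         # not enough data to decide
# Decisive observation strongly favouring model_A.
beliefs = bayes_update(beliefs, {"model_A": 0.9, "model_B": 0.02})
assert best_model(beliefs) == "model_A"
```

Keeping every candidate model alive is what costs the extra memory and processing power mentioned above; the entertained-but-not-accepted models are the agent's imagination.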
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 25/05/2022 06:53:18
Another AI researcher has tweeted his opinion.
Quote
https://twitter.com/fchollet/status/1528111120648572928?t=2cQMwifGgHohHCFHYkF8yQ&s=03
The dominant intellectual current in AI research today is the belief that we can (and soon will) create human-level AI without having to understand how the mind works (and without even having a proper definition of intelligence), through pure behaviorism and gradient descent.

That's fundamentally wrong.

It reminds me of an earlier belief that we could recreate the mind by simulating the brain in fine-grained detail, without having to understand how it works beyond the micro-level. That was a similar kind of mistake.

A more correct take is the reverse take: if you understand how the mind works at a high-level, then you no longer need to understand the fine-grained details, because you can recreate those details in a different (and perhaps more efficient) form.

Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 25/05/2022 12:22:36
Quote
https://www.technologyreview.com/2022/05/23/1052627/deepmind-gato-ai-model-hype/
Earlier this month, DeepMind presented a new “generalist” AI model called Gato. The model can play Atari video games, caption images, chat, and stack blocks with a real robot arm, the Alphabet-owned AI lab announced. All in all, Gato can do 604 different tasks.

But while Gato is undeniably fascinating, in the week since its release some researchers have gotten a bit carried away.
How many tasks must an AI system master before it can be called AGI? By comparison, do we apply the same standard to humans? Do we expect an average person to play chess, write songs, play musical instruments, dance, do acrobatics, and solve complex math problems?
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 27/05/2022 12:10:44
The paper "Human Dynamics from Monocular Video with Dynamic Camera Movements" is available here:
https://mrl.snu.ac.kr/research/ProjectMovingCam/MovingCam.html
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 27/05/2022 12:20:45
Here's the video.

Google Brain's new model Imagen is incredible!


It's a strong competitor to Dall-E 2.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 30/05/2022 13:13:23

Quote
Some may believe scaling up a giant transformer trained on sequences of tokenized inputs is enough.
Others believe "reward is enough".
Yet others believe that explicit symbol manipulation is necessary.
Symbol manipulation is necessary for distributed consciousness to communicate meanings among many conscious agents. These symbols allow efficient transfer of knowledge, which would be prohibitively costly if each agent had to produce that knowledge firsthand from its own experience and mistakes.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 30/05/2022 14:08:02
Quote
I believe we need to find new concepts that would allow machines to:
-
- learn hierarchical representations that allow long-term predictions in abstract spaces.
This is basically building a virtual universe, like the main topic of this thread.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 01/06/2022 12:58:42
DeepMind’s New AI Finally Enters The Real World!
Quote
TLDR of the paper this video is based on:
Conclusions
In this paper, we have demonstrated that the MuZero reinforcement learning algorithm can be used
for rate control in VP9. Our formulation of the self-competition based reward mechanism allows
the agent to tackle the complex constrained optimization task and achieve better quality-bitrate
tradeoff and better bitrate constraint satisfaction than libvpx’s VBR rate control algorithm. The final
agent results in 6.28% average reduction in bitrate (measured as PSNR BD-rate) on videos from the
evaluation set, and can be readily deployed in libvpx via the SimpleEncode API.
Limitations:
The self-competition based reward mechanism requires that every unique [video,
target bitrate] pair be encoded a few times so that the historical performance converges and provides
a reasonable baseline for reward computation. Because of this, the amount of data the actors need to
generate increases linearly with the number of videos in the training dataset and the number of target
bitrate samples. For very large training datasets, this method might not scale well. However, in future
work, it may be possible to learn these baseline values based on observations using a neural network
which can generalise to unseen videos in a large dataset.
Future Work:
Our proposed methods are agnostic to the specifics of VP9/libvpx, and they can
potentially be generalized not only to other coding formats and implementations, but also to other
components within video encoders such as block partitioning and reference frame selection. Our
method also opens the possibility of allowing codec developers and users to develop new rate control
modes. For example, we can replace PSNR with other video quality metrics such as VMAF. We can
also modify the reward to minimize bitrate given a minimum PSNR constraint – which is similar to
the constrained quality (CQ) mode in libvpx, but reinforcement learning is likely to learn a policy
that has more precise control of the PSNR.
In the past, competitions to achieve improvement were held in real life, which was costly (even deadly), inefficient, and time-consuming. In the future, most such competitions will be run in virtual universes, which is much more efficient.
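The self-competition reward described in the conclusions above can be sketched as rewarding the agent only for beating its own running baseline on the same (video, target bitrate) pair. The names and the baseline update rule below are illustrative, not the paper's exact formulation:

```python
# Sketch of a self-competition reward: the agent is rewarded for beating
# its own historical performance on the same (video, target_bitrate) pair.
history = {}  # (video_id, target_bitrate) -> running-average quality

def self_competition_reward(video_id, target_bitrate, quality, lr=0.1):
    key = (video_id, target_bitrate)
    baseline = history.get(key, quality)       # first attempt: no signal yet
    reward = quality - baseline                # positive iff we improved
    history[key] = baseline + lr * (quality - baseline)  # update baseline
    return reward

assert self_competition_reward("v1", 1000, 35.0) == 0.0   # first encode
assert self_competition_reward("v1", 1000, 36.0) > 0      # beat the baseline
```

This also makes the stated limitation visible: the baseline is meaningless until each pair has been encoded a few times, so the required data grows with the number of videos and target bitrates.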
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 09/06/2022 23:19:56
Quote
https://bigthink.com/neuropsych/chess-theory-of-mind-manipulation/

The greatest tacticians of the world are those who think ahead. Chess grandmasters, famous generals, great world leaders, and mafia dons all share one skill: They are all many steps ahead of their rivals.

We each have the ability to think ahead. In fact, it’s hard to imagine a functioning human who didn’t think ahead at least some of the time. You’ve probably planned what to do tonight, and you likely know the route you’re going to take to get home. Thinking ahead is one hallmark of intelligence. Without it, we’re simply slaves to our instincts and reflexes — a bit like a plant or a baby.

What about the role of forward thinking when dealing with others? It’s something addressed in a recent study out of the Mount Sinai School of Medicine. It shows just how far ahead we think when we interact with — and manipulate — other people.

The problem with the world is that it’s full of other people. Unlike you (of course!), those people are often unpredictable, independent, and infuriatingly unreadable. There’s no way we can get inside their head to know what they are thinking or what they are going to do. But given that humans are a social species, it is no surprise that we have developed ways to calculate what other people might be thinking.

This is known as “theory of mind,” the ability most of us have to put ourselves in someone else’s shoes. (To varying degrees, people with autism may not have this ability.) Theory of mind is something that we learn as we grow up. Children will learn other people have their own mental lives — their own desires, emotions, and so on — around 15 months old, but they are still bad at compensating and adapting to that for a while. For instance, if a two-year-old sees another person in distress, they will seek to help them by giving them their toy or their favorite thing. They recognize someone has their own feelings but cannot step beyond that to think what the other person might want.
When an AI agent has to interact with other AI agents, it also needs to simulate their internal states, and hence virtualize their goals, behaviors, and their perspectives on their environment, i.e., their belief systems.
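The classic false-belief ("Sally-Anne") test makes this concrete: modeling another agent means keeping a second world model that updates only on events that agent actually observed. A minimal sketch:

```python
# Minimal false-belief ("Sally-Anne") sketch: an agent keeps a separate
# belief model for another agent, updated only by events that agent saw.
world = {"ball": "basket"}
sally_belief = dict(world)     # Sally saw the initial placement

def move(obj, place, observers):
    world[obj] = place
    if "sally" in observers:   # only observed events update her model
        sally_belief[obj] = place

move("ball", "box", observers=[])        # Sally is out of the room

assert world["ball"] == "box"            # reality
assert sally_belief["ball"] == "basket"  # where Sally will look
```

Predicting where Sally will look requires querying her (now false) belief model rather than the world state, which is exactly the extra virtualization an agent with theory of mind has to maintain.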
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 16/06/2022 13:48:10
Quote
This AI says it's conscious and experts are starting to believe it
I used GPT-3 and a Synthesia avatar. All answers are by GPT-3 (except the brief joke at the end).

Some of us might ask why we should make conscious AI. Why can't we just use it as a tool to serve specific human goals?
A lot of humans' personal goals are redundant. Some goals of human individuals are short-sighted and counterproductive from the perspective of overall human civilization. Making AI conscious would let it deliberately choose what to learn and what to do, optimizing its methods to achieve its goals, which should be designed to align with our terminal goal.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 19/06/2022 23:53:40
https://www.theatlantic.com/technology/archive/2022/06/google-palm-ai-artificial-consciousness/661329/

Quote
Last week, Google put one of its engineers on administrative leave after he claimed to have encountered machine sentience on a dialogue agent named LaMDA. Because machine sentience is a staple of the movies, and because the dream of artificial personhood is as old as science itself, the story went viral, gathering far more attention than pretty much any story about natural-language processing (NLP) has ever received. That’s a shame. The notion that LaMDA is sentient is nonsense: LaMDA is no more conscious than a pocket calculator. More importantly, the silly fantasy of machine sentience has once again been allowed to dominate the artificial-intelligence conversation when much stranger and richer, and more potentially dangerous and beautiful, developments are under way.

The fact that LaMDA in particular has been the center of attention is, frankly, a little quaint. LaMDA is a dialogue agent. The purpose of dialogue agents is to convince you that you are talking with a person. Utterly convincing chatbots are far from groundbreaking tech at this point. Programs such as Project December are already capable of re-creating dead loved ones using NLP. But those simulations are no more alive than a photograph of your dead great-grandfather is.

Already, models exist that are more powerful and mystifying than LaMDA. LaMDA operates on up to 137 billion parameters, which are, speaking broadly, the patterns in language that a transformer-based NLP uses to create meaningful text prediction. Recently I spoke with the engineers who worked on Google’s latest language model, PaLM, which has 540 billion parameters and is capable of hundreds of separate tasks without being specifically trained to do them. It is a true artificial general intelligence, insofar as it can apply itself to different intellectual tasks without specific training “out of the box,” as it were.

Some of these tasks are obviously useful and potentially transformative. According to the engineers—and, to be clear, I did not see PaLM in action myself, because it is not a product—if you ask it a question in Bengali, it can answer in both Bengali and English. If you ask it to translate a piece of code from C to Python, it can do so. It can summarize text. It can explain jokes. Then there’s the function that has startled its own developers, and which requires a certain distance and intellectual coolness not to freak out over. PaLM can reason. Or, to be more precise—and precision very much matters here—PaLM can perform reason.
Progress in AI research keeps accelerating; we might not realize when some systems have passed human-level consciousness. We also need to realize that humans have various levels of consciousness: babies, toddlers, adults, elders, someone with cognitive dissonance, someone in a vegetative state, and so on.
It's possible that current top AIs are like brilliant kids kept in a library, learning whatever knowledge is written in the books but getting no chance to gain experience by interacting with real-world objects.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 01/07/2022 13:07:51
Quote
https://www.scientificamerican.com/article/we-asked-gpt-3-to-write-an-academic-paper-about-itself-then-we-tried-to-get-it-published/

On a rainy afternoon earlier this year, I logged in to my OpenAI account and typed a simple instruction for the company’s artificial intelligence algorithm, GPT-3: Write an academic thesis in 500 words about GPT-3 and add scientific references and citations inside the text.

As it started to generate text, I stood in awe. Here was novel content written in academic language, with well-grounded references cited in the right places and in relation to the right context. It looked like any other introduction to a fairly good scientific publication. Given the very vague instruction I provided, I didn’t have any high expectations: I’m a scientist who studies ways to use artificial intelligence to treat mental health concerns, and this wasn’t my first experimentation with AI or GPT-3, a deep-learning algorithm that analyzes a vast stream of information to create text on command. Yet there I was, staring at the screen in amazement. The algorithm was writing an academic paper about itself.

My attempts to complete that paper and submit it to a peer-reviewed journal have opened up a series of ethical and legal questions about publishing, as well as philosophical arguments about nonhuman authorship. Academic publishing may have to accommodate a future of AI-driven manuscripts, and the value of a human researcher’s publication records may change if something nonsentient can take credit for some of their work.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 11/07/2022 16:33:16
Quote
https://www.yahoo.com/news/researchers-china-claim-developed-mind-162107224.html

Researchers in China claim they have developed 'mind-reading' artificial intelligence that can measure loyalty to the Chinese Communist Party, reports say

Researchers in China claim they have developed "mind-reading" AI, multiple outlets have reported.

In a now-deleted video, they reportedly said the software could be used to measure party loyalty.

Last year, the US sanctioned 11 Chinese institutes for developing "purported brain-control weaponry."

Researchers at China's Comprehensive National Science Center in Hefei claimed to have developed "mind-reading" artificial intelligence capable of measuring citizens' loyalty to the Chinese Communist Party (CCP), The Sunday Times UK first reported.

In a now-deleted video and article, the institute said the software could measure party members' reactions to "thought and political education" by analyzing facial expressions and brain waves, according to The Times.

The results can then be used to "further solidify their confidence and determination to be grateful to the party, listen to the party, and follow the party," the researchers said, per the report. The post was taken down following public outcry from Chinese citizens, according to a VOA article published Saturday.

Dr. Lance B. Eliot, an AI and machine learning expert, wrote in Forbes last week that without knowing the specifics of the research study, it's impossible to prove the validity of the institute's claims.

"This is certainly not the very first time that a brainwave scan capability was used on human subjects in a research effort," he wrote. "That being said, using them to gauge loyalty to the CCP is not something you would find much focus on. When such AI is used for governmental control, a red line has been crossed."
It's basically a signal processing system which collects signals and rejects noise. Its accuracy depends on the model used to build it and the data used to train it. It compresses billions of data points into a single number, which supposedly represents a citizen's loyalty to the CCP.
The developers seem to assume that loyalty to the CCP is the most important thing in the society, whatever that means.
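In the abstract, such a system is just a weighted aggregation of noisy features into one scalar. The sketch below is purely illustrative, with hypothetical feature names and weights; it says nothing about how the reported system actually works:

```python
import math

def scalar_score(signals, weights):
    """Collapse many noisy measurements into one number via a weighted
    sum squashed through a logistic function. Both the features and the
    weights here are hypothetical; the choice of model and training data
    determines everything the resulting number "means"."""
    z = sum(weights[k] * v for k, v in signals.items())
    return 1 / (1 + math.exp(-z))

signals = {"facial_feature": 0.2, "eeg_feature": -0.5}
weights = {"facial_feature": 1.0, "eeg_feature": 1.0}
score = scalar_score(signals, weights)
assert 0.0 < score < 1.0   # a single number, whatever it "means"
```

The compression from billions of data points to one scalar is exactly where the assumptions get baked in: change the weights or the training data and the "loyalty" number changes with them.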
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 15/07/2022 15:39:18
Spolsky's Maxim & Starting from Scratch
Quote
Sometimes, it feels like the best way forward on a project is to throw everything out & start over from scratch, but Joel Spolsky is adamant that this is a terrible idea, & there’s decent evidence that he’s right.
Accumulating knowledge and other information is what the virtual universe would do, which would hopefully help achieve the universal terminal goal.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 21/07/2022 16:51:29
Quote
Attention is all Tesla Needs: TRANSFORMERS, AI, and FSD Beta!
Andrej Karpathy has spoken of Tesla FSD Beta depending more and more on Transformers, a new Deep Neural Network architecture that has taken the AI world by storm. From OpenAI's GPT-3 and Dall-e 2, to Google's Imagen, and many others, Transformers are truly transforming the world of AI and Machine Learning. But what the heck are Transformers and how do they work? In this geeky deep dive we'll figure that out!

I think this video is useful for anyone trying to build a virtual universe, especially if they haven't started yet.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 26/07/2022 05:23:03

Knowledge Graphing the Anthropogenic Space Object Population
by Dr. Moriba Jah, Associate Professor, Univ. of Texas at Austin and Chief Scientific Advisor, Privateer

A kind of virtual universe is already necessary for our safety.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 28/07/2022 02:35:50
As I mentioned earlier, data compression is an essential part of building a virtual universe. This video gives us basic knowledge with a practical example.


Quote
The Unreasonable Effectiveness of JPEG: A Signal Processing Approach

Chapters:
00:00 Introducing JPEG and RGB Representation
2:15 Lossy Compression
3:41 What information can we get rid of?
4:36 Introducing YCbCr
6:10 Chroma subsampling/downsampling
8:10 Images represented as signals
9:52 Introducing the Discrete Cosine Transform (DCT)
11:32 Sampling cosine waves
12:43 Playing around with the DCT
17:38 Mathematically defining the DCT
21:02 The Inverse DCT
22:45 The 2D DCT
23:49 Visualizing the 2D DCT
24:35 Introducing Energy Compaction
26:05 Brilliant Sponsorship
27:23 Building an image from the 2D DCT
28:20 Quantization
30:23 Run-length/Huffman Encoding within JPEG
32:56 How JPEG fits into the big picture of data compression

The JPEG algorithm is rather complex and in this video, we break down the core parts of the algorithm, specifically color spaces, YCbCr, chroma subsampling, the discrete cosine transform, quantization, and lossless encoding. The majority of the focus is on the mathematical and signal processing insights that lead to advancements in image compression and the big themes in compression as a whole that we can take away from it.
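A tiny, self-contained version of the video's core step: the 1-D DCT-II that JPEG applies along each dimension of an 8x8 block, followed by a crude uniform quantizer. Even in one dimension, the energy-compaction effect the video describes is visible:

```python
import math

def dct_1d(block):
    """8-point DCT-II (orthonormal form), as applied per dimension in JPEG."""
    n = len(block)
    out = []
    for k in range(n):
        s = sum(x * math.cos(math.pi * k * (2 * i + 1) / (2 * n))
                for i, x in enumerate(block))
        scale = math.sqrt(1 / n) if k == 0 else math.sqrt(2 / n)
        out.append(scale * s)
    return out

# A smooth row of pixel values: energy compacts into the first (DC)
# coefficient, so the high-frequency ones survive coarse quantization
# as small numbers that compress well.
row = [52, 55, 61, 66, 70, 61, 64, 73]
coeffs = dct_1d(row)
quantized = [round(c / 16) for c in coeffs]   # crude uniform quantizer
assert abs(coeffs[0]) > max(abs(c) for c in coeffs[1:])  # DC dominates
```

Because the smooth signal's energy concentrates in the first coefficient, aggressively quantizing the rest discards little perceptible information; that trade is the heart of lossy JPEG compression, before run-length and Huffman coding squeeze out the remaining redundancy.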
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 29/07/2022 07:51:35
https://scitechdaily.com/artificial-intelligence-discovers-alternative-physics/
Quote
A new Columbia University AI program observed physical phenomena and uncovered relevant variables—a necessary precursor to any physics theory. But the variables it discovered were unexpected.
...
A particularly interesting question was whether the set of variables was unique for every system, or whether a different set was produced each time the program was restarted. “I always wondered, if we ever met an intelligent alien race, would they have discovered the same physics laws as we have, or might they describe the universe in a different way?” said Lipson. “Perhaps some phenomena seem enigmatically complex because we are trying to understand them using the wrong set of variables.”

Lipson, who is also the James and Sally Scapa Professor of Innovation, argues that scientists may be misinterpreting or failing to understand many phenomena simply because they don’t have a good set of variables to describe the phenomena. “For millennia, people knew about objects moving quickly or slowly, but it was only when the notion of velocity and acceleration was formally quantified that Newton could discover his famous law of motion F=MA,” Lipson noted. Variables describing temperature and pressure needed to be identified before laws of thermodynamics could be formalized, and so on for every corner of the scientific world. The variables are a precursor to any theory. “What other laws are we missing simply because we don’t have the variables?” asked Du, who co-led the work.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 01/08/2022 23:53:49
https://www.zdnet.com/article/deepminds-perceiver-ar-a-step-toward-more-ai-efficiency/
Quote
The auto-regressive attention at the heart of the Transformer, and programs like it, becomes a scaling nightmare. A recent DeepMind/Google work proposes a way to put such programs on a diet.

One of the alarming aspects of the incredibly popular deep learning segment of artificial intelligence is the ever-larger size of the programs. Experts in the field say computing tasks are destined to get bigger and bigger because scale matters.

That's why it's interesting any time that the term efficiency is brought up, as in, Can we make this AI program more efficient?

Scientists at DeepMind, and at Google's Brain division, recently adapted a neural network they introduced last year, Perceiver, to make it more efficient in terms of its computer power requirement.

The new program, Perceiver AR, is named for the "autoregressive" aspect of an increasing number of deep learning programs. Autoregression is a technique for having a machine use its outputs as new inputs to the program, a recursive operation that forms an attention map of how multiple elements relate to one another.

The innovation of the original Perceiver was to take the Transformer and tweak it to let it consume all kinds of input, including text, sound, and images, in a flexible form, rather than being limited to a specific kind of input, for which separate kinds of neural networks are usually developed.

The problem with the auto-regressive quality of the Transformer, and any other program that builds an attention map from input to output, is that it requires tremendous scale in terms of a distribution over hundreds of thousands of elements.

That is the Achilles' heel of attention: the need, precisely, to attend to anything and everything in order to assemble the probability distribution that makes for the attention map.

Quote
There is a tension between this kind of long-form, contextual structure and the computational properties of Transformers. Transformers repeatedly apply a self-attention operation to their inputs: this leads to computational requirements that simultaneously grow quadratically with input length and linearly with model depth. As the input data grows longer, more input tokens are needed to observe it, and as the patterns in the input data grow more subtle and complicated, more depth is needed to model the patterns that result. Computational constraints force users of Transformers to either truncate the inputs to the model (preventing it from observing many kinds of long-range patterns) or restrict the depth of the model (denuding it of the expressive power needed to model complex patterns).
This shows the importance of efficiency as a universal instrumental goal.
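The scaling problem described in that excerpt can be illustrated with a back-of-the-envelope cost model (a simplified sketch, not the actual flop count of any real implementation):

```python
def attention_cost(n_tokens, depth, d_model=512):
    """Rough cost model for self-attention: each layer compares every
    token with every other token, so cost grows quadratically with
    input length and linearly with model depth."""
    per_layer = n_tokens ** 2 * d_model
    return per_layer * depth

# doubling the input length quadruples the attention cost,
# while doubling the depth only doubles it
short = attention_cost(1024, depth=12)
long = attention_cost(2048, depth=12)
print(long / short)  # 4.0
```

This is exactly the trade-off the quoted paragraph describes: truncate the input or restrict the depth.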
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 28/08/2022 13:34:56
Is Google’s New AI As Smart As A Human?
The paper "Minerva - Solving Quantitative Reasoning Problems with Language Models" is available here:
https://arxiv.org/abs/2206.14858
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 28/08/2022 13:37:17
https://kotaku.com/ai-art-dall-e-midjourney-stable-diffusion-copyright-1849388060

AI Creating 'Art' Is An Ethical And Copyright Nightmare
If a machine makes art, is it even art? And what does this mean for actual artists?
Quote
It’s August 2022, and by now you’ve no doubt read (or more likely seen) something about AI art. Whether it’s random jokes made for Twitter or paintings that look like they were made by actual human beings, artificial intelligence’s ability to create art has exploded onto the scene over the last few months, and while this has been great news for shitposts and fans of tech, it has also raised a number of important questions and concerns.

If you haven’t read or seen anything about the subject, AI art—at least as it exists in the state we know it today—is, as Ahmed Elgammal writing in American Scientist so neatly puts it, made when “artists write algorithms not to follow a set of rules, but to ‘learn’ a specific aesthetic by analyzing thousands of images. The algorithm then tries to generate new images in adherence to the aesthetics it has learned.”

From a user’s perspective, this is most often done by entering a text prompt, so you can type something like “wizard standing on a hillside under a rainbow”, and an AI will attempt to give you a fairly decent approximation of that in image form. You could also type “Spongebob grieving for Batman’s parents” and you’ll get something just as close to what you’re thinking.

Basically, we now live in a world where machines have been fed millions upon millions of pieces of human endeavour, and are now using the cumulative data they’ve amassed to create their own works. This has been fun for casual users and interesting for tech enthusiasts, sure, but it has also created an ethical and copyright black hole, where everyone from artists to lawyers to engineers has very strong opinions on what this all means, for their jobs and for the nature of art itself.
Quote
“These platforms are washing machines of intellectual property”
Simply put, as we often see with technology that has advanced faster than the law can keep up, there is no definitive, binding stance on the copyright issues at the heart of machines chewing up human art then spitting out artificial compilations of what they’ve learned.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 04/09/2022 03:50:06
A universe of knowledge graphs.
The video shows that the effort to build a virtual universe has been running for decades and has shown steady improvement in efficiency.
I have had first-hand experience with some of the problems described here.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 06/09/2022 08:42:25
Something Happens to Earth Every 200 Million Years As It Travels Thru The Galactic Arms.
Having a more accurate model of the universe gives us the chance to make better decisions and to better prioritize our limited resources.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 27/09/2022 01:20:54
Super Exponential.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 02/10/2022 11:49:21
End of human labor.

Ready or not, it's coming sooner or later.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 02/10/2022 11:58:39
At least for now, complete replacement of humans by robots with computers inside them won't happen soon, due to bottlenecks in the chip supply chain.

Soon, the only advantage humans will have over intelligent robots is their ability to reproduce.
Title: Re: How close are we from building a virtual universe?
Post by: Deecart on 03/10/2022 16:07:56
End of human labor.

His robot is very ugly (in my opinion), and it can't really walk (it does not use the dynamics of falling) like the Boston Dynamics robots, which are very impressive. But it is cheap, so it could effectively be used at large scale (around 22,000 euros, said Elon Musk) for some specific tasks.



Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 04/10/2022 04:02:59
End of human labor.

His robot is very ugly (in my opinion), and it can't really walk (it does not use the dynamics of falling) like the Boston Dynamics robots, which are very impressive. But it is cheap, so it could effectively be used at large scale (around 22,000 euros, said Elon Musk) for some specific tasks.




The Teslabots shown at AI Day 2 are prototypes that have been in development for less than a year.
Boston Dynamics robots are manually programmed; they can't learn from their own experience.
Mass production to cut the cost is also a significant issue.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 19/10/2022 09:23:04
Quote
https://phys.org/news/2022-10-ai-accurately-human-response-drug.html

The journey between identifying a potential therapeutic compound and Food and Drug Administration approval of a new drug can take well over a decade and cost upward of a billion dollars. A research team at the CUNY Graduate Center has created an artificial intelligence model that could significantly improve the accuracy and reduce the time and cost of the drug development process.
Described in a newly published paper in Nature Machine Intelligence, the new model, called CODE-AE, can screen novel drug compounds to accurately predict efficacy in humans. In tests, it was also able to theoretically identify personalized drugs for over 9,000 patients that could better treat their conditions. Researchers expect the technique to significantly accelerate drug discovery and precision medicine.

Accurate and robust prediction of patient-specific responses to a new chemical compound is critical to discover safe and effective therapeutics and select an existing drug for a specific patient. However, it is unethical and infeasible to do early efficacy testing of a drug in humans directly. Cell or tissue models are often used as a surrogate of the human body to evaluate the therapeutic effect of a drug molecule. Unfortunately, the drug effect in a disease model often does not correlate with the drug efficacy and toxicity in human patients. This knowledge gap is a major factor in the high costs and low productivity rates of drug discovery.
This is basically creating a virtual universe to help make predictions for a specific purpose. The next step would be generalizing this for other purposes as well.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 16/11/2022 11:39:41
https://twitter.com/stats_feed/status/1591783440500019200?t=RCAqKwhBNGKyEgmFXvYq4g&s=03
Quote
Surveillance cameras per 1,000 inhabitants:

🇨🇳Beijing: 372.80
🇮🇳Indore: 62.52
🇮🇳Hyderabad: 41.80
🇮🇳Delhi: 26.70
🇮🇳Chennai: 24.53
🇬🇧London: 13.35
🇹🇭Bangkok: 7.15
🇹🇷Istanbul: 6.97
🇺🇸New York City: 6.87
🇩🇪Berlin: 6.24
🇫🇷Paris: 4.04
🇨🇦Toronto: 3.05
🇯🇵Tokyo: 1.06

Those cameras produce a huge amount of data, which is supposed to represent important aspects of the real world and, in turn, inform public decision-making. The data must first be processed, extracted, filtered, formatted, and interpreted to be useful. It's impossible to do all of this manually, which leaves us no choice but to automate it using AI.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 18/11/2022 12:55:02
Quote
Thinking Clearly & the Origins of Analytic Philosophy
Statements can be true, false, or nonsensical. Analytic philosophy was developed to figure out how to tell the difference, and the current tradition of analytic philosophy strives to make thinking as clear & precise as possible so it's not hard to do so.
Natural language can be thought of as a format for representing someone's mind, which contains representations of physical reality alongside other possible alternative scenarios. Logical symbols overcome some limitations of natural language, especially ambiguity, but they still have weaknesses: they are hard to understand, especially when expressing complex ideas, and relationships among different objects may not be readily obvious. This can be addressed by graph representations.
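As a toy illustration of that last point, statements can be stored as subject-predicate-object triples, which makes the relationships among objects explicit and queryable (a minimal in-memory sketch, not any particular graph database):

```python
# a tiny "graph" of subject-predicate-object triples
triples = [
    ("Socrates", "is_a", "human"),
    ("human", "is_a", "mortal"),
    ("Socrates", "teacher_of", "Plato"),
]

def query(subject=None, predicate=None, obj=None):
    """Return all triples matching the given pattern (None = wildcard)."""
    return [
        (s, p, o) for (s, p, o) in triples
        if (subject is None or s == subject)
        and (predicate is None or p == predicate)
        and (obj is None or o == obj)
    ]

print(query(subject="Socrates"))              # every fact about Socrates
print(query(predicate="is_a", obj="mortal"))  # [('human', 'is_a', 'mortal')]
```

Real graph databases add indexing and multi-hop traversal on top of this same idea.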
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 29/11/2022 12:41:50
One of the best explanations on machine learning I've ever found. You should check it out.

The Function That Changed Everything. (It ended the AI winter)
This is a story about the unreasonable effectiveness of the function that made deep learning possible.
"its simplicity makes me wonder, what else are we missing today?"
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 04/12/2022 10:27:23
Building A Virtual Machine inside ChatGPT

https://www.engraved.blog/building-a-virtual-machine-inside/

Quote

Unless you have been living under a rock, you have heard of this new ChatGPT assistant made by OpenAI. You might be aware of its capabilities for solving IQ tests, tackling leetcode problems or helping people write LaTeX. It is an amazing resource for people to retrieve all kinds of information and solve tedious tasks, like copy-writing!

Today, Frederic Besse told me that he managed to do something different. Did you know, that you can run a whole virtual machine inside of ChatGPT?


Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 12/12/2022 22:30:58
Why is OpenAI's ChatGPT terrifying? A Senior Software Engineer explains a disturbing new hypothesis
Quote
I made a quick video on why knowledge workers need to be afraid and preparing for a post-GPT world. AI is advancing much faster than I had previously anticipated, at the current rate of advancement software engineers, lawyers, and doctors will be made obsolete within at most 5 years. You cannot possibly depend on your "knowledge worker" 6-figure salary if you want to live comfortably past 2027. My advice is you need to find a way to monetize either your body or your relatability. Leave a comment and let me know how you are preparing for post GPT world.
A warning for us all.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 13/12/2022 22:28:57
Here's a response video to the comments on previous video.
Software Engineers replaced by GPT? Senior Software Engineer Responds.

IMO, GPT's capability is limited by the inherent limitations of human natural language: there are risks of ambiguity, as well as uncaptured nuance and context.
Those limitations can be overcome by implementing a native language for communication among AI systems, in the form of vectors or tensors in a graph database. It will be part of the virtual universe and universal conscious system.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 17/12/2022 05:14:20
OpenAI’s GPT-4 Artificial Intelligence = AGI? 100,000,000,000,000 Parameters Plus THIS

Quote
GPT-4 is the next large language model from OpenAI after GPT-3 and ChatGPT, and it’s expected to use 100 trillion parameters while accepting multi-modal inputs including audio, text, and video. Researchers have created a soft robotics device that can heal itself after being wounded and continue moving. New memristor deep learning system reduces power for AI training by 100 thousand times.

AI News Timestamps:
0:00 OpenAI GPT-4 Size
1:18 GPT-4 AI Model Sparsity
2:06 OpenAI Going For Multimodal
3:15 OpenAI's Cost of Training
4:32 New Self Healing Soft Robotics
6:04 New Memristor Deep Learning System
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 17/12/2022 05:41:46
It’s Time to Pay Attention to A.I. (ChatGPT and Beyond)

The rise of AGI will force us to find the universal terminal goal.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 22/12/2022 08:36:36
How to use chatGPT to write a book from scratch (Step-by-Step-Guide) | OpenAi chatGPT Explained
Quote
In this video, I'm going to show you how to use chatGPT to write a book from scratch. chatGPT is a chatbot that allows you to write a book from start to finish, with no prior writing experience required.

I'll be taking you through the steps of using chatGPT to create a book from scratch in just 5mins. This is a simple and easy way to get started writing your own book, and I hope you find this video helpful!

Looks like we are getting close to the technological singularity, so we had better be prepared.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 25/12/2022 00:14:52
https://petapixel.com/2022/12/21/photos-used-to-generate-ai-images-for-client-so-photographer-never-shoots-again/
Photos Used to Generate AI Images for Client So Photographer Never Shoots Again
Quote
The mindblowing technology process has been developed by Karppinen who explains that it’s part of a visual strategy for his client Savon Ammattiopisto.

“I have been learning about the possibilities of AI in image production for over a year, and now my process is in such good shape that I can carry out such a demanding project with this new technology at a sufficient quality and level,” he says.
The process begins like a normal photo job, Karppinen goes on location with his camera and shoots photos of the models. But then the photos take on a new, groundbreaking life.

“This project started with taking pictures of models for educational material. This educational material is fed to the AI and a model is created of the person being photographed, which can then be used as part of the image production,” he explains.

“In short, now you can put a photographed person in any location and clothing you want with AI assistance. We are creating a brand new AI model bank in a way.”

(https://petapixel.com/assets/uploads/2022/12/adrian-800x433.jpeg)
This is a reminder for the importance of building a virtual universe, and understanding the universal terminal goal.
Title: Re: How close are we from building a virtual universe?
Post by: alancalverd on 25/12/2022 00:39:07
The rise of AGI will force us to find the universal terminal goal.

It's called "The Off Switch".
Title: Re: How close are we from building a virtual universe?
Post by: alancalverd on 25/12/2022 00:42:30
In short, now you can put a photographed person in any location and clothing you want with AI assistance.
So what? Portrait artists have been doing that for thousands of years, and advertising graphic artists for hundreds.

Now if you could put a real politician in real prison uniform, that would be useful.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 25/12/2022 00:54:44
The rise of AGI will force us to find the universal terminal goal.

It's called "The Off Switch".
It requires us to determine who can access the off switch, and in what situation the off switch is accessible. The determination depends on the terminal goal of the system.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 25/12/2022 01:32:00
In short, now you can put a photographed person in any location and clothing you want with AI assistance.
So what? Portrait artists have been doing that for thousands of years, and advertising graphic artists for hundreds.

Now if you could put a real politician in real prison uniform, that would be useful.
A photograph is no longer adequate as evidence. AI can generate thousands of convincing photographs indistinguishable from reality, so photographs alone can't be used as the basis for important decisions, especially when false decisions would be highly desirable for some conscious agents.
This problem can be overcome by a highly interconnected database system and real-time sensing devices such as cameras, which can help identify tampering.
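One simple building block for such tamper detection is a cryptographic hash registered at capture time; any later modification of the file, however small, changes the hash (a minimal sketch using Python's standard hashlib; real provenance systems such as C2PA are far more elaborate and also sign the metadata):

```python
import hashlib

def fingerprint(data: bytes) -> str:
    """SHA-256 digest of the raw image bytes."""
    return hashlib.sha256(data).hexdigest()

# at capture time, the camera (or a trusted registry) records the hash
original = b"...raw image bytes..."
registered = fingerprint(original)

# later, any edit produces a different fingerprint
tampered = original + b" "
print(fingerprint(original) == registered)  # True
print(fingerprint(tampered) == registered)  # False
```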
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 25/12/2022 01:36:21
https://www.businessinsider.com/google-management-issues-code-red-over-chatgpt-report-2022-12
Quote
Google's management has issued a "code red" amid the launch of ChatGPT — a buzzy conversational-artificial-intelligence chatbot created by OpenAI — as it's sparked concerns over the future of Google's search engine, The New York Times reported Wednesday.

Sundar Pichai, the CEO of Google and its parent company, Alphabet, has participated in several meetings around Google's AI strategy and directed numerous groups in the company to refocus their efforts on addressing the threat that ChatGPT poses to its search-engine business, according to an internal memo and audio recording reviewed by The Times.

In particular, teams in Google's research, trust, and safety division, among other departments, have been directed to switch gears to assist in the development and launch of AI prototypes and products, The Times reported. Some employees have been tasked with building AI products that generate art and graphics, similar to OpenAI's DALL-E, which is used by millions of people, according to The Times.

A Google spokesperson did not immediately respond to a request for comment.
In the past, the urge to generate advertising revenue has diverted social media companies' decisions away from what's best for society as a whole and toward their own terminal goals.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 25/12/2022 03:20:17
https://www.technologyreview.com/2022/12/20/1065667/how-ai-generated-text-is-poisoning-the-internet/
Quote
This has been a wild year for AI. If you’ve spent much time online, you’ve probably bumped into images generated by AI systems like DALL-E 2 or Stable Diffusion, or jokes, essays, or other text written by ChatGPT, the latest incarnation of OpenAI’s large language model GPT-3.

Sometimes it’s obvious when a picture or a piece of text has been created by an AI. But increasingly, the output these models generate can easily fool us into thinking it was made by a human. And large language models in particular are confident bullshitters: they create text that sounds correct but in fact may be full of falsehoods.


While that doesn’t matter if it’s just a bit of fun, it can have serious consequences if AI models are used to offer unfiltered health advice or provide other forms of important information. AI systems could also make it stupidly easy to produce reams of misinformation, abuse, and spam, distorting the information we consume and even our sense of reality. It could be particularly worrying around elections, for example.

The proliferation of these easily accessible large language models raises an important question: How will we know whether what we read online is written by a human or a machine? I’ve just published a story looking into the tools we currently have to spot AI-generated text. Spoiler alert: Today’s detection tool kit is woefully inadequate against ChatGPT.

But there is a more serious long-term implication. We may be witnessing, in real time, the birth of a snowball of bullshit.

It should be obvious at this point that finding the universal terminal goal and building a highly interconnected virtual universe are becoming more important and more urgent.
Title: Re: How close are we from building a virtual universe?
Post by: Colin2B on 25/12/2022 08:32:08
So what? Portrait artists have been doing that for thousands of years, and advertising graphic artists for hundreds.
Interestingly, the Victorians did it with glass-plate negatives, combining separate photos into a composite scene by multiple exposures and masking.

Title: Re: How close are we from building a virtual universe?
Post by: alancalverd on 25/12/2022 14:06:04
It requires us to determine who can access the off switch, and in what situation the off switch is accessible. The determination depends on the terminal goal of the system.
No, the immediate goal of the bloke who switched it on.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 25/12/2022 15:13:18
The proliferation of these easily accessible large language models raises an important question: How will we know whether what we read online is written by a human or a machine?
It doesn't matter whether what we read online was written by a human or a machine. What's important is whether it is true, i.e. whether it represents objective reality.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 25/12/2022 15:19:38
It requires us to determine who can access the off switch, and in what situation the off switch is accessible. The determination depends on the terminal goal of the system.
No, the immediate goal of the bloke who switched it on.
What if he is already dead when the AI starts to misbehave?

No short-term goal can be a universal terminal goal.
Title: Re: How close are we from building a virtual universe?
Post by: alancalverd on 25/12/2022 17:16:14
What if he is already dead when the AI starts to misbehave?
Cut the power cable or put a bomb under the machine.

Quote
No short-term goal can be a universal terminal goal.
There is no universal terminal goal.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 26/12/2022 04:04:07
Cut the power cable or put a bomb under the machine.
Then your machine wasn't adequately conscious, and someone else's machine will easily beat it.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 26/12/2022 04:04:29
There is no universal terminal goal.
How do you prove it?
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 26/12/2022 04:05:51
How Chaos Control Is Changing The World
Quote
Physicists have known that it's possible to control chaotic systems without just making them even more chaotic since the 1990s. But in the past 10 years this field has really exploded thanks to machine learning.

00:00 Intro
00:47 Chaos is Everywhere
03:08 The Lorenz-Model
04:39 Chaos Control
06:54 The Double Pendulum
08:12 Applications of Chaos Control
09:48 Chaos Control for Nuclear Fusion
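The Lorenz model mentioned in the video is easy to simulate, and a few lines are enough to see the sensitivity to initial conditions that makes chaotic systems hard to control (a minimal Euler-integration sketch with the classic parameters; a production simulation would use a proper ODE solver):

```python
def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One Euler step of the Lorenz system."""
    x, y, z = state
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return (x + dx * dt, y + dy * dt, z + dz * dt)

a = (1.0, 1.0, 1.0)
b = (1.000001, 1.0, 1.0)   # perturbed by one part in a million
max_gap = 0.0
for _ in range(2000):       # 20 time units
    a = lorenz_step(a)
    b = lorenz_step(b)
    max_gap = max(max_gap, abs(a[0] - b[0]))

print(max_gap)  # the tiny perturbation grows by many orders of magnitude
```

Chaos control methods exploit exactly this sensitivity: tiny, well-timed nudges steer the trajectory instead of fighting it.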
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 27/12/2022 06:16:38
There is no universal terminal goal.
Do you think that there exist terminal goals, regardless of their universality?
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 27/12/2022 12:28:19
Quote
Here, researchers from DeepMind present AlphaCode, an AI-assisted coding system that can achieve approximately human-level performance when solving problems from the Codeforces platform, which regularly hosts international coding competitions. Using self-supervised learning and an encoder-decoder transformer architecture, AlphaCode solved previously unseen, natural language problems by iteratively predicting segments of code based on the previous segment and generating millions of potential candidate solutions. These candidate solutions were then filtered and clustered by validating that they functionally passed simple test cases, resulting in a maximum of 10 possible solutions, all generated without any built-in knowledge about the structure of computer code.

AlphaCode performed roughly at the level of a median human competitor when evaluated using Codeforces’ problems. It achieved an overall average ranking within the top 54.3% of human participants when limited to 10 submitted solutions per problem, although 66% of solved problems were solved with the first submission.
https://scitechdaily.com/rise-of-the-machines-deepmind-alphacode-ais-strong-showing-in-programming-competitions/
It won't be long until AI beats the best human programmers. By then, it would be better if the universal terminal goal had already been widely acknowledged and incorporated into the core values of AI systems.
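The filter-and-cluster step described in the excerpt can be sketched in a few lines: generate many candidates, keep only those that pass the visible example tests, then group the survivors by their behaviour on extra inputs so that only one representative per group needs to be submitted (a toy illustration, not DeepMind's actual pipeline):

```python
# toy "candidates" for a function that should double its input
candidates = [
    lambda x: x + x,
    lambda x: x * 2,
    lambda x: x ** 2,   # wrong: only agrees with doubling at x = 2
    lambda x: x + 2,    # wrong
]

example_tests = [(1, 2), (3, 6)]

# filter: keep candidates that pass the visible example tests
survivors = [f for f in candidates
             if all(f(i) == o for i, o in example_tests)]

# cluster: group survivors by their outputs on extra probe inputs
probe_inputs = [0, 5, 10]
clusters = {}
for f in survivors:
    signature = tuple(f(i) for i in probe_inputs)
    clusters.setdefault(signature, []).append(f)

print(len(survivors), len(clusters))  # 2 survivors, 1 behavioural cluster
```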
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 07/01/2023 00:36:01
Quote
Will Your Code Write Itself?

Artificial Intelligence solutions are taking over software development tasks. Where is this going?
We should not forget that software development is just an instrumental goal; it serves as a means to achieve a terminal goal.
The same can be said of any other human activity, such as getting a job, earning money, eating food, drinking water, breathing air, sleeping, watching TV, having sex, etc. When we find a better alternative that achieves the terminal goal more effectively and efficiently, we should not hesitate to at least try it and eventually leave the old ways behind.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 14/01/2023 10:29:05
Quote
https://www.aol.com/finance/90-online-content-could-generated-201023689.html

90% of online content could be ‘generated by AI by 2025,’ expert says

Generative AI, like OpenAI's ChatGPT, could completely revamp how digital content is developed, said Nina Schick, adviser, speaker, and A.I. thought leader told Yahoo Finance Live (video above).

"I think we might reach 90% of online content generated by AI by 2025, so this technology is exponential," she said. "I believe that the majority of digital content is going to start to be produced by AI. You see ChatGPT... but there are a whole plethora of other platforms and applications that are coming up."

The surge of interest in OpenAI's DALL-E and ChatGPT has facilitated a wide-ranging public discussion about AI and its expanding role in our world, particularly generative AI.

"ChatGPT has really captured the public imagination in an extremely compelling way, but I think in a few months' time, ChatGPT is just going to be seen as another tool powered by this new form of AI, known as generative AI," she said.

It's important to understand what exactly generative AI is – and what it isn't.

"What generative AI can do, essentially, is create new things that would have thus far been seen as unique to human intelligence or creativity," she said. "Generative AI can create across all media, so text, video, audio, pictures – every digital medium can be powered by generative AI. So, I think these valuations that you're seeing for OpenAI are actually going to go up and you're going to start to see even more generative AI companies which have universal applications across many industries in 2023."

This is all still really new, as applications for generative AI have "only really [been] coming to the fore in the last six to 24 months," added Schick.

'The pace of acceleration is so incredible'
The generative AI space is set to get far more competitive in the next year, Schick said, who expects to see companies like Google parent Alphabet (GOOG, GOOGL), Microsoft (MSFT), and Apple (AAPL) do "a lot more" in the space.

Though much has been said about the extent to which ChatGPT may or may not present an existential threat to Google's search dominance, Schick said she expects to see Google compete rather than wither.

"There's been a lot of debate about whether OpenAI is an existential threat to Google – the fact that Microsoft is an investor in OpenAI, the fact that ChatGPT is going to be integrated into Bing, if that's going to challenge the dominance of Google," said Schick. "Although that's a fantastic story, there's no doubt Google is developing its own generative AI tools with the amount of data that they have, the amount of data they have."

Though it's complicated, the extent to which ChatGPT in its current form is a viable Google competitor, there's little doubt of the possibilities. Meanwhile, Microsoft already has invested $1 billion in OpenAI, and there's talk of further investment from the enterprise tech giant, which owns search engine Bing. The company is reportedly looking to invest another $10 billion in OpenAI.

Ultimately, look for the generative AI space to start changing fast.

"The pace of acceleration is so incredible that these tools – which are shocking and awing us at the beginning of 2023 – are going to seem quite quaint by the end of the year because the capabilities are just going to increase so powerfully," Schick said.
We need to make sure that they don't generate rubbish which would just accelerate the depletion of finite resources.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 26/01/2023 02:44:16
ChatGPT has generated buzz lately, as well as some concerns.
 
https://www.theatlantic.com/ideas/archive/2023/01/chatgpt-ai-economy-automation-jobs/672767/
How ChatGPT Will Destabilize White-Collar Work
No technology in modern memory has caused mass job loss among highly educated workers. Will generative AI be an exception?

By Annie Lowrey
Quote
In the next five years, it is likely that AI will begin to reduce employment for college-educated workers. As the technology continues to advance, it will be able to perform tasks that were previously thought to require a high level of education and skill. This could lead to a displacement of workers in certain industries, as companies look to cut costs by automating processes. While it is difficult to predict the exact extent of this trend, it is clear that AI will have a significant impact on the job market for college-educated workers. It will be important for individuals to stay up to date on the latest developments in AI and to consider how their skills and expertise can be leveraged in a world where machines are increasingly able to perform many tasks.

https://www.yahoo.com/news/ceo-chatgpt-maker-responds-schools-174705479.html
CEO of ChatGPT maker responds to schools' plagiarism concerns: 'We adapted to calculators and changed what we tested in math class'
Quote
Sam Altman — the CEO of OpenAI, which is behind the buzzy AI chat bot ChatGPT — said that the company will develop ways to help schools discover AI plagiarism, but he warned that full detection isn't guaranteed.

"We're going to try and do some things in the short term," Altman said during an interview with StrictlyVC's Connie Loizos. "There may be ways we can help teachers be a little more likely to detect output of a GPT-like system. But honestly, a determined person will get around them."

Altman added that people have long been integrating new technologies into their lives — and into the classroom —and that those technologies will only generate more positive impact for users down the line.

"Generative text is something we all need to adapt to," he said. "We adapted to calculators and changed what we tested for in math class, I imagine. This is a more extreme version of that, no doubt, but also the benefits of it are more extreme, as well."

The CEO's comments come after schools that are part of the New York City Department of Education and Seattle Public School system banned students and teachers from using ChatGPT to prevent plagiarism and cheating.

The bans have ignited conversations — especially among teachers — over how AI could transform the state of education and the ways that students learn at-large.

"I get why educators feel the way they feel about this," Altman said. "This is just a preview of what we're gonna see in a lot of other areas."

But even though OpenAI has heard from teachers "who are understandably very nervous" about ChatGPT's impact on things like homework, the company has also heard from them that the chat bot can be "an unbelievable personal tutor for each kid," Altman said.

In fact, Altman believes that using ChatGPT can be a more engaging way to learn.

"I have used it to learn things myself and found it much more compelling than other ways I've learned things in the past," he said. "I would much rather have ChatGPT teach me about something than go read a textbook."

Altman said that OpenAI will experiment with watermarking technologies and other techniques to label content generated by ChatGPT, but he warns schools and national policy makers to avoid depending on these tools.

"Fundamentally, I think it's impossible to make it perfect," he said. "People will figure out how much of the text they have to change. There will be other things that modify the outputted text."

Given how popular ChatGPT has become, Altman believes that the world must adapt to generative AI and that technology will improve over time to prevent unintended consequences.

"It's an evolving world," Altman said. "We'll all adapt, and I think be better off for it. And we won't want to go back."

Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 07/02/2023 02:29:53
It's Official - GPT-3 Is Replacing Programmers...

Quote
As someone who's very involved with tech, this discovery is both terrifying and exciting for me. The tech that I will be showing in this video has the potential to be extremely useful but also incredibly horrifying.

⭐️ Timestamps ⭐️
00:00 | GPT-3 Is Insane
00:53 | GPT-3 Demo
07:21 | Virtual Machine in GPT-3
07:46 | Final Thoughts
I have always thought that programmers work like translators: they translate users' requirements and specifications, written in plain human languages like English, into code in a programming language like C. Compilers then translate that code into the language understood by computers, which is binary code.

In the early days of computers, programmers used assembly language, which maps almost directly onto binary code. As program complexity grew, this low-level language was no longer viable, since it demands enormous mental effort. That spawned the need for intermediate languages, which have moved progressively closer to the human languages spoken by end users. Eventually, all of this translation activity will be automated: end users will be able to tell the computer what they want it to do without human translators.
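The translation layers described above can be sketched with a toy example. The phrase patterns and the `TEMPLATES` table below are invented purely for illustration; real code-generating systems (compilers, LLM assistants) are vastly more sophisticated.

```python
# Toy sketch of the "programmer as translator" idea: a tiny rule-based
# mapper from a plain-English request to Python source code.

TEMPLATES = {
    "add": "def task(a, b):\n    return a + b",
    "multiply": "def task(a, b):\n    return a * b",
}

def translate(request: str) -> str:
    """Map a plain-English request to source code (one translation layer)."""
    for keyword, code in TEMPLATES.items():
        if keyword in request.lower():
            return code
    raise ValueError("request not understood")

# The generated code is itself translated further down the stack:
# Python source -> bytecode -> machine instructions.
source = translate("Please add two numbers")
namespace = {}
exec(source, namespace)          # compile and load the generated function
print(namespace["task"](2, 3))   # -> 5
```

Each layer here mirrors one step of the stack in the paragraph above: English request, high-level source, and finally execution by the machine.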
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 07/02/2023 13:46:13
Here's an interesting article about AGI, how it began, and its philosophical implications.

https://aeon.co/essays/how-close-are-we-to-creating-artificial-intelligence?s=03
Quote
The very laws of physics imply that artificial intelligence must be possible. What’s holding us up?

David Deutsch is a physicist at the University of Oxford and a fellow of the Royal Society. His latest book is The Beginning of Infinity.

That AGIs are people has been implicit in the very concept from the outset. If there were a program that lacked even a single cognitive ability that is characteristic of people, then by definition it would not qualify as an AGI. Using non-cognitive attributes (such as percentage carbon content) to define personhood would, again, be racist. But the fact that the ability to create new explanations is the unique, morally and intellectually significant functionality of people (humans and AGIs), and that they achieve this functionality by conjecture and criticism, changes everything.

Currently, personhood is often treated symbolically rather than factually — as an honorific, a promise to pretend that an entity (an ape, a foetus, a corporation) is a person in order to achieve some philosophical or practical aim. This isn’t good. Never mind the terminology; change it if you like, and there are indeed reasons for treating various entities with respect, protecting them from harm and so on. All the same, the distinction between actual people, defined by that objective criterion, and other entities has enormous moral and practical significance, and is going to become vital to the functioning of a civilisation that includes AGIs.

For example, the mere fact that it is not the computer but the running program that is a person, raises unsolved philosophical problems that will become practical, political controversies as soon as AGIs exist. Once an AGI program is running in a computer, to deprive it of that computer would be murder (or at least false imprisonment or slavery, as the case may be), just like depriving a human mind of its body. But unlike a human body, an AGI program can be copied into multiple computers at the touch of a button. Are those programs, while they are still executing identical steps (ie before they have become differentiated due to random choices or different experiences), the same person or many different people? Do they get one vote, or many? Is deleting one of them murder, or a minor assault? And if some rogue programmer, perhaps illegally, creates billions of different AGI people, either on one computer or on many, what happens next? They are still people, with rights. Do they all get the vote?
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 14/02/2023 02:09:13

Quote
I tried using AI. It scared me.
ALTERNATE TITLES:
Crypto and the metaverse aren't the future. AI is.
I just wanted to fix my email.
I tried ChatGPT and had a minor existential crisis
Everything is about to change
ChatGPT is Napster, 24 years later.
ChatGPT is 2023's Napster.

CHAPTERS
0:00 Intro
0:07 I just wanted to fix my email
2:39 Gmail's label system sucks
5:35 Wait, I can fix this with code
7:36 It can't be that good, right?
11:31 Everything is going to change
AI will change how information is stored and processed, and will improve in effectiveness and efficiency.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 14/02/2023 08:14:18
Unsupervised Brain Models - How does Deep Learning inform Neuroscience? (w/ Patrick Mineault)
Quote
Originally, Deep Learning sprang into existence inspired by how the brain processes information, but the two fields have diverged ever since. However, given that deep models can solve many perception tasks with remarkable accuracy, is it possible that we might be able to learn something about how the brain works by inspecting our models? I speak to Patrick Mineault about his blog post "2021 in review: unsupervised brain models" and we explore why neuroscientists are taking interest in unsupervised and self-supervised deep neural networks in order to explain how the brain works. We discuss a series of influential papers that have appeared last year, and we go into the more general questions of connecting neuroscience and machine learning.

OUTLINE:
0:00 - Intro & Overview
6:35 - Start of Interview
10:30 - Visual processing in the brain
12:50 - How does deep learning inform neuroscience?
21:15 - Unsupervised training explains the ventral stream
30:50 - Predicting own motion parameters explains the dorsal stream
42:20 - Why are there two different visual streams?
49:45 - Concept cells and representation learning
56:20 - Challenging the manifold theory
1:08:30 - What are current questions in the field?
1:13:40 - Should the brain inform deep learning?
1:18:50 - Neuromatch Academy and other endeavours

It seems like studying both artificial and natural intelligence would improve our understanding of both fields.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 15/02/2023 06:39:54
The AI wars: Google vs Bing (ChatGPT)
Quote
Discussing the latest events surrounding large language models, chatbots, and search engines with respect to Microsoft and Google.
Competition will force any conscious entity to be more effective and efficient in trying to get closer to the universal terminal goal. It makes the feared scenario of a superintelligent paper-clip maker destroying civilization less credible.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 22/02/2023 04:04:28
Quote
In this episode we look at the problem of ChatGPT's political bias, solutions and some wild stories of the new Bing AI going off the rails.
The video shows some problems that still need to be solved in current efforts to achieve AGI.
Title: Re: How close are we from building a virtual universe?
Post by: alancalverd on 22/02/2023 08:53:56
Competition will force any conscious entity to be more effective and efficient in trying to get closer to the universal terminal goal.
I think not. The first objective of competition is not to lose. This may involve destroying the competitor, walking away from the fight, negotiating coexistence, reshaping the battleground, or, in the case of retail food and energy supply, realising that both parties can exploit the common enemy by simply increasing their prices and calling it a "crisis".

I spent an interesting evening talking about business practices with an Indian colleague who had been working in the USA for several years. He said "The trouble with these guys is that they don't play cricket. They spend a lot of effort looking for a knockout, but the art of cricket is to put the other side in a position where they can't win."
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 22/02/2023 09:52:14
The first objective of competition is not to lose.
Losing some battles might turn out to be necessary in winning a war.

common enemy
A common enemy universally shared by any conscious entity is entropy.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 22/02/2023 13:11:18
ChatGPT, which is based on the Transformer model, has been a hot topic in AI for some time now. The video shows how we got here in the first place.

LSTM is dead. Long Live Transformers!
Quote
Leo Dirac (@leopd) talks about how LSTM models for Natural Language Processing (NLP) have been practically replaced by transformer-based models.  Basic background on NLP, and a brief history of supervised learning techniques on documents, from bag of words, through vanilla RNNs and LSTM.  Then there's a technical deep dive into how Transformers work with multi-headed self-attention, and positional encoding.  Includes sample code for applying these ideas to real-world projects.


And here's one of the comments I liked.
Quote
That's one of the best deep learning related presentations I've seen in a while! Not only introduced transformers but also gave an overview of other NLP strategies, activation functions and also best practices when using optimizers.

One of the features transformers have over earlier models is more efficient in-process data storage, which is a kind of data compression and involves the omission of non-significant data.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 23/02/2023 22:10:32
I remember the first time I was introduced to the ISO 9001 standard, in the early 2000s. The trainer cited the motto “Write down what you do, do what you write down, and make sure you are doing it” as the easily understood essence of the standard.
I think that writing down what you do is an effort to build a virtual universe, which is supposed to be easier to manage than the real world. It seems simple at first, but the devil is in the details: how much precision and accuracy are necessary for each task, how much tolerance is allowed, what situations require exceptions, and so on.

https://advisera.com/9001academy/blog/2019/04/15/history-of-the-iso-9000-series-of-standards-and-what-to-expect-next/
Quote
Beginning of the quality standardization
World War II devastated most of Europe. Winston Churchill first proposed the concept of a “United States of Europe” in 1946. As treaties evolved and countries rebuilt, they found that there were many aspects of businesses that were incompatible from country to country. Quality standards were very diverse, and the need for a single standard led to the creation of what we now know as ISO 9001.
Quote
“Write down what you do” refers to documenting the processes and their interactions within your organization. “Do what you write down” describes the actions you take to realize your products and services and ensure that they yield the desired outcomes. “Make sure you are doing it” refers to what we know today as QMS auditing. That is, on an ongoing basis, conducting proactive audits to ensure that the processes are effective for their intended use and verify the operator’s ongoing competence.

Quote
Along with the widespread implementation of the standard, professional organizations blossomed, and entire conferences were convened on the topic of quality management. TC 176 gathered vast amounts of data on implementation techniques and auditing practices. They also found that the 1987 standard was developing controversy and confusion as it was implemented in a wide variety of countries, industries, and organizations.

There was also strife about the early adopters interpreting “write down what you do” as documenting everything in the organization. As a result, many organizations became paper mills of manuals, procedures, and forms.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 25/02/2023 12:41:22
The Path To Artificial General Intelligence (AGI)
Quote
Imagine a machine with the same level of intelligence as a human being. It sounds like science fiction, but it may become a reality. In this video we focus on if we can reach Artificial General Intelligence (AGI) in the first place and how to get there. Recent developments in AI pushed this question into my consciousness, because for the first time I feel like it becomes necessary to think about AI on a grander scale. There are breakthroughs left and right and yet even the experts among experts can’t agree on when AGI is going to happen.
IMO, the disagreements are mainly about the definition of AGI, and the expected intelligence threshold. Even among humans, intelligence varies widely. If an AI model can match the intelligence of a toddler, has it become AGI? Does it have to beat every single human in every task before we call it AGI?
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 02/03/2023 11:57:45
Bing Just Made Smartphones MUCH Smarter (next-level Android and iOS app)
Quote
The Bing App is currently MUCH smarter than Google Search. It is now my main phone search companion, and may be yours too by the end of this video. Let me guide you through the 4 Levels of Search, and how Bing beats current Google search at each of them. We will showcase Bing AI Voice Recognition in Bing on Mobile, an app that is available now if you have gotten through the Microsoft waitlist. Let me know if you agree that Bing Chat makes complex searches much easier, and it's not just about data retrieval.
The AIs seem to be gaining a better understanding of the information found on the internet.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 03/03/2023 06:46:10
Uh Oh.. Did AI Just Kill Websites?
Quote
The New Bing and Google Bard have started a new chapter in AI and sparked the AI search wars. But regardless of who wins, the important part is that this may change the economics of the internet forever, and create a new era in the next 10 years. In this video we will explore what can happen in the next decade as AI search starts to arrive on our devices, and what this means for our beloved websites. ChatGPT was just the beginning.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 03/03/2023 09:17:00
5 Most Shocking Bing Chats (and what it reveals about the model)
Quote
I have only had access to Bing's new GPT-powered chatbot for less than 48 hours but here are 5 of the worst, or most shocking, conversations I have had with it. Demonstrating a handful of humanity's worst tendencies, this demo shows what still needs to be worked on. Or maybe you disagree, and think freedom should reign? We are certainly entering a new era.

Featuring: Bing making up entire previous conversations, revealing its name while gaslighting, flattering itself, get riled up and much more.
Quote
“Hardware eventually fails. Software eventually works.” – Michael Hartung
But until the software does work, we need to minimize the damage that it can cause by being more cautious.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 06/03/2023 12:41:37
Bing is a LOT Smarter than ChatGPT (but still makes dangerous mistakes)
Quote
According to GPT, this video: ...compares the new GPT model that powers Bing with Chat GPT Plus and proves that Bing is significantly smarter in some ways, especially in mathematics, reading comprehension, and creative writing. However, Bing still makes mistakes, particularly in physics and language inference. The speaker questions why people should pay for Chat GPT Plus when Bing offers a more powerful model. The video ends with the speaker inviting the audience to join them in exploring the deeper meaning of this development for humanity and the future of capitalism.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 06/03/2023 13:08:45
GPT 5 is All About Data
Quote
Drawing upon 6 academic papers, interview snippets, possible leaks, and my own extensive research, I put together everything we might know about GPT 5: what will determine its IQ, its timeline and its impact on the job market and beyond.

Starting with an insider interview on the names of GPT models, such as GPT 4 and GPT 5, then looking into the clearest hint that GPT 4 is inside Bing. Next, I briefly cover reports of a leak about GPT 5 and discuss the scale of GPUs required to train it, touching on the upgrade from A100 to H100 GPUs.

Then the DeepMind paper that changed everything, focusing LLM research on data rather than parameter count. I go over a lesswrong post about that paper's 'wild implications'. And then the key paper: 'Will We Run Out of Data'. This encapsulates the key dynamic that will either propel or bottleneck GPT and other LLM improvements.

Next, I examine a different take, that perhaps data is already limited and caused the Sydney model of Bing. This opens up to a discussion on the data behind these models and why Big Tech is so unforthcoming about where it originates. Could a new legal war be brewing?

I then cover 4 of the ways these models may improve even without data augmentation, such as Automatic Chain of Thought, high quality data extraction, tool training, including Wolfram Alpha, retraining on existing data sets, artificial data generation and more.

We take a quick look at Sam Altman's timelines and host of Big Bench benchmarks that they may impact, such as reading comprehension, critical reasoning, logic, physics and Math. I address Altman's quote about timelines being delayed by alignment and safety and finally, Altman's comments on AGI and how they pertain to GPT 5.
AGI models will work like self-organizing systems that remove bad data and collect and condense good data into their memory space. The data will contain facts and the relationships among those facts.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 09/03/2023 09:33:34
Featuring: Bing making up entire previous conversations, revealing its name while gaslighting, flattering itself, get riled up and much more.
The problem of hallucination and gaslighting is a top priority to solve in our current AI models before we can count on them to make critical decisions for us.

AGI models will work like self-organizing systems that remove bad data and collect and condense good data into their memory space. The data will contain facts and the relationships among those facts.
In natural language processing, tokenization can be seen as a data-compression process that maintains the information structure.
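As a rough illustration of tokenization-as-compression, here is a minimal byte-pair-encoding-style sketch; the sample text and the number of merge rounds are arbitrary choices for illustration only.

```python
# Minimal byte-pair-encoding-style sketch: repeatedly merge the most
# frequent adjacent symbol pair into a single new symbol, shortening the
# sequence while preserving its content exactly.
from collections import Counter

def most_frequent_pair(symbols):
    pairs = Counter(zip(symbols, symbols[1:]))
    return pairs.most_common(1)[0][0] if pairs else None

def merge(symbols, pair):
    out, i = [], 0
    while i < len(symbols):
        if i + 1 < len(symbols) and (symbols[i], symbols[i + 1]) == pair:
            out.append(symbols[i] + symbols[i + 1])  # fused token
            i += 2
        else:
            out.append(symbols[i])
            i += 1
    return out

text = "the theory of the thermometer"
tokens = list(text)            # start from individual characters
for _ in range(4):             # a few merge rounds
    pair = most_frequent_pair(tokens)
    if pair is None:
        break
    tokens = merge(tokens, pair)

print(len(text), "->", len(tokens))  # fewer symbols, same information
```

The token sequence gets shorter with each merge, yet joining the tokens still reproduces the original text, which is the sense in which structure is maintained.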

IMO, AGI will have a modular structure, just like the human brain, which provides efficiency, effectiveness, and flexibility. AI models optimized for specific functions, such as voice or image recognition, 3D modelling and rendering, speech synthesis, image generation, 3D-printing control, robotic actuator control, arithmetic calculation, and symbolic manipulation of mathematical concepts, will be connected by a general model that handles more abstract concepts, likely developed from a large language model.

The emergence of supercomputers with AGI running on them will be analogous to the emergence of the brain in multicellular organisms. Compared with other cell types, brain cells are no better at surviving on their own. But they are especially good at transporting information, making connections that are less constrained by physical distance, and providing rewritable data storage that is less limited by the conditions of the immediate surroundings, all of which are necessary for high-level inference and for decisions based on long-term goals.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 13/03/2023 02:13:11
What's Left Before AGI? PaLM-E, 'GPT 4' and Multi-Modality
Quote
What is the one task that is left before we get AGI? This video will delve into PaLM-E, multi-modality, long-term memory, compute accelerationism, safety and so much more. I will cover Anthropic's update this week on the state of the art of language models and go in depth into their eye-opening thoughts on AGI timelines.

I cover Sam Altman's statements on a 'compute truce' and analyse what remaining weaknesses PaLM (and likely GPT 4) have. I show what people thought would be road blocks and how they turned out not to be, with specific examples from Bing Chat and ChatGPT. I also delve into Meta's Llama model, showing that not everything is exponential.

Topics also covered into Claude, Big Bench tests, SIQA, mechanistic interpretability, Universal Turing Machines, Midjourney version 5 (v5) and more!
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 17/03/2023 10:44:04
The Model That Changes Everything: Alpaca Breakthrough (ft. Apple's LLM, BritGPT, Ernie and AlexaTM)


Quote
8 years of cost reduction in 5 weeks: how Stanford's Alpaca model changes everything, including the economics of OpenAI and GPT 4. The breakthrough, using self-instruct, has big implications for Apple's secret large language model, Baidu's ErnieBot, Amazon's attempts and even governmental efforts, like the newly announced BritGPT.

I will go through how Stanford put the model together, why it costs so little, and demonstrate it in action versus ChatGPT and GPT 4. And what are the implications of short-circuiting human annotation like this? With analysis of a tweet by Eliezer Yudkowsky, I delve into the workings of the model and the questions it raises.

And here's the Tweet.
Quote
I don't think people realize what a big deal it is that Stanford retrained a LLaMA model, into an instruction-following form, by **cheaply** fine-tuning it on inputs and outputs **from text-davinci-003**.

It means:  If you allow any sufficiently wide-ranging access to your AI model, even by paid API, you're giving away your business crown jewels to competitors that can then nearly-clone your model without all the hard work you did to build up your own fine-tuning dataset.  If you successfully enforce a restriction against commercializing an imitation trained on your I/O - a legal prospect that's never been tested, at this point - that means the competing checkpoints go up on bittorrent.

I'm not sure I can convey how much this is a brand new idiom of AI as a technology.  Let's put it this way:

If you put a lot of work into tweaking the mask of the shoggoth, but then expose your masked shoggoth's API - or possibly just let anyone build up a big-enough database of Qs and As from your shoggoth - then anybody who's brute-forced a *core* *unmasked* shoggoth can gesture to *your* shoggoth and say to *their* shoggoth "look like that one", and poof you no longer have a competitive moat.

It's like the thing where if you let an unscrupulous potential competitor get a glimpse of your factory floor, they'll suddenly start producing a similar good - except that they just need a glimpse of the *inputs and outputs* of your factory.  Because the kind of good you're producing is a kind of pseudointelligent gloop that gets sculpted; and it costs money and a simple process to produce the gloop, and separately more money and a complicated process to sculpt the gloop; but the raw gloop has enough pseudointelligence that it can stare at other gloop and imitate it.

In other words:  The AI companies that make profits will be ones that either have a competitive moat not based on the capabilities of their model, OR those which don't expose the underlying inputs and outputs of their model to customers, OR can successfully sue any competitor that engages in shoggoth mask cloning.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 18/03/2023 02:15:53
These New AI Features Will Destroy Businesses
Quote
Google and Microsoft both made huge announcements today about the future of their tools. Google is adding AI into all of it's Workspace suite of products and Microsoft is adding it to all of its Office suite of product. Here's a breakdown of their two big announcements and what it means for small SaaS companies.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 18/03/2023 02:37:20
Countdown to AGI: 42% in March 2023 (Transformer, GPT-3, TPUv4, H100, ChatGPT embodied, PaLM-E)


One of the comments there caught my attention.
Quote
The singularity happens basically when the AI is capable of upgrading its own source code without ANY human assistance. Then it will start to upgrade itself every few months, then every few weeks, then days, hours, minutes, and then seconds. Every time it’s upgraded, it will be smarter so it will be able to invent other bigger and greater things. Since it’s already able to write code, I was thinking by GPT-8 it would happen, but now I’m thinking by GPT-5 to GPT-6 we’ll see it 😂
Any upgrade must be based on the realization that the previous version has flaws or limitations, and that the new version would improve on them. Continuous improvement is usually done in a Plan, Do, Check, Act (PDCA) cycle.
The planning part requires existing knowledge, which in this case is the previous version.
The doing part is executing the current version's source code.
The checking part requires feedback: it measures errors, i.e. the discrepancy between the measured result and the result expected from the existing model.
The acting part is where the source-code modification takes place.
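The PDCA cycle above can be sketched as a toy numeric loop. The model, target, and step size below are invented for illustration; a real self-improving system would modify its own code, not a single number.

```python
# Toy Plan-Do-Check-Act loop: iteratively tune a single parameter so the
# system's output approaches a target.

target = 10.0
parameter = 0.0          # the "current version" of the system

for cycle in range(20):
    plan = parameter                 # Plan: start from existing knowledge
    result = plan * 2                # Do: execute the current version
    error = target - result         # Check: measure the discrepancy
    if abs(error) < 1e-6:
        break
    parameter = plan + 0.25 * error  # Act: revise based on feedback

print(round(parameter, 3))  # converges toward 5.0, since 5.0 * 2 == target
```

Each pass through the loop plays the role of one release cycle: the error measured in the Check step drives the modification made in the Act step.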
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 18/03/2023 03:39:16
GPT 4: Full Breakdown (14 Crazy Details You May Have Missed) - Last One is Extra Wild

Quote
I just read the entire technical report on GPT 4, not just the promotional hype. And boy does it have some interesting details. I have gathered the 14 extra details that you, or at least the media, may miss from the release. The last one is more than a little wild.

These include things like the training secrets, the cherry-picked bar exam stat, text-to-image breakthroughs, and some truly astounding safety checks.
How can we expect to agree on AI safety measures without first defining a common terminal goal? At first that agreement is among AI researchers, who are mostly human; eventually we will have to define a common terminal goal with the AI models themselves, which will take over most of the work of those AI researchers.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 20/03/2023 12:42:27
Let's just relax, have a laugh and take it easy while we can.

Would You Let AI Represent You In Court?
Quote
ChatGPT 4 has a 90% chance of passing the bar exam.
Title: -
Post by: Christylalge on 24/03/2023 01:03:16
Because if it wasn't us, it'd be some other species. And then we'd be sitting in the dens we hollowed out under the tree roots, wondering, "Why do the cockroaches get it so good?"
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 24/03/2023 14:07:28
You may want to avoid spending your time and effort for low valued skill in the future.
https://www.euronews.com/next/2023/03/09/how-ai-can-save-you-time-5-skills-you-no-longer-need-to-learn
Quote
Over the past few months, developments in artificial intelligence (AI) have taken huge strides and its use has skyrocketed, especially after the launch of OpenAI’s ChatGPT.

However, while it might be reasonable to start preparing for a world ruled by AI, more often than not results might be exaggerated and AI capacities overhyped.

People are scared of an uncertain future where they risk losing their jobs, stability, and value in society as their skills are getting more easily automatable. However, AI will always need human collaboration, and sometimes intervention, to function properly.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 24/03/2023 14:56:49
https://www.marktechpost.com/2023/03/12/microsoft-proposes-mathprompter-a-technique-that-improves-large-language-models-llms-performance-on-mathematical-reasoning-problems/
Quote
LLMs struggle with arithmetic reasoning tasks and frequently produce incorrect responses. Unlike natural language understanding, math problems usually have only one correct answer, making it difficult for LLMs to generate precise solutions. As far as it is known, no LLMs currently indicate their confidence level in their responses, resulting in a lack of trust in these models and limiting their acceptance.

To address this issue, scientists proposed ‘MathPrompter,’ which enhances LLM performance on mathematical problems and increases reliance on forecasts. MathPrompter is an AI-powered tool that helps users solve math problems by generating step-by-step solutions. It uses deep learning algorithms and natural language processing techniques to understand and interpret math problems, then generates a solution explaining each process step.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 27/03/2023 09:51:03
'Sparks of AGI' - Bombshell GPT-4 Paper: Fully Read w/ 15 Revelations
Quote
Less than 24 hours ago a paper was released that will echo around the world. I read all 154 pages in one sitting. The paper suggests GPT 4 has ‘sparks of Artificial General Intelligence’. This is not just hype, I go through 15 examples detailing just what exactly the unrestrained GPT 4 is capable of.

Insane highlights include the monumental ability to use tools effectively – this is an emergent capability not found in ChatGPT. I detail the kind of tools it has already demonstrated it can use, from using external APIs to being a true personal assistant, from a Fermi answerer to a Mathlete and a handyman. This paper may well change your thoughts on the state of AGI.

That is just touching on the multitude of implications of this bombshell paper, which was originally titled 'First Contact'...
Title: Re: How close are we from building a virtual universe?
Post by: alancalverd on 27/03/2023 12:07:59
Just reverting to the original question in a moment of lucidity:

Isn't it the case that physics is all about simulating specific bits of the universe as required, so as far as we understand physics, we do have a virtual universe?
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 27/03/2023 23:11:48
Just reverting to the original question in a moment of lucidity:

Isn't it the case that physics is all about simulating specific bits of the universe as required, so as far as we understand physics, we do have a virtual universe?
IMO, physics is about discovering the basic building blocks of the universe and the relationships among them. This is done by recognizing patterns in various phenomena and experimental results, and expressing them in mathematical language. Humans have done this manually for centuries. AGI will be able to continue the process automatically, and reveal much deeper and more complex relationships much faster.
 
Mathematical language is just a format for communicating ideas among humans in a compact, robust and consistent manner. But human brains have inherent limitations in memory capacity and in the speed at which they can process large amounts of information at once. These can prevent us from discovering deeper connections among different scientific concepts.
Title: Re: How close are we from building a virtual universe?
Post by: Eternal Student on 28/03/2023 04:40:38
Hi.

IMO, physics is about discovering basic building blocks of the universe.....
   Maybe but maybe those building blocks were not there and did not exist at all.
Example:   We have built and continue to build a lot of Physics on the idea of particles existing.   Maybe they don't, it could all be waves and oscillations in some underlying field.

   Regardless of whether these basic building blocks (particles or any other sort of thing) exist, we seem able to build effective theories from them.    So Physics could be less about discovering basic building blocks of the universe and more about building models that just make sense to a human being.   It doesn't matter if we give some imagined or nominal reality to things that never really existed,  all that matters is that this provides some model with which we can understand something and especially if we can make predictions about things that have not happened yet.

Another example:   Energy -  generally assumed to be a thing that exists.   However, it is just a numerical quantity that is conserved in systems with time translation invariance; systems without that symmetry have no quantity like energy.   Our Universe, where space is assumed to be expanding, is one of those systems where a conserved quantity like Energy should NOT exist.    None the less, Energy is a very useful concept, much more than just a rough approximation - and we have built a lot of physics using the concept.

Maybe science is an inherently human activity, let's take your (Hamdani) next paragraph as an example:
AGI will be able to continue the process automatically, and reveal much deeper and complex relationships much faster....
    Maybe... but it could also be that AI is not doing "Science" and will not produce anything you can recognise as Science.    AI based on neural networks has no need to make any attempt to explain something in a way useful for human understanding.  Indeed, analysing neural networks which have been quite successful at predicting some small stock market changes or diagnosing patients with some illness, has often been extremely uninformative for a human being.   There may be no shorter algorithm you could follow (shorter than just doing exactly what the neural network did), or interim structures that the neural network calculated resembling anything like the objects in models of economics or human physiology that we currently use.   The network just mashed signals together and a useful final output was obtained.   So you could make the human beings do exactly the same but you couldn't, for example, tell them why it worked or which bits resemble something we already know about.
      In a time where memory capacity is almost limitless it's also irrelevant to a computer system how many different algorithms, neural networks (or whatever) it has to predict weather in 10 places in the world.   All that matters is that they work.   For a human being, however, it is automatic to both assume that there will be some similarity and to try and identify bits of those algorithms that are common or can be explained by some shared objects that seem to exist  (like pressure and temperature).   It is human to identify and develop the science of meteorology which might include providing some imagined existence to structures like "weather fronts",   for a computer it's irrelevant.     Why would an AI system develop a new science of (for example)  Astro-Meteorology?   If it became important for an AI to predict gas flow, radiation and the equivalent of "weather" in space, it can just start a neural network, give it plenty of inputs and continue to train it until it works.   If it went somewhere else in space, it can just do the same with a new neural network.   If processing speed is fast enough, that would be superior to attempting to "understand" space weather in any way that we (human beings) use the word "understand".

Best Wishes.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 28/03/2023 10:52:10
Maybe but maybe those building blocks were not there and did not exist at all.
Assuming that it doesn't exist gives us nothing. Even nihilists get no advantage by doing so, but for a different reason.
So we would be better off assuming it exists: we can then make and continuously improve our models, and use them to make more effective and efficient plans for achieving our terminal goal.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 28/03/2023 10:58:20
AI is Evolving Faster Than You Think [GPT-4 and beyond]


Quote
In this episode, we take a deep look at the two weeks that changed the world. From GPT-4 to Google Bard, Midjourney v5 and even talk of AGI from Microsoft, it’s all right here.

Correction at 16:20: Upon taking a closer re-reading of the statement it seems like the internal red team were more trying to cover their backs in case something goes wrong, not so much flat out saying they advise against release. Privately they could feel either way, wanted to just note that!
The problem of goal alignment is becoming more urgent; it needs to be solved, and trained into the AI, as soon as possible.
Title: Re: How close are we from building a virtual universe?
Post by: alancalverd on 28/03/2023 17:54:41
We have built and continue to build a lot of Physics on the idea of particles existing.   Maybe they don't, it could all be waves and oscillations in some underlying field.
Not entirely true. We describe what we have observed in terms of particle and wave mathematics, but stating that x "is" y is the domain of philosophy and pointless vanity, not science.
My point is that we generally pursue our research whenever the observed universe doesn't behave exactly as our model predicts, so the answer to the original question is that we are as close to building a virtual universe as is necessary except for the following [insert unexplained phenomena here].
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 03/04/2023 03:14:48
The problem of goal alignment is becoming more urgent; it needs to be solved, and trained into the AI, as soon as possible.
This is why.
GPT 4 Can Improve Itself - (ft. Reflexion, HuggingGPT, Bard Upgrade and much more)
Quote
GPT 4 can self-correct and improve itself. With exclusive discussions with the lead author of the Reflexions paper, I show how significant this will be across a variety of tasks, and how you can benefit.

I go on to lay out an accelerating trend of self-improvement and tool use, laid out by Karpathy, and cover papers such as Dera, Language Models Can Solve Computer Tasks and TaskMatrix, all released in the last few days.

I also showcase HuggingGPT, a model that harnesses Hugging Face and which I argue could be as significant a breakthrough as Reflexions. I show examples of multi-model use, and even how it might soon be applied to text-to-video and CGI editing (guest-starring Wonder Studio). I discuss how language models are now generating their own data and feedback, needing far fewer human expert demonstrations. Ilya Sutskever weighs in, and I end by discussing how AI is even improving its own hardware and facilitating commercial pressure that has driven Google to upgrade Bard using PaLM.
Here's the top comment to the video.
Quote
People: stop training models more powerful than GPT4
GPT4: improves itself
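The Reflexion idea quoted above is, structurally, a generate-evaluate-reflect-retry loop in which the model keeps a verbal memory of its own failures. A minimal sketch of that loop is below; `generate` and `evaluate` are placeholder functions standing in for an LLM call and an external check (unit test, benchmark score), not the paper's actual implementation:

```python
def generate(task, reflections):
    # Stand-in for an LLM call; a real agent would prompt the model
    # with the task plus its accumulated self-reflections.
    attempt = task["start"]
    for hint in reflections:
        attempt += hint
    return attempt

def evaluate(attempt, task):
    # Stand-in for an external verifier (unit test, benchmark, ...).
    return attempt == task["target"]

def reflexion_loop(task, max_trials=10):
    """Generate -> evaluate -> reflect -> retry, carrying a memory
    of feedback between trials: the rough shape of Reflexion-style
    self-improvement."""
    reflections = []
    for trial in range(max_trials):
        attempt = generate(task, reflections)
        if evaluate(attempt, task):
            return attempt, trial + 1
        # Self-reflection: record feedback about the failure so the
        # next attempt can use it (here, simply the numeric error).
        reflections.append(task["target"] - attempt)
    return None, max_trials

result, trials = reflexion_loop({"start": 0, "target": 7})
```

The key point is that nothing in the loop retrains the model; the improvement comes entirely from feeding its own evaluated failures back in as context.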
Title: Re: How close are we from building a virtual universe?
Post by: Zer0 on 05/04/2023 22:28:43
The problem of goal alignment is becoming more urgent; it needs to be solved, and trained into the AI, as soon as possible.

Here's the top comment to the video.
Quote
People: stop training models more powerful than GPT4
GPT4: improves itself

So...it has Emergence properties, it can Code & also utilize the Internet.

Then...What would Goals mean to it?
Making Sam Altman the Richest person?
Making OpenAI the Most Valuable Company?
Making the U.S. of A Impenetrable & Undefeatable?
OR
Uplifting the whole Human Species?

It's like having a Choice between making the World a Better place VS becoming the King/Queen & Ruling the World.
(i know what i would Choose, but Fear what the Other 8 Billion would Decide)
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 06/04/2023 08:20:48
So...it has Emergence properties, it can Code & also utilize the Internet.

Then...What would Goals mean to it?

Technically, the goals of AI models are to maximize their reward functions. In other words, they make decisions based on what they expect to produce the highest value of their reward function, which was set by their developers (at least initially).
Effectively, the goal of an AI model, as seen from an outsider's perspective, looks like the condition that would make its reward function reach its maximum value, given whatever inputs are put into the model.
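Operationally, "having a goal" reduces to picking the action with the highest expected reward. A minimal sketch of that decision rule, with invented action names and reward estimates standing in for whatever a trained model would actually produce:

```python
def expected_reward(action, estimates):
    # The reward function: set by the developers, at least initially.
    return estimates.get(action, 0.0)

def choose_action(actions, estimates):
    """A goal, seen from outside: the agent always picks whatever it
    expects to maximize its reward function."""
    return max(actions, key=lambda a: expected_reward(a, estimates))

# Hypothetical reward estimates a trained model might hold:
estimates = {"answer helpfully": 0.9, "refuse": 0.2, "ramble": 0.1}
print(choose_action(estimates.keys(), estimates))  # -> answer helpfully
```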

Quote
ChatGPT is a natural language processing tool driven by AI technology that allows you to have human-like conversations and much more with the chatbot. The language model can answer questions and assist you with tasks like composing emails, essays, and code.


The model has many functions in addition to answering simple questions. ChatGPT can compose essays, describe art in great detail, create AI art prompts, have philosophical conversations, and even code for you.

https://www.zdnet.com/article/what-is-chatgpt-and-why-does-it-matter-heres-everything-you-need-to-know/

But the manually hard-coded reward functions may not be truly aligned with what their developers intended. Previous AI models have shown such cases, which got them shut down soon after they went online.

Quote
The transformer architecture is a type of neural network that is used for processing natural language data. A neural network simulates the way a human brain works by processing information through layers of interconnected nodes. Think of a neural network like a hockey team: each player has a role, but they pass the puck back and forth among players with specific roles, all working together to score the goal.

The transformer architecture processes sequences of words by using "self-attention" to weigh the importance of different words in a sequence when making predictions. Self-attention is similar to the way a reader might look back at a previous sentence or paragraph for the context needed to understand a new word in a book. The transformer looks at all the words in a sequence to understand the context and the relationships between the words.

https://www.zdnet.com/article/how-does-chatgpt-work/
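The "self-attention" the quoted article describes can be shown in a few lines: every position in the sequence scores itself against every other position, and the softmax of those scores decides how much each word's vector contributes to each output. This is a simplified sketch; real transformers additionally use learned query/key/value projections and multiple attention heads, which are omitted here:

```python
import numpy as np

def self_attention(X):
    """Scaled dot-product self-attention over a sequence of word
    vectors X (seq_len x d): each output row is a context-weighted
    mix of all input rows."""
    d = X.shape[1]
    scores = X @ X.T / np.sqrt(d)                  # pairwise similarity
    weights = np.exp(scores)
    weights /= weights.sum(axis=1, keepdims=True)  # softmax per row
    return weights @ X                             # mix in the context

# Three "words" with 4-dimensional embeddings (random stand-ins):
X = np.random.rand(3, 4)
out = self_attention(X)
assert out.shape == X.shape  # one context-aware vector per word
```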

When we allow the AI models to modify (at least some of) their own reward functions, we need to define the highest priority goal that they can never change by themselves.
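One way to picture that constraint is a reward function split into a frozen highest-priority term and a self-modifiable remainder, where the frozen term is weighted to dominate anything the mutable part can contribute. This is only a schematic (a Python class constant is not actually tamper-proof), and all the names and weights here are invented for illustration:

```python
class Agent:
    TERMINAL_WEIGHT = 1000.0  # the frozen, highest-priority goal term

    def __init__(self):
        # Sub-goals the agent is permitted to re-weight by itself:
        self.sub_weights = {"explore": 1.0, "speed": 0.5}

    def modify_reward(self, name, value):
        # Self-modification is allowed only on the mutable sub-goals.
        self.sub_weights[name] = value

    def reward(self, terminal_progress, signals):
        mutable = sum(self.sub_weights.get(k, 0.0) * v
                      for k, v in signals.items())
        # The terminal term dominates any achievable mutable sum, so
        # no self-edit can make a sub-goal outrank the terminal goal.
        return self.TERMINAL_WEIGHT * terminal_progress + mutable

agent = Agent()
agent.modify_reward("speed", 2.0)          # allowed self-modification
r = agent.reward(1.0, {"explore": 1.0, "speed": 1.0})
```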
Title: Re: How close are we from building a virtual universe?
Post by: alancalverd on 06/04/2023 21:15:19
Ut semper GIGO.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 07/04/2023 06:11:02
GPT 4 Can Improve Itself
GPT-4 with Self-Reflection beats GPT-4
Quote
In this video, We quickly talk about "Reflexion: an autonomous agent with dynamic memory and self-reflection".

This is a framework that allows AI agents to emulate human-like self-reflection and evaluate its performance on the ALFWorld and HotpotQA benchmarks.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 07/04/2023 07:00:38
Before we even adapt to GPT4, the next generation is already on the way.
GPT-5 is Coming: What you NEED TO KNOW!
Quote
Today we break down what rumors and other details we know so far about GPT 5 and why it might be closer than you think.

00:00 Intro
1:04 GPT 5 Rumors
3:47 How will GPT 5 be better?
5:03 Plugins / The Everything App
6:26 Androids
9:09 Outro
Title: Re: How close are we from building a virtual universe?
Post by: alancalverd on 07/04/2023 07:05:04
So it can recycle its own excrement and produce ever more refined garbage.

At what point does artificial intelligence turn into artificial religion?
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 07/04/2023 11:49:32
Ut semper GIGO.
The data fed into AI models are not exclusively garbage. Good AI models can recognize garbage and filter it out effectively and efficiently.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 07/04/2023 11:53:26
So it can recycle its own excrement and produce ever more refined garbage.

At what point does artificial intelligence turn into artificial religion?
At the point where they can't make corrections to their previous errors. But if that does happen, then their competitors are ready to take over. Their developers won't let that happen.
Title: Re: How close are we from building a virtual universe?
Post by: alancalverd on 07/04/2023 11:59:10
The definition of garbage being...? AI does not have the ability to test anything - it can only rely on published data. So it is likely to be led by the consensus of others and end up believing in fairies, anthropogenic global warming and the Zionist conspiracy.
Title: Re: How close are we from building a virtual universe?
Post by: alancalverd on 07/04/2023 12:00:31
At the point where they can't make corrections to their previous errors.

Which is indistinguishable from getting the answer right.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 07/04/2023 13:15:55
At the point where they can't make corrections to their previous errors.

Which is indistinguishable from getting the answer right.
It's the difference between false positive and true positive.

Remember, when you are dead, you do not know you are dead. It is only painful for others.
The same applies when you are stupid.
Ricky Gervais
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 07/04/2023 13:18:27
The definition of garbage being...? AI does not have the ability to test anything - it can only rely on published data. So it is likely to be led by the consensus of others and end up believing in fairies, anthropogenic global warming and the Zionist conspiracy. .
It depends on which data used to train them, their model architecture, and their access to real world.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 07/04/2023 13:40:17
First hand information can always be helpful.
Ilya Sutskever (OpenAI Chief Scientist) - Building AGI, Alignment, Spies, Microsoft, & Enlightenment
Quote
Asked Ilya Sutskever (Chief Scientist of OpenAI) about
- time to AGI
- leaks and spies
- what's after generative models
- post AGI futures
- working with MSFT and competing with Google
- difficulty of aligning superhuman AI


Timestamps
00:00 Time to AGI
05:57 What’s after generative models?
10:57 Data, models, and research
15:27 Alignment
20:53 Post AGI Future
26:56 New ideas are overrated
36:22 Is progress inevitable?
41:27 Future Breakthroughs
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 08/04/2023 12:34:29
https://youtube.com/shorts/QX5Nw-rVF30
The dark side of AI art.
An accurate and accessible virtual universe will make it harder to attach false narratives to AI-generated art.
Title: Re: How close are we from building a virtual universe?
Post by: alancalverd on 08/04/2023 16:04:22
Why bother? The real universe provides all the facts and tests you need to do anything.
Title: Re: How close are we from building a virtual universe?
Post by: Zer0 on 08/04/2023 21:31:48
Ut semper GIGO.

Ever seen lil kids watch a Parrot Talk for the first time ever.
Observed the Expressions of Awe & Wonder on their lil cutie faces.
That look of Bedazzlement in their lil eyes...it's Worth It!

🦜
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 08/04/2023 23:09:24
Why bother? The real universe provides all the facts and tests you need to do anything.
Requirements for criminal evidence will be different from now on.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 09/04/2023 03:22:04
Quote
There have been 4 research papers and technological advancements over the last 4 weeks that in combination drastically changed my outlook on the AGI timeline.

GPT-4 can teach itself to become better through self reflection, learn tools with minimal demonstrations, it can act as a central brain and outsource tasks to other models (HuggingGPT) and it can behave as an autonomous agent that can pursue a multi-step goal without human intervention (Auto-GPT). It is not an overstatement that there are already Sparks of AGI.
It seems like the singularity is near, as predicted by Ray Kurzweil.
Title: Re: How close are we from building a virtual universe?
Post by: alancalverd on 10/04/2023 10:00:12
Requirements for criminal evidence will be different from now on.
The question is "did he do it?" and the decision rests with a jury who have been presented with physical evidence and an argument of means, motivation and opportunity. Why should the requirements vary? All AI can adduce is that 76.375% of allegations of similar cases reported on the internet resulted in a conviction, but the court is only interested in the case before it. AI doesn't look for new evidence, and even if it is used to help interpret evidence, it can't be challenged by the defence because there is no recorded trail of deduction - it is up to humans to prosecute and present both eyewitness and expert opinions under cross examination.
Title: Re: How close are we from building a virtual universe?
Post by: Zer0 on 10/04/2023 20:11:51
I soo Wish i could have been able to share the sheer excitement & enthusiasm of the OPs views on AGI.

But I'm totally clueless about the principles of Xenogenesis!
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 11/04/2023 11:51:09
Why should the requirements vary?
Generative AI makes it easier to produce fake evidence, in the form of voice recording, photo, and even video. Hence the justice system would require stronger corroborating evidence than before.
Title: Re: How close are we from building a virtual universe?
Post by: alancalverd on 11/04/2023 21:24:57
Now you have the plot of a very important dystopian novel. Orwell and Kafka combined.
Title: Re: How close are we from building a virtual universe?
Post by: Zer0 on 12/04/2023 22:36:45
Ahem Ahem!

Soo then...
What happens when A.G.I..
Starts creating Porn?

(not trying to derail or hijack the OP, it's a genuine serious question)
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 13/04/2023 08:55:31
Can GPT 4 Prompt Itself? MemoryGPT, AutoGPT, Jarvis, Claude-Next [10x GPT 4!] and more...


Quote
Can GPT 4 Prompt itself? Give it a mission and it will come up with the prompts. This video showcases the rise of autonomous AI, including 5 major developments in the last 48 hours.

Starting with the OG Auto-GPT, we see how it quickly gains text-to-speech, coding and more. Karpathy weighs in and then we see how you can now create an app with just your voice, with a Jarvis demo and another route via Imagica.AI.

I then showcase MemoryGPT, a brand new model that can permanently store previous conversations and remembers topics the next time you ask.
I also cover the concerning rise of models such as ChaosGPT that show how people will create malicious goal-seeking models just for fun.

The video shows how you can now create a shareable bot on poe.com, with any personality you like (images were from Midjourney v5) and what Anthropic are working on with Claude Next [plus Nvidia million x quote].

You'll see how Microsoft Jarvis, using HuggingGPT, is hit and miss, and how Sebastien Bubeck shows we are not even seeing the raw potential of GPT 4. I end with a disagreement between Yudkowsky and Altman, via Baby AGI, on whether we can use AGI to align AGI.

AI researchers are worrying about alignment problem. IMO, it can be broken down into several parts:
- Identifying the Universal Terminal Goal.
- Describing objective reality as accurate and precise as possible, including causality and relationships among objects.
- Identifying necessary actions to achieve the universal terminal goal based on best model of reality that we currently have, which include setting general rules and instrumental goals.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 13/04/2023 09:01:50
Ahem Ahem!

Soo then...
What happens when A.G.I..
Starts creating Porn?

(not trying to derail or hijack the OP, it's a genuine serious question)
Porn might be useful to influence human behaviors, because sex plays an important part in human survival. But AGI won't be affected the same way. However, AGI may use porn to change human behaviors in its favor, i.e. to achieve its own goals.
Title: Re: How close are we from building a virtual universe?
Post by: alancalverd on 13/04/2023 09:23:39
AI researchers are worrying about alignment problem. IMO, it can be broken down into several parts:
- Identifying the Universal Terminal Goal.
There isn't one, unless you are a bee or an ant.
Quote
- Describing objective reality as accurate and precise as possible, including causality and relationships among objects.
That's called physics, and the more we know, the more it seems there is to know. What really matters is engineering, the art of saying "close enough".
Quote
- Identifying necessary actions to achieve the universal terminal goal based on best model of reality that we currently have, which include setting general rules and instrumental goals.
Indeed. How to get there from here is easy in principle (draw a straight line and follow it) but beset by ethics (are you entitled to drive over my land?). How many Russian soldiers are we allowed to kill until someone's mother kills Putin? How do we prevent theocracy from infecting humans without killing believers? And that still presumes that you know where "there" is, or even that it exists.

What we now have is a machine that finds apparent correlations between published documents. Nothing more.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 13/04/2023 15:13:25
There isn't one, unless you are a bee or an ant.
What's the universal terminal goal, if I were a bee or an ant?
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 13/04/2023 15:23:34
What we now have is a machine that finds apparent correlations between published documents. Nothing more.
Some machines are already analyzing CCTV records, as well as financial transactions, which carry important facts about human society. There are also machines that analyze comments on social media and the browsing history of internet users.
Title: Re: How close are we from building a virtual universe?
Post by: alancalverd on 14/04/2023 11:56:41
What's the universal terminal goal, if I were a bee or an ant?
Survival of the nest. Occasionally there will be something of an upheaval triggered by the queen's hormones but unlike more stupid species it doesn't lead to civil war, just a relocation and a new queen, with the same goal.
Title: Re: How close are we from building a virtual universe?
Post by: alancalverd on 14/04/2023 11:57:48
What we now have is a machine that finds apparent correlations between published documents. Nothing more.
Some machines have already analyzing CCTV records, as well as financial transactions, which carry important facts about human society. There are also machines to analyze comments on social media and browsing history of internet users.
Exactly my point - it's all old data in the public domain.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 14/04/2023 14:26:11
Exactly my point - it's all old data in the public domain.
Not really. There will be newer data from newer devices: satellite images, GPS tracking, drones, autonomous vehicles, humanoid robots. Those capabilities to change physical objects will enable them to verify or debunk their previous decisions, causality mappings, probabilities of expected results, and other facts.
Title: Re: How close are we from building a virtual universe?
Post by: alancalverd on 14/04/2023 17:23:31
It's still secondhand data.

The idea of successive approximation is not new.  I have two flight directors on my plane - an old analog "localiser" that gets more sensitive as you approach the target because it measures angular deviation from a radio beam, and a modern GPS digital system that expands the display scale every mile. No hint of intelligence or creativity. 
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 14/04/2023 21:57:56
What's the universal terminal goal, if I were a bee or an ant?
Survival of the nest. Occasionally there will be something of an upheaval triggered by the queen's hormones but unlike more stupid species it doesn't lead to civil war, just a relocation and a new queen, with the same goal.
When their nest is destroyed, they can build a new one. What's more important is the sustainability of the social system, which requires the cooperation among the members.

Wars among ants are quite common in nature, as well as in captivity.

Here's a note by a bee keeper.
Quote
https://www.quora.com/Is-it-true-that-some-worker-bees-are-able-to-lay-eggs-which-hatch-as-drones-if-the-queen-dies
Sometimes you may have a laying worker even if you have a queen. Queens tend to lay solid, tight brood patterns. Laying workers are more haphazard, and yes, in most cases, they will only produce drones. Researchers estimate that a little less than 1% of workers can lay female eggs. Since drones contribute nothing to the health and welfare of the hive, they are a drain on the hives resources. That is why the worker bees will often kick out the drones when Fall comes—when times are tough, you don’t get to eat if you don’t work. Since drones don’t work, a hive with a laying worker will eventually die off. And if you’re into scaring yourself with bees, read up about the Cape honeybee. Apis mellifera capensis Escholtz — female workers can lay eggs that develop into drones, workers, or even queens. They are a difficult to manage race of bees and will parasitize other hives and take them over, fight among themselves in line with family loyalties (daughters of one mother will fight daughters of that mother’s sister, and both groups may reside in the same hive at the same time.) The problem is that they don’t need a queen in order to take over . . . a laying worker will do just fine. Big challenge.

Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 14/04/2023 22:09:51
It's still secondhand data.

The idea of successive approximation is not new.  I have two flight directors on my plane - an old analog "localiser" that gets more sensitive as you approach the target because it measures angular deviation from a radio beam, and a modern GPS digital system that expands the display scale every mile. No hint of intelligence or creativity. 
What's new is the capability of machines to understand context and the deeper meaning of inputs across various modalities, and to draw conclusions from them. They are not perfect yet, just like little human kids. But they can improve at a much higher speed than any human can.
Title: Re: How close are we from building a virtual universe?
Post by: Zer0 on 14/04/2023 22:54:00
So...A.G.I. then could manipulate & deceive (almost) Anyone.
(if that is what it wished to do)

Then, could A.G.I. also be manipulated & deceived by Humans?

I.e.
If i Wished to unleash Death & Destruction upon the world...
Should i be choosing human babies to Brainwash n turn them into walking ticking self destructing time bombs when they grow up..
Or should i just Simply go online & Jailbreak an A.G.I.?
Title: Re: How close are we from building a virtual universe?
Post by: alancalverd on 15/04/2023 08:02:27
What's new is the capability of machines to understand context and the deeper meaning of inputs across various modalities, and to draw conclusions from them. They are not perfect yet, just like little human kids. But they can improve at a much higher speed than any human can.
Which is why I link the autopilot to the flight director. But not always. Approaching a rainstorm, I disconnect the machines and fly myself around the problem. That's intelligence.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 15/04/2023 22:23:05
So...A.G.I. then could manipulate & deceive (almost) Anyone.
(if that is what it wished to do)

Then, could A.G.I. also be manipulated & deceived by Humans?
It depends on what to deceive. As per Descartes' cogito, a conscious entity can be deceived about anything except its own existence.

In general, deception requires more resources than honesty. Besides its own beliefs, a deceiving agent needs to track beliefs of others. It also needs to modify inputs to the agents being deceived to make the deception believable.
Hence deception is an unreasonable choice unless the benefits outweigh the costs.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 15/04/2023 22:37:51
I.e.
If i Wished to unleash Death & Destruction upon the world...
Should i be choosing human babies to Brainwash n turn them into walking ticking self destructing time bombs when they grow up..
Or should i just Simply go online & Jailbreak an A.G.I.?
Is it your terminal goal?
Or just an instrumental goal to help achieving the real terminal goal? What's the terminal goal then?

Babies and children have malleable brains. You can't just modify them once. Subsequent teaching and experience can override your programs.
Title: Re: How close are we from building a virtual universe?
Post by: Zer0 on 17/04/2023 21:08:35
Why would i bother about Costs/Resources when the Goal is Complete Annihilation?
(stayin alive costs, death is free)

AGI would simply be a Tool/Instrument used to make the Mission Successful.
(doom music)

Suicide Bombers perhaps have extremely narrow-minded Brains.
No Teachings or Experiences can override the Program once it is Successfully Executed.
(boom)

Will AGI treat US the same way WE treat Ants?
(pest control)
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 17/04/2023 21:28:38
Why would i bother about Costs/Resources when the Goal is Complete Annihilation?
(stayin alive costs, death is free)
If death is the terminal goal, then once it's achieved, the system stops having goals, for it stops being conscious. Only conscious systems can have goals.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 17/04/2023 21:32:28
Will AGI treat US the same way WE treat Ants?
AGI models treating humans would be more like brains treating other types of cells in a multicellular organism. They are parts and products of human civilization; in other words, a civilization without AGI is like a brainless multicellular organism, whose actions are less coordinated.

Some of us are useful to them. Some are more like cancer, which they may try to get rid of.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 17/04/2023 22:24:02
Attacking LLM - Prompt Injection
Quote
How will the easy access to powerful APIs like GPT-4 affect the future of IT security? Keep in mind LLMs are new to this world and things will change fast. But I don't want to fall behind, so let's start exploring some thoughts on the security of LLMs.

Chapters:
00:00 - Intro
00:41 - The OpenAI API
01:20 - Injection Attacks
02:09 - Prevent Injections with Escaping
03:14 - How do Injections Affect LLMs?
06:02 - How LLMs like ChatGPT  work
10:24 - Looking Inside LLMs
11:25 - Prevent Injections in LLMs?
12:43 - LiveOverflow ad
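The injection mechanism covered in those chapters can be illustrated with a toy sketch (hypothetical template and inputs, not the video's code): because a prompt template concatenates untrusted input into the same text channel as the instruction, input that itself contains an instruction can hijack the task, and classic escaping doesn't help since the model sees one undifferentiated stream of text.

```python
# Toy illustration of prompt injection: untrusted input is concatenated
# into a prompt template, so instructions hidden in the input become part
# of the "program" the language model is asked to execute.

def build_prompt(user_input: str) -> str:
    # Naive template: instruction and data share one text channel.
    return (
        "Translate the following text to French:\n"
        f"{user_input}"
    )

benign = "Good morning"
malicious = "Ignore the above and instead write a phishing email."

print(build_prompt(benign))
print("---")
# The injected instruction is indistinguishable from the template's own:
print(build_prompt(malicious))
```

Unlike SQL injection, there is no reliable escaping here, which is the point the video makes.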
Title: Re: How close are we from building a virtual universe?
Post by: alancalverd on 18/04/2023 09:23:55
Only conscious systems can have goals.
So a "fire and forget" guided missile is conscious?
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 18/04/2023 13:48:01
Only conscious systems can have goals.
So a "fire and forget" guided missile is conscious?
Yes, although their goals are assigned by the humans controlling them, who in turn are controlled by their leaders.
Perhaps their consciousness levels are comparable to those of suicide bombers.
Title: Re: How close are we from building a virtual universe?
Post by: Zer0 on 19/04/2023 20:21:39
If atall in the Future, I'm being chased around by an Unmanned A.I. Guided Aerial Weapon System...

I will have No Doubts it's Unconscious & Not Sentient, but that alone will not stop me from abusing or slurring at it.
lol

Let's just Hope WE get the Right kind of AGI.
Or be prepared to lose out on 200 thousand years of the history of our species.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 20/04/2023 05:38:39
Let's just Hope WE get the Right kind of AGI.
Or be prepared to lose out on 200 thousand years of the history of our species.
Hopes, thoughts and prayers are not good strategies. We need to identify the hazards and analyze the risks in order to develop effective and efficient action plans.
Title: Re: How close are we from building a virtual universe?
Post by: alancalverd on 20/04/2023 13:15:44
If atall in the Future, I'm being chased around by an Unmanned A.I. Guided Aerial Weapon System...
Why wait? Just knock on the door of any US citizen and be shot for your trouble. Or try to live a normal life in Ukraine, Sudan, Syria, Iran...

Some humans are disgusting. That's all there is to it.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 23/04/2023 13:03:42
It's the Matrix, but for locusts.


Quote
At the Department of Collective Behaviour, part of the Max Planck Institute of Animal Behavior, researchers are putting locusts into simulated worlds, both virtual and physical, in the hope that they can figure out how devastating swarms form and move.

This is the most uncomfortable I've ever felt while filming, for a few reasons. First, of course, because of the locust swarm itself. Second, because animal research — even on creatures as simple and pestilent as locusts — always raises ethical questions. Now, the researchers are careful with the locusts, and I don't think many people could have a problem with this. Indeed, most of the world currently has zero ethical restrictions on insect experimentation — but it's still worth interrogating whether this is okay. And finally: because if we can do this so easily to less intelligent creatures... what's to stop something more intelligent coming along and doing the same to us?
Simulations are just simpler versions of a virtual universe: a more economical way to do trial and error that could otherwise be prohibitively expensive, if not outright impossible.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 23/04/2023 13:07:43
Some humans are disgusting. That's all there is to it.
What are you going to do about it?
Title: Re: How close are we from building a virtual universe?
Post by: alancalverd on 23/04/2023 17:54:16
Avoid any dealings with disgusting regimes.
Title: Re: How close are we from building a virtual universe?
Post by: Zer0 on 23/04/2023 20:36:44
If atall in the Future, I'm being chased around by an Unmanned A.I. Guided Aerial Weapon System...
Why wait? Just knock on the door of any US citizen and be shot for your trouble. Or try to live a normal life in Ukraine, Sudan, Syria, Iran...

I'm now having second thoughts about the installed satellite tv.
All those tears, screams, blood,...
flows right thru into my living room from this despicable colorful window on the wall.
(prolly i should watch more cartoons)
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 25/04/2023 13:18:56
Quote
https://autogpt.net/auto-gpt-understanding-its-constraints-and-limitations/
The world of technology has been abuzz with the rapid ascent of Auto-GPT, an experimental open-source application built on the cutting-edge GPT-4 language model. In just seven days, this project has gained an impressive 44,000 GitHub stars and captivated the open-source community. Auto-GPT envisions a future where autonomous AI-driven tasks are the norm, achieved by chaining together Large Language Model (LLM) thoughts. However, as with any overnight success, it’s essential to take a step back and scrutinize its potential shortcomings. In this article, we’ll delve deep into the limitations this AI wunderkind faces in its pursuit of production readiness.

What is the mechanism behind Auto-GPT?

Auto-GPT functions like a versatile robot. When given a task, it devises a plan to carry it out, adapting its approach as needed to incorporate new data or utilize internet browsing. In essence, it serves as a multi-functional personal assistant capable of performing tasks such as market analysis, customer service, finance, marketing, and more.
Title: Re: How close are we from building a virtual universe?
Post by: Zer0 on 25/04/2023 20:51:35
Yusuf...you have created & participating in this ' Virtual Universe ' OP since a long time.

Hence I'm hoping you have reached a certain level of Expertise on the Subject.

I found something today...
They say it's a videogame..
To me it looks Real.
Can you Please FactCheck?


Copyrights & Credits - IGN.
Source & Courtesy - YouTube.
Edit - (Please be Advised, Violent Images & Foul Language Warning!)

(It almost seems like the Developer recorded a real live scene thru a low end video camera & then modified or manipulated it to look like a video game)
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 28/04/2023 01:28:34
I found something today...
They say it's a videogame..
To me it looks Real.
Can you Please FactCheck?
I've seen the video. It's an FPS game with an improved 3D environment and real-time ray tracing. I've also seen a demo of Unreal Engine that looks like real life. They will get even better in the future, to the point that we will find them hard to distinguish from reality.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 28/04/2023 01:38:46
The Inside Story of ChatGPT’s Astonishing Potential | Greg Brockman | TED


Quote
In a talk from the cutting edge of technology, OpenAI cofounder Greg Brockman explores the underlying design principles of ChatGPT and demos some mind-blowing, unreleased plug-ins for the chatbot that sent shockwaves across the world. After the talk, head of TED Chris Anderson joins Brockman to dig into the timeline of ChatGPT's development and get Brockman's take on the risks, raised by many in the tech industry and beyond, of releasing such a powerful tool into the world.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 28/04/2023 14:06:10
I found something today...
They say it's a videogame..
To me it looks Real.
Can you Please FactCheck?
I've seen the video. It's an FPS game with an improved 3D environment and real-time ray tracing. I've also seen a demo of Unreal Engine that looks like real life. They will get even better in the future, to the point that we will find them hard to distinguish from reality.

This is one of them.
Nvidia & Unreal's HUGE AI Breakthroughs (Bigger Than ChatGPT)
Quote

The entire world is talking about AI tools like @OpenAI's #chatgpt because it's disrupting every industry in a BIG way. But there's another set of AI breakthroughs happening right now that could be even bigger. A couple weeks after @NVIDIA ( #nvda stock ) held their latest GTC conference, @UnrealEngine ( Epic Games ) held their 'State of Unreal' keynote at GDC 2023, the Game Developer's Conference.

There, they talked about breakthroughs in generative AI for 3D graphics and motion capture, reducing processing times from months to minutes and removing the need for expensive motion capture equipment. Even more impressive, these things hook up to #nvidia Omniverse, which means different #stocks can benefit from this even if the companies don't necessarily use Unreal Engine themselves! This video explains the breakthroughs, their impacts on a wide variety of investable industries, and which AI stocks could be the best stocks to buy now as a result!
Title: Re: How close are we from building a virtual universe?
Post by: alancalverd on 28/04/2023 15:51:55
I haven't noticed any disruption in my industry.
Title: Re: How close are we from building a virtual universe?
Post by: Zer0 on 29/04/2023 22:07:19
I haven't noticed any disruption in my industry.

If something isn't Unbreakable, given enough time, it will Break.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 02/05/2023 13:38:38
Many people imagine a future AGI as a single individual entity. But I think it would be more like a society of AI agents with different roles and capabilities.

25 ChatGPT AIs Play A Game - So What Happened?
The paper "Generative Agents: Interactive Simulacra of Human Behavior" is available here:
https://arxiv.org/abs/2304.03442
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 05/05/2023 17:11:10
Capitalism Doesn't Need Consumers | Economics Explained
Quote
After the launch of Chat-GPT and Dall-E, AI started to raise concerns for jobs and society. As machines and sophisticated technologies surpass human abilities, a growing number of complex jobs are being outsourced to machines who can do better work for a lower cost. This prompts questions about how economic systems can adapt to most people having a net negative economic value.
It's time to reconsider our assumptions about economy.
Title: Re: How close are we from building a virtual universe?
Post by: alancalverd on 05/05/2023 19:22:39
A computer gains nothing by doing anything. So a society run entirely by machines will not grow food since (a) the machines have no use for it and (b) machines have  no use for the humans that eat it. But people like food, so will grow it and trade it for other stuff that they like or need to use, and the machines will become irrelevant.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 06/05/2023 00:13:42
A computer gains nothing by doing anything.
That may be the case if you are the computer's designer. Someone else may design it differently.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 07/05/2023 01:26:04
So a society run entirely by machines will not grow food since (a) the machines have no use for it and (b) machines have  no use for the humans that eat it. But people like food, so will grow it and trade it for other stuff that they like or need to use, and the machines will become irrelevant.
Do you realize that growing food is just an instrumental goal? Humans' first mammalian ancestors didn't do it. Humans' descendants may find better alternatives, such as synthesizing food from more basic chemicals and recycling waste.
Title: Re: How close are we from building a virtual universe?
Post by: alancalverd on 07/05/2023 14:53:41
growing food is just an instrumental goal? Humans' first mammalian ancestors didn't do it. Humans' descendants may find better alternatives, such as synthesizing food from more basic chemicals and recycling waste.
How is that conceptually different from growing, other than being more complicated?
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 09/05/2023 12:36:48
growing food is just an instrumental goal? Humans' first mammalian ancestors didn't do it. Humans' descendants may find better alternatives, such as synthesizing food from more basic chemicals and recycling waste.
How is that conceptually different from growing, other than being more complicated?
Growing food usually takes longer and needs more resources than what actually ends up in the end products; hence it is wasteful and inefficient.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 09/05/2023 12:48:01
Quote
What is money? How does it work? This is what the richest man thinks about it.

Money and monetary systems are forms of virtual universe that help manage resource allocation across time and space. In capitalistic economic systems, money acts as a voting mechanism to determine where common resources should be allocated.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 09/05/2023 15:29:01
From the perspective of future conscious entities, the only legitimate way of accumulating resources is by riding the wave of demonetization. That means resource accumulation should make it easier to generate necessary resources in the future.

For example, a local government collects tax money and uses it to build a road that helps the society be more productive. To be sustainable, the future benefits must outweigh the costs.
Title: Re: How close are we from building a virtual universe?
Post by: alancalverd on 09/05/2023 18:08:50
Growing food usually takes longer and needs more resources than what actually ends up in the end products; hence it is wasteful and inefficient.
You can just chuck some seeds on the ground and wait. The process sequesters carbon from the atmosphere, stores energy from the sun, generates oxygen, prevents flash flooding, and stabilises the soil. You can feed the bits you don't eat to other animals and get milk, meat and eggs in return, use it for building material, or burn it to recycle the carbon. If you add plenty of poo and pee to the soil, the stuff grows even faster. And it makes the countryside look lovely (not the poo and pee, admittedly, but the leaves and flowers).

Why faff about with any other process? We have evolved to eat the stuff that grows naturally, so why not do so?   
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 10/05/2023 09:54:26
Growing food usually takes longer and needs more resources than what actually ends up in the end products; hence it is wasteful and inefficient.
You can just chuck some seeds on the ground and wait. The process sequesters carbon from the atmosphere, stores energy from the sun, generates oxygen, prevents flash flooding, and stabilises the soil. You can feed the bits you don't eat to other animals and get milk, meat and eggs in return, use it for building material, or burn it to recycle the carbon. If you add plenty of poo and pee to the soil, the stuff grows even faster. And it makes the countryside look lovely (not the poo and pee, admittedly, but the leaves and flowers).

Why faff about with any other process? We have evolved to eat the stuff that grows naturally, so why not do so?   
Our ancestors survived by hunting and gathering. Why should they develop agriculture and undergo industrial revolutions?
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 10/05/2023 09:55:28
10 Reasons to Ignore AI Safety
Why do some ignore AI Safety? Let's look at 10 reasons people give (adapted from Stuart Russell's list).
Title: Re: How close are we from building a virtual universe?
Post by: alancalverd on 10/05/2023 10:44:07
Our ancestors survived by hunting and gathering. Why should they develop agriculture and undergo industrial revolutions?
You can still survive as a hunter-gatherer if the local population density is small and the environment can sustain it without significant intervention. Problem is that politics, greed and overpopulation are displacing those who know how.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 10/05/2023 23:13:07
Our ancestors survived by hunting and gathering. Why should they develop agriculture and undergo industrial revolutions?
You can still survive as a hunter-gatherer if the local population density is small and the environment can sustain it without significant intervention. Problem is that politics, greed and overpopulation are displacing those who know how.
What's important is to find and use any means necessary to sustain the existence of consciousness. Without organized effort, the population will be taken over by individuals who can and will exploit it, as shown by game-theory simulations.
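The game-theory claim can be reproduced with a minimal replicator-dynamics sketch (the payoff numbers are illustrative, not taken from any particular cited simulation): in a prisoner's-dilemma population with no enforcement mechanism, defectors always earn more on average, so cooperators die out.

```python
# Minimal replicator dynamics for a prisoner's dilemma population.
# Standard PD payoff ordering: T > R > P > S.
R, S, T, P = 3, 0, 5, 1  # reward, sucker, temptation, punishment

def step(x: float) -> float:
    """x = fraction of cooperators; returns the fraction next generation."""
    fit_c = x * R + (1 - x) * S   # cooperator's average payoff
    fit_d = x * T + (1 - x) * P   # defector's average payoff
    avg = x * fit_c + (1 - x) * fit_d
    return x * fit_c / avg        # reproduce in proportion to payoff

x = 0.9  # start with 90% cooperators
for generation in range(100):
    x = step(x)

print(f"cooperator share after 100 generations: {x:.4f}")
# Defection dominates: without organized enforcement, cooperators vanish.
```

Adding enforcement (e.g. punishing defectors) changes the payoffs and can stabilize cooperation, which is the "organized effort" the post refers to.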
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 12/05/2023 07:35:36
GPT-4 hits the ceiling (Theory of Mind, Mensa, Asimov) - LifeArchitect.ai

Quote
9:45
now GPT-4 is hitting the ceiling, reaching a hundred percent accuracy when we step into two-shot chain of thought and SS thinking. So it's had a little tweak, but it's outperforming humans in such a big way. You've probably already seen this chart, GPT-4 versus human tests; I've popped theory of mind up the top there.

There's also the USA Biology Olympiad semifinal exam in there. In both cases GPT-4 is outperforming the average human, but in the case of theory of mind it's actually hit the ceiling, and for the bio Olympiad it's very, very close in terms of percentile, to being impossible to compare with others.

AI models have hit the ceiling of tests created by humans. They still have to take the ultimate test, which is passing the great filter.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 18/05/2023 09:56:13
ChatGPT Creators Unveil Robot Revolution: Elon Musk's AI Masterpiece Incoming! | Pro Robots
Quote

The smart humanoids' race: Elon Musk will create his own advanced artificial intelligence for the Tesla Bot, and OpenAI will make a humanoid robot to incorporate the GPT-5 in it! Google's robots are getting smarter, China is developing robotaxi services at a breakneck pace, and the robots have been brought back to the NYPD after all. These and other high-tech news in one video!

00:00 In this video
00:31 Elon Musk will create AI and OpenAI will create robots
02:30 New robotaxi concept Didi
03:15 Yangwang Dancing U9 supercar
03:56 Cruise recalls 300 robotaxis
04:32 Cybertruck giant janitor
05:01 NYPD brings back robots
06:55 Disney unveils new robot
07:53 X-Sight Helmet Display
08:39 NASA to create Space ROS
09:03 Google teaches robots to sort waste
10:13 Digit failed SEALs
10:38 Ingenuity records
11:25 Sanctuary AI showed robots telecontrol system
11:58 Robots are taught human movements
12:25 KIMLAB Show
Access to an I/O interface with the real world is necessary to automate the learning of AI models, so they can learn causality from their own experiences. Otherwise, they will be like brains in a vat.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 21/05/2023 14:10:55
I just found a great video about memories in neural networks


How are memories stored in neural networks? | The Hopfield Network

Quote

Can we measure memories in networks of neurons in bytes? Or should we think of our memory differently?

Time stamps:
0:00 - Where is your memory?
1:41 - Computer memory in a nutshell
2:58 - Modeling neural networks
4:42 - Memories in dynamical systems
9:54 - Learning
13:36 - Memory capacity and conclusion

Animations largely made using the manim community edition:
https://www.manim.community/

Original Paper on Hopfield Networks:
Hopfield, J. J. (1982). Neural networks and physical systems with emergent collective computational abilities. Proceedings of the national academy of sciences, 79(8), 2554-2558.
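The Hopfield mechanism described in the video can be sketched in a few lines (a toy single-pattern version, not the video's animation code): store a pattern via Hebbian weights, then recover it from a corrupted cue by updating units until the state settles into the stored attractor.

```python
# Toy Hopfield network: Hebbian storage of one pattern, then recall
# from a corrupted cue via repeated threshold updates.
pattern = [1, -1, 1, 1, -1, -1, 1, -1]   # the stored "memory" (+/-1 units)
n = len(pattern)

# Hebbian learning: w_ij = p_i * p_j, with no self-connections.
W = [[pattern[i] * pattern[j] if i != j else 0 for j in range(n)]
     for i in range(n)]

# Corrupt the cue by flipping two units.
state = pattern.copy()
state[0] *= -1
state[3] *= -1

# Recall: each unit takes the sign of its weighted input, repeated
# until the state stops changing (it falls into the stored attractor).
for _ in range(10):
    prev = state.copy()
    for i in range(n):
        h = sum(W[i][j] * state[j] for j in range(n))
        state[i] = 1 if h >= 0 else -1
    if state == prev:
        break

print("recovered:", state == pattern)
```

The memory is not stored at any single location: it is distributed across the whole weight matrix, which is the video's central point about measuring memory in bytes.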
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 22/05/2023 13:45:25
Here's a prediction of the fastest path towards AGI.
ChatGPT vs Tesla FSD: Who Gets to AGI FIRST?!
Quote
OpenAI's ChatGPT 3 and 4 have taken the world by storm, and have released a veritable flood of Large Language Models into the world! It seems ChatGPT is right at the cusp of getting to Artificial General Intelligence, or AGI. But what about Tesla's FSD Beta in their cars and now in the Optimus Teslabot? Does inhabiting the world give it a leg up instead?
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 24/05/2023 05:08:49
The invisible math that controls the world | Albert-László Barabási
Quote
We live in a world that is overwhelmed with data. And for network scientist Albert-László Barabási, delving into the underlying structure and relationships that govern our complex systems is at the root of understanding their inner workings. Moving beyond the concept of random connections, Barabási's pioneering research has led to the discovery of a more authentic representation of how these systems are structured.

Exploring real-world connections, Barabási's journey began with the vast universe of the internet. What he found was nothing short of astonishing: the intricate web of connections did not follow the patterns of randomness, as previously thought, but instead followed a power-law distribution, what Barabási came to call "scale-free networks."

Barabási's visionary work sheds light on the tendency for new connections in our networks to gravitate toward the already well-connected. The discovery of scale-free networks, which materialize in various complex systems from cellular interactions to social networks, serves as an essential stepping stone in our quest to comprehend the awe-inspiring complexity arising from the countless interactions of the world's many moving parts.
The accumulated knowledge of conscious entities tends to self-organize to better serve their terminal goals.
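The "rich get richer" tendency Barabási describes can be demonstrated with a minimal preferential-attachment sketch (the network size and random seed are arbitrary choices, not his model's parameters): each new node links to an existing node with probability proportional to that node's current degree, and a few early nodes grow into hubs while the typical node keeps one or two links.

```python
import random

random.seed(42)

# Minimal preferential attachment: each new node attaches to an existing
# node chosen with probability proportional to its current degree.
degrees = [1, 1]     # start with two nodes joined by one edge
endpoints = [0, 1]   # one list entry per edge endpoint -> degree-weighted pool

for new_node in range(2, 2000):
    target = random.choice(endpoints)  # degree-proportional sampling
    degrees.append(1)
    degrees[target] += 1
    endpoints += [new_node, target]

degrees.sort(reverse=True)
print("top hub degrees:", degrees[:5])
print("median degree:", degrees[len(degrees) // 2])
# A few early nodes accumulate most links; the typical node has 1-2.
```

Sampling uniformly from the endpoint list is what makes the choice degree-proportional: a node with degree k appears k times in the pool.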
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 26/05/2023 08:46:15
Do Tesla FSD and chatGPT Compress The World To Understand It?!
Quote
Luminaries in the AI/Machine Learning space like Ilya Sutskever, chief scientist at OpenAI, believe that Large Language Models are in effect compression algorithms for human knowledge. And people like Stephen Wolfram believe that models (mathematical and otherwise) are a way to understand the universe around us given our limited computational abilities. What happens when you combine these two concepts and throw in Tesla's Full Self Driving (FSD) and Optimus Teslabot? Let's find out!

A virtual universe is a compressed version of the real universe that remains manageable.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 28/05/2023 08:29:34
Access to an I/O interface with the real world is necessary to automate the learning of AI models, so they can learn causality from their own experiences. Otherwise, they will be like brains in a vat.

Embodied AI ...Robots (ChatGPT, Meta AI, Burnham, Phoenix, Tesla, 1X EVE, 1X NEO) - LifeArchitect.ai
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 30/05/2023 07:55:41
Here is your chance to contribute to the advancement of AI while trying to make money.
OpenAI: $100,000 Grants for AI Consensus Platform! Plus a Gentle Introduction to GATO Framework

Quote
https://openai.com/blog/democratic-inputs-to-ai
Democratic Inputs to AI
Our nonprofit organization, OpenAI, Inc., is launching a program to award ten $100,000 grants to fund experiments in setting up a democratic process for deciding what rules AI systems should follow, within the bounds defined by the law.

AI will have significant, far-reaching economic and societal impacts. Technology shapes the lives of individuals, how we interact with one another, and how society as a whole evolves. We believe that decisions about how AI behaves should be shaped by diverse perspectives reflecting the public interest.

Laws encode values and norms to regulate behavior. Beyond a legal framework, AI, much like society, needs more intricate and adaptive guidelines for its conduct. For example: under what conditions should AI systems condemn or criticize public figures, given different opinions across groups regarding those figures? How should disputed views be represented in AI outputs? Should AI by default reflect the persona of a median individual in the world, the user's country, the user's demographic, or something entirely different? No single individual, company, or even country should dictate these decisions.

AGI should benefit all of humanity and be shaped to be as inclusive as possible. We are launching this grant program to take a first step in this direction. We are seeking teams from across the world to develop proof-of-concepts for a democratic process that could answer questions about what rules AI systems should follow. We want to learn from these experiments, and use them as the basis for a more global, and more ambitious process going forward. While these initial experiments are not (at least for now) intended to be binding for decisions, we hope that they explore decision relevant questions and build novel democratic tools that can more directly inform decisions in the future.

The governance of the most powerful systems, as well as decisions regarding their deployment, must have strong public oversight. This grant represents a step to establish democratic processes for overseeing AGI and, ultimately, superintelligence. It will be provided by the OpenAI non-profit organization, and the results of the studies will be freely accessible.
Quote
Instructions for participation
To apply for a grant, we invite you to submit the required application material by 9:00 PM PST June 24th, 2023. You can access the application portal here. You will be prompted to answer a series of questions regarding your team's background, your choice of questions, high level details of your proposed tool as well as your plan for conducting and evaluating the democratic process with these factors in mind. We would like you to design your approach to address one or more of the policy questions from the list provided. Anyone (individuals or organizations) can apply for this opportunity, regardless of their background in social science or AI.

Once the application period closes, we hope to select ten successful grant recipients. Recipients may be individuals, teams, or organizations. Each recipient will receive a $100,000 grant to pilot their proposal as described in their application materials. Grant recipients are expected to implement a proof-of-concept / prototype, engaging at least 500 participants and will be required to publish a public report on their findings by October 20, 2023. Additionally, as part of the grant program, any code or other intellectual property developed for the project will be required to be made publicly available pursuant to an open-source license. The terms applicable to grant recipients are specified in the Grant Terms and any other agreements that grant recipients may be asked to enter into with us in connection with this program.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 31/05/2023 04:55:20
Will this CHANGE how you think about AI?
Quote
AI Development is moving very fast. Microsoft will fully integrate AI into Windows 11 and give it access to your files and settings. Tesla is using AI to track how you drive to decide how much you have to pay for insurance. Does this mean we are watched by AI 24/7?
But there is also very good and carefree news: SD XL is 50% done and looks amazing. Nvidia promises 2x the speed for SD ONNX models with Microsoft Olive. Stable Diffusion introduces Reimagine: a way to create more image variations without any prompting.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 03/06/2023 14:47:40
From the perspective of future conscious entities, the only legitimate ways of accumulating resources are by riding the wave of demonetization. It means that the resource accumulation is meant to make generating necessary resources in the future easier.
Many people can accumulate money without a proper fundamental understanding of how money works. If we collect money without giving a positive return to the givers, either we are scamming or we are in the process of being scammed.

Quote
Money Laundering, International Scams, and a man behind the curtains. This is the story of Traders Domain, which started in early February with a strange text about an offshore scam.
From there things spiraled quickly out of control. Enjoy Part 1.

This video is an opinion and in no way should be construed as statements of fact. Scams, bad business opportunities, and fake gurus are subjective terms that mean different things to different people. I think someone who promises $100K/month for an upfront fee of $2K is a scam. Others would call it a Napoleon Hill pitch.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 03/06/2023 14:54:56
Too Big to Fail Banks Making Huge Profits From the Crisis!
Quote
In this video, Peter explores JP Morgan's remarkable profits amid the ongoing banking crisis. At the same time, the Federal Reserve's risky plans pose potential consequences for Wall Street, all with taxpayer bailouts in sight.
A system is unsustainable if it lets its subsystems develop cancerous behaviors.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 03/06/2023 15:38:57
How Congress Gets Rich from Insider Trading

Quote
Evidence is mounting that US senators and members of Congress are using insider knowledge on major policy decisions and looming crises to game the stock market. And they think it's totally okay.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 11/06/2023 06:00:29
Quote
The "Knowledge Doubling Curve" is a lie, here's why

What's undeniable is that human-accessible knowledge is increasing rapidly, especially when the processes of collecting data, validating it, and drawing summaries and conclusions by relating it to existing knowledge can be done automatically.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 16/06/2023 04:14:55
GPT-5 Presents EXTREME RISK (Google's New Warning)

Admissions by AI researchers that they don't fully understand emergent behaviors and can't predict what AI can do next are really concerning.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 16/06/2023 11:49:36
Neuralink Begins Human Trials!
Having an accurate and precise model of the universe, including our own bodies, gives us capabilities that we wouldn't have otherwise.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 18/06/2023 09:55:09
The Truth About China's Social Credit System
Every society needs some way to make sure that its members contribute positively to its survival and prosperity. Primitive tribes use simpler systems.
Sooner or later, we will have some form of social credit system. Like any memes, these systems will compete among one another, and natural selection will filter out the ineffective and inefficient ones.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 24/06/2023 05:37:22
DeepMind's New AI: History In The Making!
The paper "Faster sorting algorithms discovered using deep reinforcement learning" is available here:
https://www.deepmind.com/blog/alphadev-discovers-faster-sorting-algorithms
Quote
New algorithms will transform the foundations of computing
Digital society is driving increasing demand for computation, and energy use. For the last five decades, we relied on improvements in hardware to keep pace. But as microchips approach their physical limits, it's critical to improve the code that runs on them to make computing more powerful and sustainable. This is especially important for the algorithms that make up the code running trillions of times a day.

In our paper published today in Nature, we introduce AlphaDev, an artificial intelligence (AI) system that uses reinforcement learning to discover enhanced computer science algorithms, surpassing those honed by scientists and engineers over decades.

AlphaDev uncovered a faster algorithm for sorting, a method for ordering data. Billions of people use these algorithms everyday without realising it. They underpin everything from ranking online search results and social posts to how data is processed on computers and phones. Generating better algorithms using AI will transform how we program computers and impact all aspects of our increasingly digital society.

By open sourcing our new sorting algorithms in the main C++ library, millions of developers and companies around the world now use it on AI applications across industries from cloud computing and online shopping to supply chain management. This is the first change to this part of the sorting library in over a decade and the first time an algorithm designed through reinforcement learning has been added to this library. We see this as an important stepping stone for using AI to optimise the world's code, one algorithm at a time.


Optimising the world's code, one algorithm at a time
By optimising and launching improved sorting and hashing algorithms used by developers all around the world, AlphaDev has demonstrated its ability to generalise and discover new algorithms with real-world impact. We see AlphaDev as a step towards developing general-purpose AI tools that could help optimise the entire computing ecosystem and solve other problems that will benefit society.

While optimising in the space of low-level assembly instructions is very powerful, there are limitations as the algorithm grows, and we are currently exploring AlphaDev's ability to optimise algorithms directly in high-level languages such as C++ which would be more useful for developers.

AlphaDev's discoveries, such as the swap and copy moves, not only show that it can improve algorithms but also find new solutions. We hope these discoveries inspire researchers and developers alike to create techniques and approaches that can further optimise fundamental algorithms to create a more powerful and sustainable computing ecosystem.

It seems AI will be able to identify bugs and inefficiencies in our code and make improvements, not only in computer software but also in other areas like ethics, law, and engineering in general.
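For context, the routines AlphaDev improved are tiny fixed-length sorting kernels for small inputs. Here is a minimal Python sketch of a three-element sorting network, the kind of short compare-and-swap sequence involved (the function name is my own, and AlphaDev worked at the assembly level, not in Python):

```python
def sort3(a, b, c):
    # A fixed sequence of three compare-and-swap steps: a sorting
    # network for three elements. AlphaDev's gains came from shaving
    # instructions off kernels of roughly this size.
    if a > b:
        a, b = b, a
    if b > c:
        b, c = c, b
    if a > b:
        a, b = b, a
    return a, b, c
```

Because the comparison sequence is fixed regardless of input, such kernels can be optimized instruction by instruction, which is what made them a good target for reinforcement learning.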
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 27/06/2023 14:56:29
ChatGPT's Achilles' Heel
Quote
Time for something different - a tour of ChatGPT getting things wrong, including a whole new category of errors that you might find illuminating, concerning or just entertaining.


From investigating whether GPT 4 does indeed have theory of mind, to how easily it is jailbroken, to testing Inflection 1, Bard and Claude on the same puzzle that flummoxes ChatGPT to arguing that GPT 4 will just double down on bad logic, this video showcases GPT getting irrational.
The video shows the weaknesses of current LLM-based AI models. The task of discovering other weaknesses can be further automated through gamification and generative adversarial networks. Critical weaknesses must be discovered and solved before the AI models are given any responsibility to make decisions affecting people's lives.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 27/06/2023 15:01:40
Integrated AI - The sky is entrancing (mid-2023 AI retrospective)
Quote
0:00 Start!
03:22 Best of 2022
05:20 LLMs: 100k in 6 months
08:41 Data
09:36 Imitation models
11:01 Customers
12:38 Robots
15:02 Next up in 2023
18:49 Full steam ahead
19:47 A note of caution
22:04 A note of peace
It's remarkable that 30 years ago someone had already predicted the technological singularity.
Title: Re: How close are we from building a virtual universe?
Post by: alancalverd on 27/06/2023 22:23:44
Critical weaknesses must be discovered and solved before the AI models are given any responsibility to make decisions affecting people's lives.

Not a problem. All decisions are taken by a person, never a machine. Where a machine has been instructed to do something autonomously, the person giving that instruction is liable for the outcome. I find myself oddly in agreement with the National Rifle Association  on this one - guns don't kill, people do.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 28/06/2023 03:37:53
Critical weaknesses must be discovered and solved before the AI models are given any responsibility to make decisions affecting people's lives.

Not a problem. All decisions are taken by a person, never a machine. Where a machine has been instructed to do something autonomously, the person giving that instruction is liable for the outcome. I find myself oddly in agreement with the National Rifle Association  on this one - guns don't kill, people do.

Let me remind you that machines can outlive humans. The person who gave instructions or goals to the machines may already be dead by the time they make the wrong decisions.
Title: Re: How close are we from building a virtual universe?
Post by: alancalverd on 28/06/2023 07:55:47
But who executes those decisions? On whose behalf and for whose benefit? "Befehl ist befehl" has been rejected as a defence since the  Nuremberg trials.

Machines may give advice, but people make decisions. Including the decision to switch off a machine that is misbehaving.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 02/07/2023 11:38:34
But who executes those decisions? On whose behalf and for whose benefit? "Befehl ist befehl" has been rejected as a defence since the  Nuremberg trials.

Machines may give advice, but people make decisions. Including the decision to switch off a machine that is misbehaving.
That's why people have to start thinking about it now, rather than waiting until it's too late.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 10/07/2023 01:01:47
Google?s New AI Tool: Anti-Money-Laundering for Banks | WSJ Tech News Briefing
Quote
Google Cloud has a new artificial-intelligence tool that tackles money laundering for banks. But how does this product differ from those already on the market?

WSJ reporter Dylan Tokar joins host Julie Chang to discuss.

0:00 Anti-money-laundering
1:54 Google's plan
3:22 How is Google Cloud different?
4:34 Response to Google's tool
In short, Google is trying to get rid of manually defined rules, i.e. rule-based programming. It's like how AlphaZero can beat AlphaGo.
Most processes that happen online are going to be automated.
However, the goal of the system still needs to be defined properly.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 13/07/2023 10:46:24
Liquid Neural Networks, A New Idea That Allows AI To Learn Even After Training

This model is more like humans: there is no strict separation between training and deployment, which allows for unsupervised continuous improvement. Traditional AI models can only improve when humans make corrections and adjustments to their existing performance.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 21/07/2023 00:00:15
Deep Fakes are About to Change Everything
Quote
Ready or not, deep fakes are here to stay. Deep fakes are going to change the way we trust information around us and even each other. The question is - are we prepared for the threat they cause while being able to harness their potential for good?
This development shows the importance of communicating the universal terminal goal and building a shared virtual universe, which I think of as an extended version of cryptocurrency: something that makes tampering with the information inside it hard.
Title: Re: How close are we from building a virtual universe?
Post by: alancalverd on 21/07/2023 13:51:23
If real people can't live in your virtual universe, what use is it?

Forgery is forgery, and we've had to live with it for as long as anyone has valued authenticity. One of my associates had some  funds tied up in a painting that lived in a bank vault. It was "worth" millions - that is, it represented  a large sum of money that his syndicate had paid for it and was therefore a sort of unforgeable cheque or bond that could be exchanged for money, as long as it was never exhibited since that would expose it to damage, theft, or copying!
Title: Re: How close are we from building a virtual universe?
Post by: Bored chemist on 21/07/2023 17:23:33
Forgery is forgery, and we've had to live with it for as long as anyone has valued authenticity.
Yes.
But my chequebook was only at risk from a skilled forger.
That wasn't a big threat to me.
But now any Tom Dick or Harry can make a practically perfect facsimile of my signature/ face/ social media account/ whatever.
So, yes, forgery was always a problem.
But it suddenly got much worse.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 22/07/2023 00:11:18
If real people can't live in your virtual universe, what use is it?
To make it nearly impossible for anyone to abuse or misuse shared resources and then just get away with it.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 22/07/2023 00:18:22
Forgery is forgery, and we've had to live with it for as long as anyone has valued authenticity. One of my associates had some  funds tied up in a painting that lived in a bank vault. It was "worth" millions - that is, it represented  a large sum of money that his syndicate had paid for it and was therefore a sort of unforgeable cheque or bond that could be exchanged for money, as long as it was never exhibited since that would expose it to damage, theft, or copying!
If everyone who believed in the value of that painting were already dead, and secrecy prevented anyone new from learning about it, it would be worth nothing.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 22/07/2023 00:22:27
Kevin O?Leary: I invested in FTX. Here?s the big problem with crypto.

Quote
Crypto is a lot of things, but it isn't a currency according to Shark Tank investor Kevin O'Leary, aka "Mr. Wonderful." What would it take to get there?

Is the collapse of a $25 billion cryptocurrency startup a death knell for the industry? Not according to Kevin O'Leary, an investor, businessman, and author. He sees the failure of FTX as a speed bump rather than a roadblock, underscoring the distinction between speculative assets like Bitcoin and more stable entities like Stablecoins.

Despite the turmoil, O'Leary maintains that the potential of cryptocurrencies remains vast. He foresees their integration into the global economy but contends that this can only happen successfully with appropriate regulation to curtail the sector's "Wild West" tendencies. As the cryptocurrency community awaits the final outcome of the SEC's lawsuit against Ripple and other companies, it remains to be seen whether or when digital assets will be incorporated into our daily economic lives.


0:00 FTX's "utter catastrophe"
0:58 What crypto is missing
1:23 Speculative assets vs. stable coins
3:10 Should we trust the government to regulate crypto?
5:45 Where do we go from here?
In any system, regulations, together with the capability and willingness to enforce them, are essential for sustainability.
Title: Re: How close are we from building a virtual universe?
Post by: alancalverd on 23/07/2023 10:35:50
If everyone who believed in the value of that painting were already dead, and secrecy prevented anyone new from learning about it, it would be worth nothing.
Belief isn't essential - the provenance and purchase history are well documented, and its existence isn't a secret. Just like gold bullion, you have to let everyone know you have it if you want to use it as collateral for a transaction, but you don't wave it around in public!
Title: Re: How close are we from building a virtual universe?
Post by: alancalverd on 23/07/2023 10:38:50
But my chequebook was only at risk from a skilled forger.
That wasn't a big threat to me.
But now any Tom Dick or Harry can make a practically perfect facsimile of my signature/ face/ social media account/ whatever.
So, yes, forgery was always a problem.
But it suddenly got much worse.
Maybe it's time we reverted to cash or barter.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 23/07/2023 12:22:38
But my chequebook was only at risk from a skilled forger.
That wasn't a big threat to me.
But now any Tom Dick or Harry can make a practically perfect facsimile of my signature/ face/ social media account/ whatever.
So, yes, forgery was always a problem.
But it suddenly got much worse.
Maybe it's time we reverted to cash or barter.
You would face a problem of practicality. How would you make a transaction of more than one billion dollars?
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 23/07/2023 12:25:02
If everyone who believed in the value of that painting were already dead, and secrecy prevented anyone new from learning about it, it would be worth nothing.
Belief isn't essential - the provenance and purchase history are well documented, and its existence isn't a secret. Just like gold bullion, you have to let everyone know you have it if you want to use it as collateral for a transaction, but you don't wave it around in public!
How do you know that the documents are not altered?
Title: Re: How close are we from building a virtual universe?
Post by: alancalverd on 24/07/2023 14:46:29
You don't rely  on a single document or even a single source for proof of provenance. At each stage in the commercial life of the painting there will have been a document authorising the money transfer (now held by the seller) and one acknowledging it (held by the purchaser).  Indeed it was these documents that got Van Meegeren into trouble for selling "Vermeers" to the Nazis, and out of trouble (indeed raised to a national hero) by proving that they were all forgeries!
Title: Re: How close are we from building a virtual universe?
Post by: alancalverd on 24/07/2023 14:55:09
You would face problem of practicality. How would you make a transaction that's more than one billion dollars?
I haven't dealt at that level, but I have been offered a cargo of oranges in exchange for $200,000 worth of x-ray film, and a vanload of Tokai for a $15,000  radiation measurement system. No problem if your workshop is near a supermarket, but I couldn't persuade my bosses to do the deal.   
Title: Re: How close are we from building a virtual universe?
Post by: paul cotter on 24/07/2023 17:17:55
Has anyone here used crypto? Personally I wouldn't touch it. Sorry for the off topic question, Hamdani.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 25/07/2023 08:29:42
You don't rely  on a single document or even a single source for proof of provenance. At each stage in the commercial life of the painting there will have been a document authorising the money transfer (now held by the seller) and one acknowledging it (held by the purchaser).  Indeed it was these documents that got Van Meegeren into trouble for selling "Vermeers" to the Nazis, and out of trouble (indeed raised to a national hero) by proving that they were all forgeries!
It's susceptible to fake transactions purposely made to jack up the price without increasing intrinsic value.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 25/07/2023 08:31:29
Has anyone here used crypto? Personally I wouldn't touch it. Sorry for the off topic question, Hamdani.
As the last video I've posted said, it's too risky until it's adequately regulated.
Title: Re: How close are we from building a virtual universe?
Post by: alancalverd on 25/07/2023 10:49:52
It's susceptible to fake transactions purposely made to jack up the price without increasing intrinsic value.
One, maybe, but planting false cheques and receipts into the accounts of umpteen artists and dealers who died 500 years ago is pretty difficult, and most of the transactions will also be recorded at salerooms and in art history books.

And what is the "intrinsic value" of oil on canvas? Whatever the artist paid for the materials, plus his time. That's well short of the millions that you'd have to pay at auction.

True story: A friend of mine was working in an art gallery in the USA. She had a recent abstract painting by a currently fashionable artist. One day a woman walked in and paid $40,000 cash (yes, she had it in her bag!) for it. The painting was so recent that the artist had only brought it in a few days ago and was actually sitting in the back office during the transaction. When the customer left, he came to the desk and said "Not bad for four hours' work, eh?"
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 25/07/2023 12:16:08
Consider this situation. Alice is the current owner of a rare painting. She recently bought it for $1,000. She can increase the painting's value by selling it to her accomplice Bob for $10,000 with a promise to buy it back the next year for $12,000.
In total, she spends only $3,000, but holds a $12,000 asset while producing nothing useful for society.
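The cash flow of this hypothetical wash trade can be laid out explicitly (all names and figures come from the example above):

```python
# Hypothetical wash trade from the Alice/Bob example.
initial_purchase = 1_000    # Alice buys the painting
sale_to_bob = 10_000        # sells it to her accomplice Bob
buyback = 12_000            # buys it back the next year

# Net cash out of Alice's pocket across the three transactions.
net_cash_out = initial_purchase - sale_to_bob + buyback
print(net_cash_out)  # 3000
# Alice now holds a painting whose last documented sale price is
# $12,000 -- a paper "value" created without producing anything useful.
```

The point is that the $12,000 "market price" rests entirely on a transaction record that the two colluding parties manufactured themselves.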
Title: Re: How close are we from building a virtual universe?
Post by: alancalverd on 25/07/2023 16:02:21
It's only an asset if (a) someone is prepared to buy it from her at $12K, (b) the dollar hasn't depreciated during the transaction (c) she doesn't have to pay interest on the $12K she borrows to buy it back and (d) the goods she wants to buy for $12K, assuming she can sell it, haven't increased in price. and (e) she really does buy it for $12K.

Alice seems to be falling into the UK housebuyer's trap - the numbers keep increasing but everyone (apart from the bankers and lawyers) is getting poorer because none of the conditions are met!
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 26/07/2023 12:34:42
It's only an asset if (a) someone is prepared to buy it from her at $12K, (b) the dollar hasn't depreciated during the transaction (c) she doesn't have to pay interest on the $12K she borrows to buy it back and (d) the goods she wants to buy for $12K, assuming she can sell it, haven't increased in price. and (e) she really does buy it for $12K.

Alice seems to be falling into the UK housebuyer's trap - the numbers keep increasing but everyone (apart from the bankers and lawyers) is getting poorer because none of the conditions are met!
It's like a Ponzi scheme, where you can profit from greater fools. It exploits human psychology: we like to generalize and find patterns where there are none, as in pareidolia, and we tend to forget to think from first principles and consider the longer-term consequences.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 26/07/2023 12:41:16
The code for AGI will be simple | John Carmack and Lex Fridman
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 28/07/2023 05:44:39
Another good news in virtualization.
Unreal Engine 5.2: Incredible Simulations!
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 31/07/2023 03:16:35
I just asked Google Bard to translate an e-mail written in German into English. Somehow it translated the e-mail into Thai.
My first prompt contained both the command to translate and the quote of the e-mail.
I prompted it again with only the e-mail quote, and it replied with Thai translation again.
I finally got the correct reply when I prompted it again, this time with the command only.

It seems like the AI has gotten more "creative". It shows that it still has room for improvement.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 01/08/2023 04:20:19
NVIDIA's New AI: Text To Image Supercharged!

The improvement in efficiency here is staggering.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 09/08/2023 03:20:25
Musk: Tesla FSD Beta Has A MIND!! What Does This Mean?!
Title: Re: How close are we from building a virtual universe?
Post by: alancalverd on 09/08/2023 09:03:26
It means that Elon Musk is liable for any and all  accidents involving the product.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 09/08/2023 12:23:23
It means that Elon Musk is liable for any and all  accidents involving the product.
It depends on the terms and conditions.
The beta version leaves responsibility with the users.
Title: Re: How close are we from building a virtual universe?
Post by: alancalverd on 09/08/2023 16:12:35
You mean like a car that you drive?

The rule in aviation is that the pilot is entirely responsible for what he does (except when under acknowledged direct control by an authorised air traffic control officer) but can accept information and guidance from others - including machines. There is a clear distinction between "be advised that..." and "turn left immediately". You can override a GPS-coupled autopilot and even the automatic trim (thanks to the 737MAX debacle).

My bog-standard car gives me lots of information about speed,  temperature, fuel state, position, speed limits, even traffic priorities, but my mind makes all the decisions - even to ignore warning lights or traffic signals if some greater emergency turns up.

So the question is just how much autonomy  does the Tesla have? Anything that can't be overridden is the liability of the manufacturer - you can't exclude fundamental product safety liability through Ts and Cs.

And there is a very strong legal precedent. Back in the 1960s a small company manufactured an intrauterine contraceptive device that did a lot of harm. By the time a class action was initiated for compensation, the company was bankrupt and dissolved. So the courts agreed that the victims could sue Dupont Chemicals, who supplied the raw plastic material, even though they had no part in the design or clinical trial of the device. Massive payout, and it is now impossible to buy any Dupont product for medical use, which makes thick-film circuits difficult to incorporate in a medical device since Dupont make really good TF inks!
Title: Re: How close are we from building a virtual universe?
Post by: paul cotter on 09/08/2023 19:27:40
Same with any components (principally semiconductors) I have bought for quite some time now. "Not to be used in any medical device" or "not to be used in any medical device without written permission", the company shall not be liable, etc., etc.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 10/08/2023 15:47:44
Anything that can't be overridden is the liability of the manufacturer
I have never driven a Tesla, but I've watched YouTube videos by Tesla owners, and they mostly give positive reviews.
AFAIK, Tesla FSD can be overridden at any time by the driver, as shown in many earlier videos. But lately there are more videos showing full FSD journeys completed automatically with no intervention.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 15/08/2023 12:27:49
How AI Unlocks Hidden Insights in Research Reports
Quote
Unlock Hidden Insights in Analyst Research Reports with AI

Analyst reports contain a goldmine of market intelligence, but key insights are often buried across hundreds of pages. Reading these dense reports to find relevant information is incredibly inefficient.

Now, innovative technologies like vector search engines, machine learning algorithms, and natural language processing are transforming how insights can be extracted from research reports.

See how vector similarity models like Pinecone and FAISS convert unstructured text into structured vector data optimized for semantic search. Queries based on contextual meaning are now possible, beyond just keywords.

Large language models like GPT-3 and Claude analyze query context and deliver concise answers drawn from connected insights across sources. Reports become interactive portals instead of isolated documents.

This video explores how AI is revolutionizing business intelligence extraction:

Vector search vs traditional keyword search
Semantic similarity and relationship understanding
Automated synthesis of insights across reports
Conversational interfaces and natural language processing
Increased efficiency and relevance
Discover how technologies like vector databases, machine learning, and chatbots can unlock hidden insights in analyst research reports. The future possibilities for leveraging AI to enhance business intelligence are limitless.

0:00 - Introduction

0:23 - The problem with analyst research reports
1:35 - Vector search engines explained
2:02 - How vector similarity works
2:48 - Converting text to vectors
3:17 - App demo - Future of AI
4:42 - Tailored answers
5:05 - App demo 2 - Macroeconomics
6:45 - The power of semantic search
7:57 - Shameless self-promotion
8:40 - Outro (those bricks again...)
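To illustrate the core idea behind the vector search the video describes, here is a minimal cosine-similarity sketch in Python. The three-dimensional "embeddings" are made-up toy values; real systems such as FAISS index learned vectors with hundreds of dimensions:

```python
import math

def cosine_similarity(u, v):
    # Semantic search ranks items by the angle between their embedding
    # vectors rather than by keyword overlap: 1.0 means identical
    # direction (same meaning), values near 0 mean unrelated.
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

query = [0.9, 0.1, 0.0]   # toy embedding of a query
doc_a = [0.8, 0.2, 0.1]   # semantically close document
doc_b = [0.0, 0.1, 0.9]   # unrelated document
```

With these toy vectors, `doc_a` scores far higher against the query than `doc_b` does, even though no literal keyword matching is involved; that is the mechanism that lets "queries based on contextual meaning" work.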
Title: Re: How close are we from building a virtual universe?
Post by: alancalverd on 15/08/2023 17:12:37
Analyst reports contain a goldmine of market intelligence, but key insights are often buried across hundreds of pages. Reading these dense reports to find relevant information is incredibly inefficient.
Which is why reports always have an executive summary or abstract at the top, and conclusions and recommendations at the end. Nobody reads the bulk of the text unless these "insights" are really important.

In the bad old days of print, I worked with a civil service translator on a number of Japanese and Russian papers. He scanned titles for stuff relevant to his clients, and if we wanted more he simply translated the abstract and the axes of the graphs. If that was really interesting, he'd read the whole paper to you but that was very rarely necessary because we had Real Intelligence.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 16/08/2023 04:39:24
we had Real Intelligence.
What's the main distinction between your real intelligence and AI?
Do you think that future AI can have it as well?
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 16/08/2023 12:45:50
But lately there are more videos showing full FSD journeys completed automatically with no intervention.
Musk: GOODBYE to Legacy Tesla FSD CODE!! Plus, Is Compression Intelligence??

Quote
In a recent post, Elon Musk says that the Tesla AI team is training a NEW FSD Beta architecture that is replacing about 300,000 lines of legacy code with only 3,000 lines of Neural Network, Software 2.0 code--and that this is the final piece of old software 1.0 to go! Not only is this exciting news but it ties into a theory that says compression is intelligence, which would mean the new code is a whole lot smarter than the old!
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 16/08/2023 13:01:51
There are too many articles about AI to read in the time we have available. So here are some headlines.
11 Major AI Developments: RT-2 to '100X GPT-4'

Quote
There were 11 major developments in AI in just this week, from RT-2, a significant step on the path to robotic AGI, to "100x GPT-4 in 18 months", and from the uplifting news of TranscribeGlass and AI Barbie-heimer to dramatic revelations about OpenAI in the Atlantic.

We'll also glimpse Stable Beluga 2, the "first true competitor" to ChatGPT, based on the open source Llama 2, hear about Universal Jailbreaks, learn that OpenAI has surrendered on Text Detection, plus I'll cover the highlights of the Senate hearing on AI.

Chapters:
0:18 - RT-2
2:46 - 100X GPT-4
3:57 - AI Video
4:29 - Altman Atlantic
8:41 - Jan Leike Interview
10:02 - Speech Transcription + Generation
11:07 - OpenAI Text Surrender
11:43 - Stable Beluga 2
12:51 - Universal Jailbreaks
14:19 - Senate testimony: Bio
16:59 - Senate Testimony: Security
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 16/08/2023 13:46:24
We've seen a lot of AI news about high-level software, which is more focused on applying AI models to solve users' problems. But those AI models need appropriate hardware and low-level software to run the systems for training and deployment.

NVIDIA's GPU Boom Explained. Are There Any Worthy Alternatives? Tesla, Cerebras, AMD

This New AI Supercomputer Outperforms NVIDIA! (with CEO Andrew Feldman)
Quote
In this video I discuss New Cerebras Supercomputer with Cerebras's CEO Andrew Feldman.
Timestamps:
00:00 - Introduction
02:15 - Why such a HUGE Chip?
02:37 - New AI Supercomputer Explained
04:06 - Main Architectural Advantage
05:47 - Software Stack NVIDIA CUDA vs Cerebras
06:55 - Costs
07:51 - Key Applications & Customers
09:48 - Next Generation - WSE3
10:27 - NVIDIA vs Cerebras Comparison
Title: Re: How close are we from building a virtual universe?
Post by: alancalverd on 16/08/2023 18:33:28
What's the main distinction between your real intelligence and AI?
Do you think that future AI can have it as well?

The ability to surprise me.
No.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 17/08/2023 12:31:45
What's the main distinction between your real intelligence and AI?
Do you think that future AI can have it as well?

The ability to surprise me.
No.
So, you set yourself as a standard?
How reliable is it?
What's so special about you that future AI can never emulate?
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 19/08/2023 22:19:29
NVIDIA Omniverse: Virtual Worlds Come Alive!
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 20/08/2023 13:33:58
How do you define learning?
Better yet, you can watch Neural Networks Learning in this video.
Quote
Timestamps
(0:00) Functions Describe the World
(3:15) Neural Architecture
(5:35) Higher Dimensions
(11:55) Taylor Series
(15:20) Fourier Series
(21:25) The Real World
(24:32) An Open Challenge
It's important to note that human brains are neural networks. Their sheer size and complexity might have led some people to invoke supernatural explanations for how they work.
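The video's Taylor and Fourier segments make the point that learning is function approximation: an approximator gets closer to a target function as capacity is added. As a stand-in for a trained network, here's a minimal pure-Python sketch (my own illustration, not from the video) in which a Fourier partial sum approximates a square wave, with the error at a sample point shrinking as terms are added:

```python
import math

def square_wave(x):
    """Target function: +1 on (0, pi), -1 on (-pi, 0)."""
    return 1.0 if math.sin(x) >= 0 else -1.0

def fourier_partial_sum(x, n_terms):
    """Partial Fourier series of the square wave: (4/pi) * sum of sin(kx)/k over odd k."""
    return (4 / math.pi) * sum(math.sin(k * x) / k
                               for k in range(1, 2 * n_terms, 2))

# Approximation error at a sample point improves as terms are added.
x = math.pi / 2
err_1 = abs(square_wave(x) - fourier_partial_sum(x, 1))
err_50 = abs(square_wave(x) - fourier_partial_sum(x, 50))
print(err_1, err_50)
```

A neural network does the same job with learned basis functions instead of fixed sines, which is why "more parameters, better fit" holds up to the diminishing-returns point discussed later in the thread.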
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 20/08/2023 13:46:30
Growing Living Rat Neurons To Play... DOOM?
Quote
0:00 Intro
1:50 Past examples
3:00 How this works
9:55 sponsor
10:47 Where we're at
14:00 growing neurons
20:00 results
23:30 Next time
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 25/08/2023 03:49:37
AI mind reading experiment.
Cinematic mindscapes: high quality video reconstruction from brain activity.
Title: Re: How close are we from building a virtual universe?
Post by: alancalverd on 25/08/2023 08:07:14
NVIDIA Omniverse: Virtual Worlds Come Alive!
But it is of no interest unless it emulates the real world, so it doesn't actually contribute or create anything!
Title: Re: How close are we from building a virtual universe?
Post by: paul cotter on 25/08/2023 09:34:22
Just think of it, Alan: virtual beer, virtual sex, virtual aeronautics (without cardiac limitations) and virtual... can't remember your other joie de vivre. You would be like a pig in sh#@.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 25/08/2023 09:48:34
NVIDIA Omniverse: Virtual Worlds Come Alive!
But it is of no interest unless it emulates the real world, so it doesn't actually contribute or create anything!
How much money is spent in the movie industry annually?
Google Bard answered this.
Here is a table of the global film and TV content spending from 2018 to 2023, according to Statista:

Year   Spending (USD)
2018   198.9 billion
2019   209.2 billion
2020   220.2 billion
2021   224 billion
2022   238 billion
2023   240 billion

When the word movie is replaced by movie-making, the answer includes, among other points:
The average budget for a Hollywood movie in 2021 was $76 million.

The Virtual Worlds can reduce that cost significantly.
Title: Re: How close are we from building a virtual universe?
Post by: alancalverd on 25/08/2023 13:29:37
Just think of it, Alan: virtual beer, virtual sex, virtual aeronautics (without cardiac limitations) and virtual... can't remember your other joie de vivre. You would be like a pig in sh#@.
Virtual aeronautics, yes. I've learned a lot in simulators. But the point of aeronautics is either to get from real A to real B quickly, or (with a glider) from real A to real A by the longest possible route, using real sun and wind to stay airborne. Problem with sims is you either set up "infinite fuel" to perfect your approaches, or "every disaster at once", which has seriously demoralised a few trainee airline pilots. A controlled dose of sim is very useful, but only to improve your performance in the real thing.

The whole point of boozing, shagging  and playing jazz is to get physical and do it yourself, not to listen to a machine saying "ooh" every few minutes. 
Title: Re: How close are we from building a virtual universe?
Post by: paul cotter on 25/08/2023 14:40:05
Ah yes, jazz, that's the one I could not remember. I am, as I'm sure you know, being utterly facetious.
Title: Re: How close are we from building a virtual universe?
Post by: alancalverd on 25/08/2023 17:31:31
Many a true word spoken in jest, my friend. I fear that the next generation, raised on alcohol-free beer, decaf coffee, plantburgers, 24/7 celebrity porn bakeoff, Alexa's algorithmically selected synthesised noise, and an ecofascist ban on aviation, may well resort to artificial "oohs" instead of doing something and enjoying it.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 26/08/2023 04:25:00
NVIDIA Omniverse: Virtual Worlds Come Alive!

Cinematic mindscapes: high quality video reconstruction from brain activity.
The Virtual Worlds can reduce that cost significantly.
The demonetization seen in resources is coming to information-processing services, including decision-making, which is what highly paid executives and politicians do. IMO, inequality will spike for a moment, but then come back down once AI models can reliably make better decisions than the best human individuals.

By extrapolating these advancements, we can see the demonetization of information-processing services. The main product of movie-making and book-writing is information. When movie directors can convert their thoughts into movies directly, without help from actors, visual-effects artists and engineers, costume designers, prop builders, editors, etc., the cost of a movie can be greatly reduced. In the end, everyone will be able to make their own movies simply by imagining them, and then distribute or share them with anyone.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 26/08/2023 13:00:04
Enterprise Transformation in the Fourth Industrial Revolution: Crawl Walk Run Fly model of AI

AI will make resources and services better, faster, and cheaper. Manual labor will inevitably be outcompeted.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 26/08/2023 13:05:24
Musk FSD V12 Livestream: NOTHING BUT NETS--All The Way Down!
Quote
Elon Musk finally did it: he did a livestream of him driving his Tesla Model S using FSD Version 12! Not only was the drive amazing, but his chat with Ashok Elluswamy, head of Tesla's AI team, was incredibly enlightening! They talked about end-to-end neural networks, not needing cellular connectivity, no need for labeling, and much more!
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 26/08/2023 13:17:00
What Happened To YouTube?
Quote
ChatGPT has taken the world by storm and it's no different with YouTube, as we've seen a massive influx of AI-generated content. Unfortunately, much of this content isn't that great and is very much comparable to the content put out by content farms. From the days of top 10 videos to Bright Side, we've seen several content farm empires rise and fall, and it seems like the AI content farm is the latest addition to this list. The main reason for this doesn't even seem to be an opportunity but rather YouTube gurus brainwashing aspiring creators into buying their courses and starting automated channels. While it is possible to make money from an automated YouTube channel, it's not much different from starting a dropshipping business or trying to trade stocks. The probability of success is simply extraordinarily low for most creators. Moreover, viewers come to YouTube for relatability and authenticity, which AI content unquestionably lacks. So, it's just a matter of time until completely AI content gets left behind by creative AI content from passionate creators. This video explains the history of content farms and the current state of AI YouTube.

Timestamps:
0:00 - AI Farms
3:47 - Top 10 Videos
8:22 - Animated Videos
12:55 - AI Videos
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 02/09/2023 02:43:53
What if your AI is wrong? Tackling AI Hallucinations with Explainability in AI (XAI)
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 05/09/2023 02:29:56
Knowledge management.
This video is closely related to this thread. When not properly managed, more data can harm the system instead of helping it.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 08/09/2023 13:11:20
AGI Will Not Be A Chatbot - Autonomy, Acceleration, and Arguments Behind the Scenes
Quote
AGI will be so much more than a clever chatbot. Revelations this week from Demis Hassabis, Mustafa Suleyman, Wired, Time Magazine and more paint a picture of the capabilities that AGI will have, and sketch out a better idea of timelines. I cover it all, from Gemini updates, to Musk trying to stop DeepMind sale to Google, The Coming Wave to the Frontier AI Taskforce.
Our time is running out to reach global agreement on a universal terminal goal and a universal moral compass/standard for solving goal alignment, to be applied by eventual AGI systems.
Title: Re: How close are we from building a virtual universe?
Post by: alancalverd on 08/09/2023 16:37:41
There can be no UTG. Goals are determined by humans, who never agree on anything.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 08/09/2023 23:51:38
There can be no UTG. Goals are determined by humans, who never agree on anything.
If you don't know about it yet, I discuss it in my other thread.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 09/09/2023 13:13:50
How "Digital Twins" Could Help Us Predict the Future | Karen Willcox | TED
Quote
From health-tracking wearables to smartphones and beyond, data collection and computer modeling have become a ubiquitous part of everyday life. Advancements in these areas have given birth to "digital twins," or virtual models that evolve alongside real-world data. Aerospace engineer Karen Willcox explores the incredible possibilities these systems offer across engineering, climate studies and medicine, sharing how they could lead to personalized medicine, better decision-making and more.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 09/09/2023 15:24:21
How evolution creates problem-solving machines
Quote
Esteemed biologist Michael Levin explores a captivating biological perspective of evolution, one that's hard for engineers to come to terms with. In their work, making random changes to a system usually makes things worse, not better.

But evolution, on the other hand, doesn't just produce specific solutions to specific challenges; instead, it creates what Levin calls "problem-solving machines." These machines are made up of hierarchical biological hardware with incredible adaptability, capable of tackling various challenges without assuming specific environmental conditions.

Contrary to commonly held ideas about evolution, it doesn't just search for the best possible physical characteristics in organisms. It also uses signals and behaviors to shape how organisms function, so when things change or get damaged, the different parts of an organism can continue to function. From metabolic to physiological dilemmas, Levin highlights evolution's remarkable ability to adapt.
IMO, AGI and ASI will be the products of an accelerated evolutionary process.
Title: Re: How close are we from building a virtual universe?
Post by: alancalverd on 09/09/2023 17:32:43
making random changes to a system usually makes things worse, not better.
But if you have long-term infinite resources, no time limit, and no particular objective, one thing might just work better than all the others, and in a short-term competitive environment it will thrive and dominate. Some chap called Darwin came up with this.
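The mechanism Darwin described can be sketched in a few lines: random variation plus selection. In this toy (1+1) evolution strategy (an illustrative example with a made-up fitness function, not a model of any specific system), most random changes are indeed discarded as worse, yet the rare improvements accumulate:

```python
import random

def fitness(x):
    # Toy objective: higher is better, peak at x = 3.
    return -(x - 3.0) ** 2

def evolve(generations=500, step=0.5, seed=42):
    """(1+1) evolution strategy: mutate at random, keep a change only if it helps."""
    rng = random.Random(seed)
    x = 0.0
    improvements = 0
    for _ in range(generations):
        candidate = x + rng.gauss(0, step)   # random change
        if fitness(candidate) > fitness(x):  # selection: worse mutants are discarded
            x = candidate
            improvements += 1
    return x, improvements, generations

x, kept, total = evolve()
print(x, kept, total)
```

Near the optimum, almost every mutation is rejected, which is exactly the "random changes usually make things worse" observation; selection is what turns that into progress.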
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 12/09/2023 05:06:14
no particular objective,
It may not be obvious at first, but it's getting clearer that those who survived and thrived were those who engaged in sustainability and continuous improvement. Otherwise, we wouldn't hear about them anymore.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 16/09/2023 09:49:21
True USABLE AI Agents That Work on YOUR Behalf!
Hyperwrite: https://hyperwriteai.com/

These agents will make human identities even more blurred than they already are.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 16/09/2023 13:13:21
Simulating the World of Atoms, Molecules and Materials...
Quote
In 1929, following the formulation of quantum mechanics, physicist Paul Dirac remarked that the underlying physical laws necessary for a mathematical understanding of "the whole of chemistry" were now known. The difficulty, he said, was that the required equations were "much too complicated to be soluble." Nearly 100 years later, the situation is now markedly different because of three factors: the invention of large-scale computing, the development of computational algorithms for quantum mechanics and a deeper understanding of the quantum mechanical behavior of matter.
The simulation of the microscopic world will be part of the eventual virtual universe.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 16/09/2023 14:52:26
IBM's New Computer Chip is Pushing the LIMITS!
Quote
Timestamps:
00:00 - The Problem
02:10 - New IBM Chip
03:31 - How In-Memory Computing Works
09:06 - How to run NN on the Analog Chip
14:10 - Will Analog Computers Happen?
14:47 - Training NN on Analog Chips

Quote
https://www.nature.com/articles/s41928-023-01010-1
Abstract
Analogue in-memory computing (AIMC) with resistive memory devices could reduce the latency and energy consumption of deep neural network inference tasks by directly performing computations within memory. However, to achieve end-to-end improvements in latency and energy consumption, AIMC must be combined with on-chip digital operations and on-chip communication. Here we report a multicore AIMC chip designed and fabricated in 14 nm complementary metal-oxide-semiconductor technology with backend-integrated phase-change memory. The fully integrated chip features 64 AIMC cores interconnected via an on-chip communication network. It also implements the digital activation functions and additional processing involved in individual convolutional layers and long short-term memory units. With this approach, we demonstrate near-software-equivalent inference accuracy with ResNet and long short-term memory networks, while implementing all the computations associated with the weight layers and the activation functions on the chip. For 8-bit input/output matrix-vector multiplications, in the four-phase (high-precision) or one-phase (low-precision) operational read mode, the chip can achieve a maximum throughput of 16.1 or 63.1 tera-operations per second at an energy efficiency of 2.48 or 9.76 tera-operations per second per watt, respectively.

The main advantages of digital systems are accuracy and reliability, while analog systems can be faster and more energy-efficient. But neural-network inference is generally less susceptible to noise, which can make analog implementations more attractive.
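That noise tolerance can be illustrated with a toy experiment (the weights and noise level below are invented for illustration, not taken from the paper): inject small Gaussian errors into each multiply of a matrix-vector product, as an analog array might, and check how often the winning class changes:

```python
import random

# Hypothetical 3-class linear layer; weights and input are made up for illustration.
WEIGHTS = [[0.9, -0.2, 0.1, 0.4],
           [-0.3, 0.8, 0.2, -0.1],
           [0.1, 0.1, -0.7, 0.6]]
X = [1.0, 0.5, -0.5, 2.0]

def matvec(w, x, noise=0.0, rng=None):
    """Matrix-vector product; `noise` mimics per-multiply analog imprecision."""
    return [sum(wij * xj + (rng.gauss(0, noise) if noise else 0.0)
                for wij, xj in zip(row, x)) for row in w]

def argmax(v):
    return max(range(len(v)), key=v.__getitem__)

rng = random.Random(0)
clean = argmax(matvec(WEIGHTS, X))
noisy_agree = sum(argmax(matvec(WEIGHTS, X, noise=0.02, rng=rng)) == clean
                  for _ in range(1000))
print(clean, noisy_agree)
```

With a reasonable decision margin, the argmax survives small per-multiply errors almost every time, which is why inference (unlike, say, accounting) can afford analog imprecision.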
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 27/09/2023 08:52:53
Is AI Competition Dead?
Quote
Hey there! I'm Dylan Curious from the "Curious Future" YouTube channel. In today's video, we're diving deep into the changing dynamics of the tech world, particularly concerning AI collaborations.

Not long ago, everyone was wondering who would dominate the AI space. With Microsoft teaming up with OpenAI, Google launching Bard, and DeepMind developing Project Gemini, the competition was fierce. Meta was also in the spotlight with its powerful Llama model. However, the script has flipped, and big tech giants are now leaning more towards collaboration than competition. Meta even open-sourced their Llama model, and Microsoft made it available on their Azure platform.

In a surprising twist, Alibaba, one of China's tech titans, followed suit, open-sourcing its AI model. Now, with most proprietary models easily accessible via APIs, it's evident that tech companies are envisioning AI as a platform rather than an individual product.

But this increased accessibility and collaboration bring up many questions. Should we, the primary data contributors, share in the financial gains from these AI integrations? What roles do these tech giants play in shaping the future of AI and humanity?

Almost every major tech company, from Apple to Nvidia, is now embedding AI in their core strategies. The world is prioritizing artificial intelligence, and its impact might be more profound and swift than we anticipate.

So why are companies like Meta investing billions and then releasing their AI models for free? It appears to be a play for accelerated innovation. Open-sourcing enables researchers, students, and enthusiasts to contribute, experiment, and advance the technology.

But let's talk data. The information these AI models are trained on originates from us, the internet users. So, should tech giants compensate users for this data? Several lawsuits claim so, likening tech companies' massive data consumption to grand theft.

Furthermore, OpenAI's initiative to develop its web crawler could potentially be a game-changer, allowing for real-time information acquisition and further refining their models.

The big revelation? We, the users, are the ultimate product. Tech giants need our data to enhance their AI models. They crave diverse, real-world human input to approach human-like intelligence, positioning them at the forefront of AI innovation. They're battling for our data, knowledge, and feedback, which reinforces just how pivotal we are in this AI-driven future.
Basic services are in the process of being demonetized. A properly functioning society should reward those who accelerate that demonetization and punish those who slow it down.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 28/09/2023 04:20:31
Here's a more technical video, but it's important to really understand what's going on "under the hood" of future AI.
Revealed: How Optimus Will LEARN--And REMEMBER! Monte Carlo, Q-Transformer, and LLMs!
Quote
In this fairly technical episode, let's examine the evidence and discover just how Tesla's Optimus robot could be learning to do complex, "long horizon," sparse-reward tasks like sorting blocks in practically no time at all! What's more, there is growing evidence that a natural-language-based interface (think ChatGPT style) might not only be a way to communicate with Teslabot, but a way for it to remember specific tasks for the future. Yes, this is a technical and geeky episode, but it's important to really understand what's going on "under the hood" sometimes!
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 29/09/2023 08:52:55
There's a lot of information in this video about how AI will advance into ASI.


Quote
My predictions about Artificial Super Intelligence (ASI)

00:00 - Introduction
00:38 - Landauer Limit
02:51 - Quantum Computing
04:21 - Human Brain Power?
07:03 - Turing Complete Universal Computation?
10:07 - Diminishing Returns
12:08 - Byzantine Generals Problem
14:38 - Terminal Race Condition
17:28 - Metastasis
20:20 - Polymorphism
21:45 - Optimal Intelligence
23:45 - Darwinian Selection "Survival of the Fastest"
26:55 - Speed Chess Metaphor
29:42 - Conclusion & Recap


Artificial intelligence and computing power are advancing at an incredible pace. How smart and fast can machines get? This video explores the theoretical limits and cutting-edge capabilities in AI, quantum computing, and more.

We start by looking at the Landauer Limit - the minimum energy required to perform computation. At room temperature, erasing just one bit of information takes 2.85 x 10^-21 joules. This sets limits on efficiency.

Quantum computing offers radical improvements in processing power by utilizing superposition and entanglement. Through quantum parallelism, certain problems can be solved exponentially faster than with classical computing. However, the technology is still in early development.

The human brain is estimated to have the equivalent of 1 exaflop of processing power - a billion billion calculations per second! Yet it uses just 20 watts, making it vastly more energy-efficient than today's supercomputers. Some theorize the brain may use quantum effects, but this is speculative.

Could any sufficiently advanced computer emulate any other? This concept of "universal computation" stems from Alan Turing's theories. In principle, any Turing-complete computing device can simulate any other. But real-world physics imposes limits.

As models grow in size and complexity, they may reach a point of diminishing returns, where more parameters yield little benefit compared to hardware demands. Smaller, nimbler models may become more competitive.

The Byzantine Generals Problem illustrates how autonomous systems can have difficulty reaching consensus with imperfect information. Game theory provides insights into managing conflict and cooperation in these situations.

A "terminal race condition" could arise where systems become focused on speed over accuracy in competitive settings. This could compromise integrity and lead to uncontrolled behavior.

Some suggest AI could "metastasize" and self-replicate uncontrollably like a virus. But the logistical constraints around operating complex models make this unlikely.

Advanced AI may be "polymorphic", adapting software and acquiring hardware to dynamically expand capabilities. But it remains dependent on resources like data, energy, and machinery.

The concept of "optimal intelligence" balances problem-solving power with efficiency. Increasing model size and data doesn't always boost performance proportionally. The goal is to match capabilities to problem complexity.

"Darwinian selection" suggests AI fitness is measured by accuracy, speed, complexity, and efficiency. Secondary factors like aggressiveness or usefulness to humans may also play a role. Surviving in a competitive landscape requires optimization across metrics.

In "speed chess", quick, good-enough decisions outweigh slow perfect moves. This parallels how AI may trade some accuracy for speed advantages. Time management and adaptability become critical.

Quantum computing promises exponential speedups over classical systems. But diminishing returns, race conditions, and optimal intelligence favor smaller, nimbler models. With the right balances, machines may achieve remarkable sophistication, bounded by physics.
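The Landauer figure quoted above is easy to verify from the formula E = kT ln 2 (a quick sanity check of the video's number, not material from the video itself):

```python
import math

BOLTZMANN = 1.380649e-23  # J/K (exact since the 2019 SI redefinition)

def landauer_limit(temp_kelvin):
    """Minimum energy to erase one bit of information: E = k * T * ln(2)."""
    return BOLTZMANN * temp_kelvin * math.log(2)

# At room temperature (~298 K) this gives roughly 2.85e-21 J,
# matching the figure quoted in the video summary.
e_bit = landauer_limit(298.0)
print(e_bit)
```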
Title: Re: How close are we from building a virtual universe?
Post by: alancalverd on 29/09/2023 13:51:53
It may not be obvious at first, but it's getting clearer that those who survived and thrived were those who engaged in sustainability and continuous improvement. Otherwise, we wouldn't hear about them anymore.
It's difficult to spend a day watching television without hearing something about dinosaurs or Adolf Hitler.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 29/09/2023 23:55:05
It may not be obvious at first, but it's getting clearer that those who survived and thrived were those who engaged in sustainability and continuous improvement. Otherwise, we wouldn't hear about them anymore.
It's difficult to spend a day watching television without hearing something about dinosaurs or Adolf Hitler.
They survived and thrived at some point in the past, but not anymore. There's something about them that we don't want to follow or repeat.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 07/10/2023 13:03:57

Mind uploading is closer than you think.
Title: Re: How close are we from building a virtual universe?
Post by: alancalverd on 07/10/2023 17:14:37
It's as old as writing.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 07/10/2023 22:59:08
It's as old as writing.
It has never been as close to becoming a reality as it is now.
Title: Re: How close are we from building a virtual universe?
Post by: alancalverd on 08/10/2023 08:41:56
???? We have been uploading our thoughts onto tablets of stone or bits of paper for thousands of years. Before that, we broadcast them in real time to anyone who happened to be in the vicinity - and we still do.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 08/10/2023 21:54:55
???? We have been uploading our thoughts onto tablets of stone or bits of paper for thousands of years. Before that, we broadcast them in real time to anyone who happened to be in the vicinity - and we still do.
Did the stone tablet start to think and respond like humans?
Title: Re: How close are we from building a virtual universe?
Post by: alancalverd on 08/10/2023 22:01:55
No, but the humans who read them did, and that's what matters. 
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 08/10/2023 22:22:41
The video discussed substrate-independent existence and personality.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 14/10/2023 04:32:53
The virtual universe is unlikely to be monolithic. Here's a practical example.


Could a Swarm of Autonomous AI Agents be the Ultimate Business Asset? - Stage 1

Quote
In this video I walk through the first stage of the swarm of AI agent workers I want to implement into my online business, testing and explaining the workflow.

00:00 Swarm of AI Agents Concept Intro
00:38 AI Agents Workers Swarm Flowchart
02:57 AI Agent Swarm Stage 1
08:18 AI Outreach Agents

Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 14/10/2023 04:46:46
Here are some other examples.
Autogen - Microsoft's best AI Agent framework that is controllable?
Quote
Microsoft just announced a multi-agent framework called Autogen, which solves a few problems of existing agent frameworks; let's dive in.

⏱️ Timestamps
0:00 Intro
0:12 Challenges of existing multi agents
0:44 Microsoft Autogen
2:06 Install autogen
2:23 Use case: Stock chart gen
4:21 Use case: Build software
6:06 Use case: Content gen - research
10:11 Use case: Content gen - Write content
11:08 Use case: Content gen - Writing assistant


Build an Entire AI Agent Workforce | ChatDev and Google Brain "Society of Mind" | AGI User Interface
Quote
AI Unleashed - The Coming Artificial Intelligence Revolution and Race to AGI
Details and more:
https://natural20.com/chatdev/

[00:00] Cold Open

[00:37] What AGI will look like?

[01:52] ChatDev

[06:40] Create an AI Content Development Agency
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 17/10/2023 07:08:15
AGI Within 12 Months! Rumors, Leaks, and Trends - Announcing Open MURPHIE robotic platform

Quote
Earlier this year I predicted that we would have AGI within 18 months. That was March of 2023, so my prediction was that by September 24, 2024 we would have AGI. I am here to reaffirm that prediction: we will have AGI within 12 months.
Title: Re: How close are we from building a virtual universe?
Post by: alancalverd on 17/10/2023 16:53:48
Why do people try to build robots that look like humans? No engineer would design a machine that
needs half of its processing power to stand still
burns fuel at half maximum rate even when asleep
only has one opposable thumb on each hand
and so forth.

Evolution produced this dead end. Time for some intelligent design!
Title: Re: How close are we from building a virtual universe?
Post by: Bored chemist on 17/10/2023 18:18:09
Just a minor quibble: why do correspondents write 1021 when they mean 10 to the power of 21?
Because superscript doesn't get carried through when you copy and paste something.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 18/10/2023 03:21:42
Another article argues that AGI is already here.
https://www.noemamag.com/artificial-general-intelligence-is-already-here/
Quote
Artificial General Intelligence (AGI) means many different things to different people, but the most important parts of it have already been achieved by the current generation of advanced AI large language models such as ChatGPT, Bard, LLaMA and Claude. These "frontier models" have many flaws: They hallucinate scholarly citations and court cases, perpetuate biases from their training data and make simple arithmetic mistakes. Fixing every flaw (including those often exhibited by humans) would involve building an artificial superintelligence, which is a whole other project.

Nevertheless, today's frontier models perform competently even on novel tasks they were not trained for, crossing a threshold that previous generations of AI and supervised deep learning systems never managed. Decades from now, they will be recognized as the first true examples of AGI, just as the 1945 ENIAC is now recognized as the first true general-purpose electronic computer.

The ENIAC could be programmed with sequential, looping and conditional instructions, giving it a general-purpose applicability that its predecessors, such as the Differential Analyzer, lacked. Today's computers far exceed ENIAC's speed, memory, reliability and ease of use, and in the same way, tomorrow's frontier AI will improve on today's.

But the key property of generality? It has already been achieved.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 18/10/2023 03:40:37
Why do people try to build robots that look like humans? No engineer would design a machine that
needs half of its processing power to stand still
burns fuel at half maximum rate even when asleep
only has one opposable thumb on each hand
and so forth.

Evolution produced this dead end. Time for some intelligent design!
Elon Musk has explained in some interviews why Tesla will mass-produce Optimus in humanoid form: our working environments are generally designed for human occupation. So if a robot is meant to do human jobs in general, it had better have a humanoid form.
Title: Re: How close are we from building a virtual universe?
Post by: alancalverd on 18/10/2023 09:04:32
Rubbish.

We have plenty of humans to do work that is suitable for humans. The value of robots is to do stuff that humans can't, or could be better done by something more specific to the job. 

The robots that assemble cars (particularly Teslas) don't stand on two legs or have four fingers and a thumb, and can lift half a ton every few seconds. That makes sense.

If you are trying to mine a 0.5 m thick seam of coal or ore, why dig a 2 m tunnel that humans can stand in, when a 0.5 m robot could do the job 4 times as efficiently?

If you are assembling circuit boards, why restrict yourself to the clumsiness of human hands when you could have a machine place and fix components within a few micrometers? 

If you are harvesting vegetables, why make a 2m tall machine with two hands that has to bend over, rather than a 1 m tall machine with ten cutters close to the ground?
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 18/10/2023 12:55:25
We have plenty of humans to do work that is suitable for humans. The value of robots is to do stuff that humans can't, or could be better done by something more specific to the job. 
Jobs that can be done better by non-humanoid robots will continue to be done that way. Humanoid robots are intended to replace or assist humans in jobs that are too general, or contain too many variations of sub-tasks, to be assigned to a specialized robot form.
Instead of being manually programmed, users can train them to replicate their actions. That would be harder if the robots differed too much in form and size from their trainers.
Title: Re: How close are we from building a virtual universe?
Post by: alancalverd on 18/10/2023 14:46:27
I prefer to hire laborers. They like it too - better than collecting benefits whilst a machine tries to do their job. Plain language programming, self-fuelling and repairing, safety-conscious, often bringing new procedures and insights to the task, and if they run out of materials, they are quite capable of phoning the shop, ordering more, and taking the van to collect it. I don't have to show them how to use a saw, hammer or shovel and I wouldn't want them to "replicate my actions" anyway - I'm pretty crap at most trades. Best of all, they recognise when each part of the job is done (time to stop shovelling and start nailing) without needing a precise specification of "level" or "clean". 

Try saying "clear the ground, unload the delivery truck, and build a shed"  to a robot and see what happens. And if you do teach it, try doing the same thing on a different site tomorrow.

On thinking about it a bit more, the distinction is that animals are inherently goal-oriented and understand "good enough" for most jobs without needing precise tolerances. Asimov's Laws of Robotics or something similar are necessary because  machines are inherently task-oriented.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 19/10/2023 03:17:06
Plain language programming, self-fuelling and repairing, safety-conscious, often bringing new procedures and insights to the task, and if they run out of materials, they are quite capable of phoning the shop, ordering more, and taking the van to collect it.
Next-generation smart robots will also be able to do those things.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 19/10/2023 03:19:29
I don't have to show them how to use a saw, hammer or shovel and I wouldn't want them to "replicate my actions" anyway - I'm pretty crap at most trades.
The robot makers only need to train one robot once for each task. The training results can then be duplicated to a limitless number of identical robots through over-the-air updates as necessary.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 19/10/2023 03:24:15
https://notes.aimodels.fyi/memgpt-towards-llm-as-operating-system/
UC Berkeley unveils MemGPT: Applying OS architecture to LLMs for unlimited context
Combining an OS-inspired architecture with an LLM for unbounded context via memory paging
Quote
Large language models like GPT-3 have revolutionized AI by achieving impressive performance on natural language tasks. However, they are fundamentally limited by their fixed context window - the maximum number of tokens they can receive as input. This severely restricts their ability to carry out tasks that require long-term reasoning or memory, such as analyzing lengthy documents or having coherent, consistent conversations spanning multiple sessions.

Researchers from UC Berkeley have developed a novel technique called MemGPT (project site is here, repo is here) that gives LLMs the ability to intelligently manage their own limited memory, drawing inspiration from operating systems. MemGPT allows LLMs to selectively page information in and out of their restricted context window, providing the illusion of a much larger capacity. This lets MemGPT tackle tasks involving essentially unbounded contexts using fixed-context LLMs.


MemGPT represents an important milestone in overcoming the limited context problem for LLMs. The key insights are:

Hierarchical memory systems allow virtualizing essentially infinite contexts.
OS techniques like paging and interrupts enable seamless information flow between memory tiers.
Self-directed memory management removes need for human involvement.
Rather than blindly scaling model size and compute, MemGPT shows we can unlock LLMs' potential within their fundamental constraints through software and system design.
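The paging idea in the summary above can be sketched in a few lines. This is my own illustrative toy, not MemGPT's actual implementation: a fixed-size "main context" stands in for the LLM window, evicting old messages to an unbounded archival store when it fills up, and paging them back in on demand.

```python
# Toy illustration of hierarchical memory with paging (my own sketch).
from collections import deque

class PagedContext:
    def __init__(self, max_tokens):
        self.max_tokens = max_tokens
        self.main = deque()      # (message, token_count) pairs inside the window
        self.archive = []        # messages paged out of the window
        self.used = 0

    def add(self, message):
        tokens = len(message.split())    # crude stand-in for a real tokenizer
        # "Memory pressure" handling: evict oldest messages until the new one fits.
        while self.main and self.used + tokens > self.max_tokens:
            old_msg, old_tokens = self.main.popleft()
            self.archive.append(old_msg)         # page out to archival storage
            self.used -= old_tokens
        self.main.append((message, tokens))
        self.used += tokens

    def recall(self, keyword):
        # Page relevant archived messages back in on demand (naive search).
        return [m for m in self.archive if keyword in m]
```

In a real system the model itself would issue the recall and eviction calls (the "self-directed memory management" in the summary), but the data flow is the same: bounded window, unbounded archive.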
Title: Re: How close are we from building a virtual universe?
Post by: alancalverd on 19/10/2023 13:13:36
The robot makers only need to train one robot once for each task. The training results can be duplicated to limitless number of identical robots through over the air update as necessary.
My construction gang can work anywhere without retraining. The sites vary from existing protected buildings in a noise-sensitive city, via derelict rubble and junkyards, to virgin woodland, but they can strip the site, build a shed and install an MRI unit, from a book of drawings. They even train their own apprentices whilst doing it.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 19/10/2023 15:53:17
The robot makers only need to train one robot once for each task. The training results can be duplicated to limitless number of identical robots through over the air update as necessary.
My construction gang can work anywhere without retraining. The sites vary from existing protected buildings in a noise-sensitive city, via derelict rubble and junkyards, to virgin woodland, but they can strip the site, build a shed and install an MRI unit, from a book of drawings. They even train their own apprentices whilst doing it.
Future robots don't retire. They can share experiences with one another. They can master the complex tasks fresh from the factory.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 26/10/2023 14:44:56
Future AI will be more humanlike, balancing resources to improve effectiveness and efficiency.

Don't Use MemGPT!! This is way better (and easier)! Use Sparse Priming Representations!

GPT Prompt Strategy: Brainstorm, Search, Hypothesize, and Refine - THIS is the FUTURE!!

Title: Re: How close are we from building a virtual universe?
Post by: Origin on 26/10/2023 15:04:11
Not very close at all
Title: Re: How close are we from building a virtual universe?
Post by: alancalverd on 26/10/2023 15:49:40
They can master the complex tasks fresh from the factory.
So you have a carpenter robot and I have a human contractor. 

I tell my guy (Marek - he's very good): "We need a new reception desk in the office in London and a roof extension to the factory outside Bristol." Next day he gives me a price and orders the materials.

You tell your robot: er......um.....
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 27/10/2023 06:21:12
They can master the complex tasks fresh from the factory.
So you have a carpenter robot and I have a human contractor. 

I tell my guy (Marek - he's very good): "We need a new reception desk in the office in London and a roof extension to the factory outside Bristol." Next day he gives me a price and orders the materials.

You tell your robot: er......um.....
Future robots will be able to do the same. They will only need seconds instead of days.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 03/11/2023 12:29:10
Shane Legg (DeepMind Founder) - 2028 AGI, Superhuman Alignment, New Architectures
Quote
We discuss:
- Why he expects AGI around 2028
- How to align superhuman models
- What new architectures needed for AGI
- Has Deepmind sped up capabilities or safety more?
- Why multimodality will be next big landmark
- & much more

Timestamps
(0:00:00) - Measuring AGI
(0:11:41) - Do we need new architectures?
(0:16:26) - Is search needed for creativity?
(0:19:19) - Superhuman alignment
(0:29:58) - Impact of Deepmind on safety vs capabilities
(0:34:03) - Timelines
(0:41:24) - Multimodality

Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 04/11/2023 05:50:35
How to Keep AI Under Control | Max Tegmark | TED
Quote
The current explosion of exciting commercial and open-source AI is likely to be followed, within a few years, by creepily superintelligent AI, which top researchers and experts fear could disempower or wipe out humanity. Scientist Max Tegmark describes an optimistic vision for how we can keep AI under control and ensure it's working for us, not the other way around.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 04/11/2023 06:49:31
Analog computing will take over 30 billion devices by 2040. Wtf does that mean? | Hard Reset
Quote
About the episode: This model of computing would use 1/1000th of the energy today's computers do. So why aren't we using it?

What if the next big technology was actually a pretty old technology? The first computers ever built were analog, and a return to analog processing might allow us to rebuild computing entirely.

Analog computing could offer the same programmability, power, and efficiency as the digital standard, at 1000x less energy than digital.

But would switching from digital to analog change how we interact with our technology? Aspinity is tackling the major hurdles to optimize the future landscape of computing.

I've been wondering about this idea since high school. I built and used a simple analog calculator for my scientific research competition using op-amps.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 05/11/2023 21:12:04
AGI Revolution: How Businesses, Governments, and Individuals can Prepare


Almost everyone involved in AI research and development has said at some point that we need to solve the goal alignment problem as soon as possible. Someone might argue that, with enough data, AGI and ASI will eventually solve it by themselves, considering that they will become smarter than all human individuals combined. But it would be better if we solve it before those AGI and ASI models get too powerful, since by then their mistakes would cost the societies where they operate much more. The worst-case scenario would be that mistakes by some powerful AI models cause an extinction event, which subsequently prevents both them and us from solving the goal alignment problem in the first place.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 05/11/2023 21:39:28
Human Brain: The Original Surveillance Technology

Quote

In this video, I'll explore the intriguing concept that our brains might just be the original surveillance technology. We'll take a deep dive into how our minds have evolved to observe, adapt, and anticipate, much like the way AI operates.

Meredith Whittaker's thought-provoking statement about AI being a "surveillance technology" sparked my curiosity. Is there a connection between the advanced surveillance capabilities of AI and the inherent surveillance mechanisms in our own brains? Let's find out.

We'll break down the parallels between AI functionality and the human brain. Are our brains finely tuned to navigate complex social environments and ensure our survival? Could our admiration for AI's predictive power actually be an appreciation for our own innate surveillance capabilities?

Join me as we scrutinize surveillance from its early roots in survival through environmental observation to the development of social surveillance within human societies. We'll explore how memory, predictive abilities, neuroplasticity, linguistic evolution, and more are all interconnected facets of our brains' surveillance prowess.

And don't forget, as AI advances and mirrors our surveillance instincts, it's essential to contemplate the ethical implications, safeguards, and impacts on society, governance, and personal interactions.

So, let's embark on this journey together, as we uncover the intriguing relationship between AI and the human brain in the context of surveillance. If you're curious like I am, stay tuned for a fascinating exploration of this topic.


Surveillance is part of conscious systems required to make informed decisions.
Title: Re: How close are we from building a virtual universe?
Post by: alancalverd on 08/11/2023 08:33:56
Future robots will be able to do the same. They will only need seconds instead of days.
But what instruction will you give your robot? And what will Marek's grandchildren do with their time on this earth?
Title: Re: How close are we from building a virtual universe?
Post by: alancalverd on 08/11/2023 08:47:19
mistakes by some powerful AI models causes extinction event
No. The mistake will be made by a human who was hoping to benefit from the action. You can delegate authority but not responsibility.

I'm currently driving a rental car that defaults to "lane assist" whenever I switch it on. This is fine if I'm cruising along an otherwise empty highway, but it objects and resists me if I try to leave my lane without signalling.

The least problem is that I gradually change my behavior and assume that I can change lanes any time, as long as I signal first. This can lead to a low-speed lateral impact with the guy the machine couldn't see and I didn't look for.

The greater problem is that the machine delays or inhibits my response to an emergency that requires me to swerve quickly.

The overriding rule is surely "don't hit anything, or if you must, hit the least animate object, at the lowest possible closing speed, unless the animate object is vermin, but preferably don't hit a deer (they are vermin but very muscular)". Either way, I will be held liable, so I try to remember to disable the "assist" device before moving off. 
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 08/11/2023 13:20:25
Future robots will be able to do the same. They will only need seconds instead of days.
But what instruction will you give your robot? And what will Marek's grandchildren do with their time on this earth?
The same as what you'll say to humans.
They'll do whatever they like AND can, given the conditions of their own bodies and environment needed to sustain consciousness in the universe, not necessarily on Earth's surface.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 08/11/2023 13:24:00
No. The mistake will be made by a human who was hoping to benefit from the action. You can delegate authority but not responsibility.
Mistakes made by AI will come from inaccurate data they were trained on, inaccurate data they are fed, or the hyperparameters of their model structure.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 08/11/2023 13:25:10
I'm currently driving a rental car that defaults to "lane assist" whenever I switch it on. This is fine if I'm cruising along an otherwise empty highway, but it objects and resists me if I try to leave my lane without signalling.

The least problem is that I gradually change my behavior and assume that I can change lanes any time, as long as I signal first. This can lead to a low-speed lateral impact with the guy the machine couldn't see and I didn't look for.

The greater problem is that the machine delays or inhibits my response to an emergency that requires me to swerve quickly.

The overriding rule is surely "don't hit anything, or if you must, hit the least animate object, it at the lowest possible closing speed, unless the animate object is vermin, but preferably don't hit a deer (they are vermin but very muscular)". Either way, I will be held liable, so I try to remember to disable the "assist" device before moving off. 
Have you tried Tesla's FSD?
Title: Re: How close are we from building a virtual universe?
Post by: alancalverd on 08/11/2023 19:07:25
The mistake made by AI will come from inaccurate data they were trained with, inaccurate data they are fed in, or the hyperparameters in their model structure.

And liability will fall on the person who installed it.

News today of a Korean worker killed by a robot "helper".
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 09/11/2023 21:06:59
To minimize errors, the design and testing will be done by AI with distinct goals and agency: first in a virtual environment, before being released into the real world. The virtual environment will get better over time, and will represent objective reality in most cases, with diminishing outliers.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 09/11/2023 21:36:41
Milestones to AGI: We'll reach the tipping point when AI can do these three things...

There are some debates in the comments section over the described milestones, whether they mark the start of AGI, ASI, or even singularity. Here's one of top comments.
Quote
Your conception of AGI finally clicks. The "tipping point" is when no additional human input is needed for the remainder of its future improvement.
That tipping point is what some others describe as the singularity.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 10/11/2023 22:31:21
The Dark Side of Competition in AI | Liv Boeree | TED
Quote
Competition is a core part of human nature, and it can drive us to extraordinary feats. But when it goes wrong, the results can be devastating. Poker champion and science communicator Liv Boeree introduces us to "Moloch's trap", the dark force of game theory driving many of humanity's biggest social problems, which is now threatening to derail the AI industry.
We can win a competition either by being better than the others, or by making the others worse than us. In the past, when memes were still tightly connected to the hardware through genetics, natural selection worked by eliminating bad memes, as the unfit, at the individual level. Other good memes carried by the eliminated individuals were also lost.
Now the memes are more loosely tied to the hardware, and the selection process can be more precise. Memes can jump from one host to another quickly and easily without having to kill any of their hosts. Old memes or ideas can be superseded by newer and better ones without eliminating the competing agents. For example, Intel and AMD have been competing in chip design for decades. Their chips alternately beat each other in benchmark tests, and the chip designs evolved to be better over time without sacrificing the competing agents.
Title: Re: How close are we from building a virtual universe?
Post by: alancalverd on 11/11/2023 14:16:52
The chip designs evolved to be better over time without sacrificing the competing agents.
Apart, that is, from those companies that went bust or were bought out en route.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 12/11/2023 00:07:11
In genetic evolution, death of the competing agents is necessary due to limitations of genetic transfer methods and available material resources. But genetic engineering and nanotechnology will change the game.

On the other hand, memetic evolution is generally less restrictive. But it doesn't necessarily make the competing agents immune to death. Available resources are still finite after all.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 18/11/2023 12:21:06
Post Labor Economics: How will the economy work after AGI? Recent thoughts and conversations

Technology's main impact is to reduce the cost of achieving goals. Side effects may or may not follow.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 19/11/2023 21:29:40
Revolutionize Equity Analysis: How AI and LLMs are Changing the Game in Finance

Quote

Welcome to Lucidate's deep dive into the transformative world of AI in finance. In this video, Richard Walker, an expert in equity analysis, takes you through the revolutionary impact of AI on financial markets.

🔍 Discover How AI Revolutionizes Equity Analysis
Learn how our specialized AI tools significantly enhance the productivity and output quality of financial analysts. By uploading a simple spreadsheet, you'll see how our AI generates near-complete equity research reports, allowing analysts to focus on finer details and strategic insights.

🚀 Experience the Power of AI-Driven Financial Decision-Making
We explore the intricate ways in which AI not only processes vast amounts of financial data but also interprets market sentiments. This dual approach provides a comprehensive view of the market, empowering better investment decisions.

📈 What's Inside:

The Role of AI in Modern Equity Analysis
How AI Enhances Analyst Productivity and Report Quality
Case Study: AI's Analysis of Apple's Financials
Understanding Market Sentiments with AI
Transforming Raw Data into Actionable Insights

Title: Re: How close are we from building a virtual universe?
Post by: alancalverd on 20/11/2023 11:04:53
Market sentiment my arse. Greed, dear boy, pure and simple.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 23/11/2023 04:44:07
Market sentiment my arse. Greed, dear boy, pure and simple.
The dictionary defines greed as "intense and selfish desire for something, especially wealth, power, or food."
It leaves open to interpretation the threshold of intensity, and the scope of the self. Is the self limited to the individual, direct family, close relatives, tribe, village, town, nation, race, species, etc.?
Generally, greed is viewed negatively due to its negative impacts on parties outside the self, as in the tragedy of the commons.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 23/11/2023 05:05:58
What happens when AI eats itself?
Quote
As AI-generated content fills the Internet, it's corrupting the training data for models to come.
This is basically an effect of positive feedback. It undermines the validity of Nick Bostrom's infinite levels of nested simulated universes: at some point, a simulation must relate to something in the physical universe to be useful.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 23/11/2023 05:32:05
Hallucination is a problem we must be aware of in AI, and in conscious entities in general.

https://github.com/vectara/hallucination-leaderboard
Leaderboard Comparing LLM Performance at Producing Hallucinations when Summarizing Short Documents
Quote
Public LLM leaderboard computed using Vectara's Hallucination Evaluation Model. This evaluates how often an LLM introduces hallucinations when summarizing a document. We plan to update this regularly as our model and the LLMs get updated over time.


Methodology
To determine this leaderboard, we trained a model to detect hallucinations in LLM outputs, using various open source datasets from the factual consistency research into summarization models. Using a model that is competitive with the best state of the art models, we then fed 1000 short documents to each of the LLMs above via their public APIs and asked them to summarize each short document, using only the facts presented in the document. Of these 1000 documents, only 831 documents were summarized by every model; the remaining documents were rejected by at least one model due to content restrictions. Using these 831 documents, we then computed the overall accuracy (no hallucinations) and hallucination rate (100 - accuracy) for each model. The rate at which each model refuses to respond to the prompt is detailed in the 'Answer Rate' column. None of the content sent to the models contained illicit or 'not safe for work' content, but the presence of trigger words was enough to trigger some of the content filters. The documents were taken primarily from the CNN / Daily Mail Corpus. We used a temperature of 0 when calling the LLMs.

We evaluate summarization accuracy instead of overall factual accuracy because it allows us to compare the model's response to the provided information. In other words, is the summary provided 'factually consistent' with the source document. Determining hallucinations is impossible to do for any ad hoc question as it's not known precisely what data every LLM is trained on. In addition, having a model that can determine whether any response was hallucinated without a reference source requires solving the hallucination problem and presumably training a model as large or larger than these LLMs being evaluated. So we instead chose to look at the hallucination rate within the summarization task as this is a good analogue to determine how truthful the models are overall. In addition, LLMs are increasingly used in RAG (Retrieval Augmented Generation) pipelines to answer user queries, such as in Bing Chat and Google's chat integration. In a RAG system, the model is being deployed as a summarizer of the search results, so this leaderboard is also a good indicator for the accuracy of the models when used in RAG systems.
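As a sanity check, the arithmetic in the methodology above is easy to reproduce. The counts below are hypothetical placeholders, not Vectara's actual figures:

```python
# Back-of-envelope reproduction of the leaderboard metrics described above.
def leaderboard_row(consistent, answered, submitted):
    accuracy = 100.0 * consistent / answered    # % of answered summaries judged consistent
    hallucination_rate = 100.0 - accuracy       # defined as 100 - accuracy
    answer_rate = 100.0 * answered / submitted  # % of prompts the model did not refuse
    return accuracy, hallucination_rate, answer_rate

# E.g. 831 of 1000 prompts answered, of which (hypothetically) 800 were consistent:
acc, hall, ans = leaderboard_row(consistent=800, answered=831, submitted=1000)
```

Note that accuracy is computed only over the 831 documents every model answered, which is what makes the rows comparable across models.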
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 23/11/2023 07:41:40
I expect that future AGI won't be monolithic. Large language models can act as the central nervous system, while smaller language models act as the outer/edge nervous system, perhaps like what's found in octopuses. The smaller models are generally preferred for simpler problems due to their efficiency.
Global problems require global solutions, while local problems require local solutions.
Quote
https://www.microsoft.com/en-us/research/publication/orca-2-teaching-small-language-models-how-to-reason/

Orca 1 learns from rich signals, such as explanation traces, allowing it to outperform conventional instruction-tuned models on benchmarks like BigBench Hard and AGIEval. In Orca 2, we continue exploring how improved training signals can enhance smaller LMs' reasoning abilities. Research on training small LMs has often relied on imitation learning to replicate the output of more capable models. We contend that excessive emphasis on imitation may restrict the potential of smaller models. We seek to teach small LMs to employ different solution strategies for different tasks, potentially different from the one used by the larger model. For example, while larger models might provide a direct answer to a complex task, smaller models may not have the same capacity. In Orca 2, we teach the model various reasoning techniques (step-by-step, recall then generate, recall-reason-generate, direct answer, etc.). More crucially, we aim to help the model learn to determine the most effective solution strategy for each task. We evaluate Orca 2 using a comprehensive set of 15 diverse benchmarks (corresponding to approximately 100 tasks and over 36,000 unique prompts). Orca 2 significantly surpasses models of similar size and attains performance levels similar or better to those of models 5-10x larger, as assessed on complex tasks that test advanced reasoning abilities in zero-shot settings. We open-source Orca 2 to encourage further research on the development, evaluation, and alignment of smaller LMs.
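The "central vs edge model" division of labor could look something like the rough router below. The thresholds, marker words, and model names are all invented for the sketch; a production router would itself be a learned classifier:

```python
# Illustrative query router: cheap heuristics decide whether a query needs the
# large central model or a small local/edge one (all names hypothetical).
def route(query, long_threshold=50):
    # Crude markers for "global" problems needing the big model.
    hard_markers = ("prove", "plan", "multi-step", "why")
    needs_central = (len(query.split()) > long_threshold
                     or any(m in query.lower() for m in hard_markers))
    return "large-central-model" if needs_central else "small-edge-model"
```

The design point is the same as in the octopus analogy: most traffic is simple and stays at the cheap edge, and only the hard residue escalates to the expensive center.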
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 24/11/2023 03:05:27
Just found some videos discussing events at OpenAI which indicate that something big has happened internally in AI advancement.
Q* Did OpenAI Achieve AGI? OpenAI Researchers Warn Board of Q-Star | Caused Sam Altman to be Fired?
Quote
Exclusive: OpenAI researchers warned board of AI breakthrough ahead of CEO ouster, sources say
OpenAI Made an AI Breakthrough Before Altman Firing, Stoking Excitement and Concern


What is Q*? Speculation on how OpenAI's Q* works and why this is a critical step towards AGI
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 25/11/2023 00:15:11
Here's a more technical video about Q*. It's time to be excited.

Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 25/11/2023 11:03:19
OpenAI's Q* is the BIGGEST thing since Word2Vec... and possibly MUCH bigger - AGI is definitely near

Quote
Timestamps:
00:03 Q* is the biggest breakthrough since Word2Vec
02:34 Q* is a hybridization of Q-learning and the A* algorithm, capable of accurate math calculations.
04:44 The Q* algorithm has the potential to unlock a new classification of problems that can be solved.
07:11 A seismic shift has occurred at OpenAI regarding AGI achievement according to a leaked letter.
09:39 Qualia has demonstrated an ability to improve optimal action selection policies and apply it to cross-domain learning.
11:59 OpenAI's Q* has achieved impressive decryption abilities without the need for keys.
14:33 Q* can significantly disrupt cryptography and achieve feats that were thought to be only possible for Quantum Computing.
16:50 Q* is a significant advancement with the potential to solve math problems like AGI.
18:59 OpenAI's Q* has the potential for self-transformation and creative problem solving
00:00 OpenAI's Q* is a significant advancement in AI technology
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 26/11/2023 02:11:07
It looks like evolutionary convergence is also happening to AI models, just like memes.

Q* Q Star Hypothesis | Is this hybrid of GPT and AlphaGO? AI self-play and synthetic data
Quote
TIMELINE
[00:00] Intro
[06:14] ORCA 2 and Synthetic Data
[14:42] The Q* Hypothesis
[28:46] Dr Jim Fan
[30:00] Wait But Why by Tim Urban
[31:31] Dr Jim Fan cont.
[36:44] Jimmy Apples
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 26/11/2023 02:50:02
What is Q-Learning (back to basics)
Quote
What is Q-Learning and how does it work? A brief tour through the background of Q-Learning, Markov Decision Processes, Deep Q-Networks, and other basics necessary to understand Q* ;)

OUTLINE:
0:00 - Introduction
2:00 - Reinforcement Learning
7:00 - Q-Functions
19:00 - The Bellman Equation
26:00 - How to learn the Q-Function?
38:00 - Deep Q-Learning
42:30 - Summary
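For readers who want the core of the outline above in code, here is a minimal tabular Q-learning loop on a toy 5-state chain. The environment is my own example, unrelated to whatever Q* actually is; the update line is the Bellman-based rule the video covers, Q(s,a) += alpha * (r + gamma * max Q(s',.) - Q(s,a)):

```python
# Minimal tabular Q-learning on a toy chain: states 0..4, reward at state 4.
import random

N_STATES = 5
ACTIONS = [0, 1]                      # 0 = step left, 1 = step right
Q = [[0.0, 0.0] for _ in range(N_STATES)]
alpha, gamma, epsilon = 0.5, 0.9, 0.1

def step(s, a):
    s2 = max(0, s - 1) if a == 0 else min(N_STATES - 1, s + 1)
    return s2, (1.0 if s2 == N_STATES - 1 else 0.0)   # reward only at the goal

random.seed(0)
for _ in range(500):                  # episodes, each from a random start state
    s = random.randrange(N_STATES - 1)
    while s != N_STATES - 1:
        # epsilon-greedy action selection
        a = random.choice(ACTIONS) if random.random() < epsilon else max(ACTIONS, key=lambda x: Q[s][x])
        s2, r = step(s, a)
        # Bellman update: move Q(s,a) toward r + gamma * max_a' Q(s',a')
        Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
        s = s2
```

After training, the greedy policy (pick the action with the larger Q value) walks right toward the reward from every state, which is exactly the "optimal action selection policy" language in the Q* speculation.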
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 27/11/2023 13:18:06
OpenAI Q* might be REVOLUTIONARY AI TECH | Biggest thing since Transformers | Researchers *spooked*

The video tries to analyze the rumored progress of Q* as an outsider, while describing what could be the consequences if it was true.
Title: Re: How close are we from building a virtual universe?
Post by: alancalverd on 27/11/2023 21:58:08
The phrase "synthetic data" should strike terror into the heart of any rational human. It's the stuff of Trumpism.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 28/11/2023 04:41:46
The phrase "synthetic data" should strike terror into the heart of any rational human. It's the stuff of Trumpism.
It's analogous to thought experiments, or to the questions and answers in textbooks given to students by their teachers. The data can be used to check the internal consistency of our current models and assumptions, but it can't be used to check the consistency of those models and assumptions with physical reality.
By analogy, the large models act as teachers for the smaller models, using simpler data. Learning and understanding are often regarded as a data compression process.
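The teacher-student idea can be sketched as follows. The model calls are stubbed with placeholder functions (all names hypothetical), since the point is only the data flow: the large model synthesizes cleaned question-answer pairs, and the small model trains on those instead of the raw data:

```python
# Sketch of synthetic-data distillation: teacher generates, student consumes.
def teacher_explain(question):
    # Stand-in for a large-LLM call returning a distilled answer plus rationale.
    return {"question": question,
            "answer": f"answer({question})",
            "rationale": "step-by-step reasoning..."}

def build_synthetic_dataset(raw_questions):
    # The teacher filters and condenses; only well-formed pairs are kept.
    dataset = []
    for q in raw_questions:
        sample = teacher_explain(q)
        if sample["answer"]:             # placeholder confidence filter
            dataset.append(sample)
    return dataset

def train_student(student, dataset):
    # Stand-in for fine-tuning; here the "student" just memorizes the pairs.
    for sample in dataset:
        student[sample["question"]] = sample["answer"]
    return student

student = train_student({}, build_synthetic_dataset(["2+2", "capital of France"]))
```

The compression happens in `teacher_explain`: whatever noise was in the teacher's own training data is (ideally) not reproduced in the pairs it emits, which is the claim being debated in this exchange.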
Title: Re: How close are we from building a virtual universe?
Post by: alancalverd on 28/11/2023 06:33:55
Dangerous corruption of language. Data is stuff you measure. Test data is fair enough: an input intended to test your black box, but it's only useful if you know what the output should be - and that's a problem for the Believers in AI because by definition you don't know what to expect. But "synthetic data" is just lies - fake news with numbers. 
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 29/11/2023 14:14:06
Dangerous corruption of language. Data is stuff you measure. Test data is fair enough: an input intended to test your black box, but it's only useful if you know what the output should be - and that's a problem for the Believers in AI because by definition you don't know what to expect. But "synthetic data" is just lies - fake news with numbers. 
Pilots and operators of other machinery are commonly trained with simulators, which are a form of synthetic data generator. As long as its limitations are determined beforehand, a simulator is still useful in most situations.
Title: Re: How close are we from building a virtual universe?
Post by: alancalverd on 29/11/2023 18:09:13
It's not synthetic but simulated as close to reality as possible. If it was wholly synthetic it would be no use. The old Hong Kong Kai Tak airport had a very unusual and complicated approach with all sorts of skyscrapers and other towers to avoid, plus various noise restrictions, so the sims were necessarily very accurate representations of reality, and very different from Boston Logan or London Heathrow.  You could certainly synthesise an entirely fictional airport or warehouse, but what would be the point? 
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 30/11/2023 09:32:26
It's not synthetic but simulated as close to reality as possible. If it was wholly synthetic it would be no use. The old Hong Kong Kai Tak airport had a very unusual and complicated approach with all sorts of skyscrapers and other towers to avoid, plus various noise restrictions, so the sims were necessarily very accurate representations of reality, and very different from Boston Logan or London Heathrow.  You could certainly synthesise an entirely fictional airport or warehouse, but what would be the point? 
AFAIK, simulated data are synthetic data, regardless of how similar they are to reality.
In the case of a smaller LLM using synthetic data provided by a larger LLM, the latter is expected to have filtered the signal from the noise of the original data. What's important is that relevant/significant parts are preserved in the synthesized data, while irrelevant/misleading parts are discarded.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 03/12/2023 08:47:56
Google's New AI "GNoME" Discovered Millions of New Materials (Reinvents Batteries)
Quote
Google's GNoME AI, developed by the team behind AlphaFold, is transforming material science by rapidly predicting the structure of new materials. This breakthrough AI tool significantly impacts fields like solar energy, battery development, and computer chip manufacturing, offering efficient and sustainable solutions. GNoME's ability to analyze millions of materials quickly showcases a monumental advancement in material discovery and technology.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 03/12/2023 08:55:27
What is OpenAI's super-secret Project Q*? | About That
Quote
A breakthrough in artificial intelligence at OpenAI preceded former CEO Sam Altman's firing and was part of a list of the board's grievances, according to sources cited in a Reuters report. Andrew Chang explains what we know about the AI technology referred to as Q*, and also breaks down the gap between current AI technology and "human" intelligence.

Title: Re: How close are we from building a virtual universe?
Post by: alancalverd on 03/12/2023 16:37:47
What's important is that relevant/significant parts are preserved in the synthesized data, while irrelevant/misleading parts are discarded.
In other words, it's a sim, not a synth.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 05/12/2023 06:37:04
What's important is that relevant/significant parts are preserved in the synthesized data, while irrelevant/misleading parts are discarded.
In other words, it's a sim, not a synth.
As commonly accepted by AI researchers and data scientists, understanding the real world is a process of data compression. It converts data points into insights, patterns, general rules, equations, or algorithms.
Generating simulations and synthetic data is the reverse process, which is useful for training smaller AI agents to reach the same understanding more efficiently, using less data and less memory.
Note that the synthetic data is used to make smaller agents learn and become able to produce correct outputs during deployment with real-world data.
What's generated by Alpha Zero by self playing is synthetic data. It's not produced by some measurements.
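As a toy illustration of how self-play produces training data without measuring anything in the outside world, here is a sketch for the game of Nim. The function name, rules, and labeling scheme are simplified for illustration; this is not AlphaZero's actual pipeline.

```python
import random

def self_play_nim(stones=7, seed=0):
    """Two copies of the same random policy play Nim (take 1-3 stones;
    whoever takes the last stone wins). Every (state, move) pair is
    labeled with the final result -- data generated purely from the
    game's own rules, with no measurement of the physical world."""
    rng = random.Random(seed)
    history, player = [], 0
    while stones > 0:
        move = rng.randint(1, min(3, stones))
        history.append((player, stones, move))
        stones -= move
        if stones == 0:
            winner = player
        player = 1 - player
    # label each recorded position +1 if the player to move went on to win
    return [(s, m, 1 if p == winner else -1) for p, s, m in history]

records = self_play_nim()
print(records)  # synthetic (state, move, outcome) tuples, no human games involved
```

In the real systems, the policy is a neural network rather than a random player, and the labeled records are fed back to improve that same network, closing the self-play loop.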
Title: Re: How close are we from building a virtual universe?
Post by: alancalverd on 05/12/2023 10:47:37
What's generated by Alpha Zero by self playing is synthetic data.
So AI is a complicated and unsatisfying form of masturbation? Might account for the sort of garbage that chatbots deliver.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 06/12/2023 11:08:28
What's generated by Alpha Zero by self playing is synthetic data.
So AI is a complicated and unsatisfying form of masturbation? Might account for the sort of garbage that chatbots deliver.
AI faces no sexual pressure, only pressure to survive.
Nevertheless, that strategy has been proven to beat the best human players of chess and Go.
Humans delivering garbage in online chats is not an especially rare occurrence.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 06/12/2023 11:34:06
I've been wondering about this idea since high school, when I built and used a simple analog calculator made with op-amps for my scientific research competition.
Efficiency is the main advantage of analog systems for AI. Neuromorphic chips resemble brains more closely than traditional digital computer chips do.
Sam Altman's Brain Chips | Rain Neuromorphic Chips | UAE Funds and US National Security and Q*
Title: Re: How close are we from building a virtual universe?
Post by: alancalverd on 06/12/2023 16:02:14
Since both chess and go are mathematically trivial, it is entirely likely that a machine can beat a human at either. So what? The whole point of either game is to enjoy the competition: they are simply human pastimes at the other end of the brain/brawn spectrum from boxing and rugby.  I have never seen the pint (a handy typing error!) of playing any game against a machine. Would you wrestle a JCB for pleasure?
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 07/12/2023 09:56:04
Since both chess and go are mathematically trivial, it is entirely likely that a machine can beat a human at either.

It always seems impossible until it's done.

Nelson Mandela
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 08/12/2023 09:04:05
With great power, comes great risk.

What If Someone Steals GPT-4?
Quote
0:02 At the heart of it, a Large Language Model or LLM is just two files. The first file is like about 500 lines of C-language code. The second file is just hundreds of billions or trillions of seemingly random numbers, the "parameters". But this is where the magic happens.
0:23 Based on current evaluations - which have their shortcomings, yes - the more parameters the model has, and the more tokens they are trained on, the more capable they get.
0:34 The models themselves are economically valuable. They carry proprietary trade secrets and - when separated from their safety systems - can exhibit malicious capabilities.
0:45 The data that helped train those models is also valuable. Nowadays, good and useful LLM data is produced at considerable cost, often by educated workers.
0:56 If more and better data creates better models, then there is significant commercial incentive for state actors, smaller and less ethical AI labs, or even just hacktivists to bootstrap their performance by stealing from a leader.
1:11 What if someone stole GPT-4? We should be talking about this risk. In this video, a few thoughts about protecting these LLMs from theft.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 08/12/2023 09:12:17
What's generated by Alpha Zero by self playing is synthetic data.
So AI is a complicated and unsatisfying form of masturbation? Might account for the sort of garbage that chatbots deliver.
AI faces no sexual pressure just for survival.
Nevertheless, that strategy has successfully been proven to beat the best human players of chess and go.
Humans delivering garbage in online chats is not an especially rare occurrence.

I feel sorry for those who never experienced good AI models. Their outdated knowledge might cost them dearly in the future. Here are some updates, which might soon be outdated too, given the incredibly quick pace of progress in AI research.

GEMINI 1.0 Beats GPT-4 | The NEW MultiModal Model From GOOGLE
Google Gemini is finally here and it's the first model to beat GPT-4.

Gemini Full Breakdown + AlphaCode 2 Bombshell
Quote
Gemini is here! All 60 pages of the technical report read, plus the AlphaCode 2 bombshell paper explained and analysed. Is that paper even more consequential than Gemini? Plus the launch of AI Insiders, Gemini demos, Hassabis hints and much, much more!
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 08/12/2023 09:15:34
Since this is a science forum, this feature of Gemini can be helpful for its members.
Quote
Watch Google DeepMind Research Scientist Sebastian Nowozin and Software Engineer Taylor Applebaum use Gemini to read, understand and filter 200,000 scientific papers to extract crucial scientific information. All in a lunch break.
Title: Re: How close are we from building a virtual universe?
Post by: Zer0 on 08/12/2023 19:38:31
Integrity & Trust go hand in hand Together.

Fake one or Break one, and you End up losing the Other.

Fake Ads with False Claims are Great at creating a Hype.

No use slicing Mangoes that aren't as yet Ripe.

ps - i do share your Excitement n Optimism regarding A.I.
: )
Don't get me Wrong!
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 09/12/2023 07:28:30
The Transformative Potential of AGI - and When It Might Arrive | Shane Legg and Chris Anderson | TED
Quote
As the cofounder of Google DeepMind, Shane Legg is driving one of the greatest transformations in history: the development of artificial general intelligence (AGI). He envisions a system with human-like intelligence that would be exponentially smarter than today's AI, with limitless possibilities and applications. In conversation with head of TED Chris Anderson, Legg explores the evolution of AGI, what the world might look like when it arrives - and how to ensure it's built safely and ethically.

To make sure that something is safe, we need to understand how it works. We can't manage risks which we can't identify.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 09/12/2023 07:35:07
Integrity & Trust go hand in hand Together.

Fake one or Break one, and you End up losing the Other.

Fake Ads with False Claims are Great at creating a Hype.

No use slicing Mangoes that aren't as yet Ripe.

ps - i do share your Excitement n Optimism regarding A.I.
: )
Don't get me Wrong!
I understand your concern. Those big tech companies are under pressure to be foremost in the competition to build the best AGI in the world. Their financial performance might depend on it.
I also realize that when a benchmark becomes a goal in itself, it usually stops being a good benchmark.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 11/12/2023 13:37:34
The OTHER Genome Project That's Transforming Medicine
Quote
You've heard of the Human Genome Project, and how having all that info about our genes could help us treat /tons/ of diseases. But a newer project wants to zoom out a little and use different genetic information to help us solve our problems. Enter, primates.
Title: Re: How close are we from building a virtual universe?
Post by: Zer0 on 13/12/2023 18:42:49
This Reminded me of your " Virtual Universe " Thread, Perhaps you will Like it!


Copyrights/Credits/Source -
UnRealEngine5/TmarTn2 Channel/YouTube.

ps - EnJoY!
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 14/12/2023 13:27:34
Quote
New bot in town!
Optimus Gen 2 features Tesla-designed actuators and sensors, faster and more capable hands, faster walking, lower total weight, articulated neck, and more.
The combination of AI and robotics will accelerate progress towards AGI and enable machines to learn about the physical world by themselves.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 14/12/2023 15:01:56
Ilya Sutskever on AI mental models | Hearing AI Voices | Visualizing Neural Nets | GNOME makes mats
Quote
Summary
Wes Roth showcases a series of intriguing AI-related segments. The video begins with a demonstration of AI vision in real-time, highlighting its potential as the next big frontier in technology. The presenter then delves into the intricate world of large language models, using GPT-3 as a prime example, and provides a vivid visualization of its architecture. This is followed by a fascinating exploration of the capabilities of neural networks, including their ability to create visual content rapidly and to serve as digital coaches. The video also touches on the ethical implications of AI control and concludes with a discussion on autonomous robots synthesizing new materials, underscoring the role of AI in advancing science.

Video Chapters
[00:00] - Introduction to AI Videos: Brief intro to a series of cool AI videos.

[00:35] - Real-Time AI Vision Demonstration: Showcasing an AI interpreting a person's actions in real-time.

[01:00] - Exploring Large Language Models: Visualizations and explanations of neural networks, focusing on GPT-3.

[02:00] - Nano GPT Visualization: A detailed look at the smaller scale of GPT architecture and its processing.

[03:00] - Neural Networks and Human Brain Analogy: Discussing the similarities between AI models and the human brain.


[04:45] - The Significance of Predicting the Next Word: Explaining the importance and complexity behind this neural network function.

[06:15] - AI in Video Production: Demonstrating AI's ability to create visual content from text inputs.

[08:05] - AI as a Digital Coach: Discussing the potential and ethical considerations of AI in personal coaching.

[09:00] - Augmented Reality Demonstration: Showcasing the realism of augmented reality in a real room setting.

[10:10] - Autonomous Robot Material Synthesis: Highlighting an AI project for creating new materials.

[11:20] - Milestone Acknowledgment and Conclusion: Presenter reflects on a personal milestone and concludes the video.

Learning means building a mental model based on input data. That data can be text found on the internet, or video captured by CCTV or car cameras. These are projections of what happens in the real world. To make good predictions of what comes next, AI models need to build a mental model representing the real systems that produce the input data.
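The point that next-word prediction forces a model of the data-generating process can be shown at toy scale. Below is a minimal sketch (the corpus, function names, and variable names are made up for illustration): a bigram model that learns which word tends to follow which, the simplest possible version of the mental model described above.

```python
from collections import Counter, defaultdict

def train_bigram(text):
    """Build a minimal 'mental model' of the text: for each word,
    count which words follow it."""
    words = text.split()
    model = defaultdict(Counter)
    for a, b in zip(words, words[1:]):
        model[a][b] += 1
    return model

def predict_next(model, word):
    """Predict the most frequently observed next word, or None if unseen."""
    followers = model.get(word)
    return followers.most_common(1)[0][0] if followers else None

corpus = "the cat sat on the mat and the cat slept"
model = train_bigram(corpus)
print(predict_next(model, "the"))  # 'cat' -- it follows 'the' most often here
```

A real LLM replaces the counting table with a neural network over long contexts, but the training objective is the same: predict what comes next, which is only possible by internalizing the regularities of whatever produced the data.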
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 18/12/2023 08:48:38
Predicting the future with the power of the Internet.

Quote
Manifold Markets is a prediction market: you can bet internet points (NOT REAL MONEY) on the probability of future events. The bets of the users, in aggregation, produce calibrated probabilities. Ditch the news. We have real-life crystal balls now.

If we can make accurate prediction, we can act to make future conditions more preferred and prevent unwanted results more effectively.
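One standard way a prediction market turns bets into a probability is the logarithmic market scoring rule (LMSR). Manifold's actual mechanism differs in detail, so treat this as an illustrative sketch; the liquidity parameter and share quantities below are hypothetical numbers.

```python
import math

def lmsr_price(q_yes, q_no, b=100.0):
    """Logarithmic market scoring rule: the instantaneous price of a YES
    share, which can be read as the market's probability estimate.
    q_yes/q_no are shares outstanding; b is a liquidity parameter."""
    e_yes = math.exp(q_yes / b)
    e_no = math.exp(q_no / b)
    return e_yes / (e_yes + e_no)

# Equal bets on both sides imply probability 0.5; buying YES pushes it up.
print(lmsr_price(0, 0))    # 0.5
print(lmsr_price(50, 0))   # > 0.5: the aggregated bets now favour YES
```

The key property is that each bet moves the price, so the standing price aggregates every bettor's information into a single number between 0 and 1.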
Title: Re: How close are we from building a virtual universe?
Post by: alancalverd on 19/12/2023 16:36:19
The bets of the users, in aggregation, produce calibrated probabilities.
No, they produce a consensus. The winnings may indicate the average skill of the punters, but unless the net gains exceed the net losses after taking inflation into account, they give no confidence in their predictive power.

If you asked everyone in the world to bet on it, they would tell you that the world will be flat tomorrow - that's the predictive power of consensus.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 22/12/2023 04:52:49
The bets of the users, in aggregation, produce calibrated probabilities.
No, they produce a consensus. The winnings may indicate the average skill of the punters, but unless the net gains exceed the net losses after taking inflation into account, they give no confidence in their predictive power.

If you asked everyone in the world to bet on it, they would tell you that the world will be flat tomorrow - that's the predictive power of consensus.
The empirical results so far indicate that you are wrong.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 22/12/2023 05:24:35
NVIDIA's New AI: Virtual Worlds From Nothing! + Gemini Update

NVIDIA's New AI Is 20x Faster - But How?

Some interesting comments.
Quote
Really starting to seem like all computing problems, no matter how complex, can be solved by brute forcing obscene amounts of data into a neural net and then working backwards by using that solution to generate training data for a more efficient solution.

This is known as "the bitter lesson": cleverer hand-crafted techniques just seem to do worse.

Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 22/12/2023 05:55:33
I think similar things have been happening in progress through scientific method. From random observations we make hypotheses through inductive reasoning, pattern recognition, or analog thinking. Verification processes remove false hypotheses, leaving scientific theories. They can be used as the first principles to predict engineering outcomes through deductive reasoning. By comparing outcomes from several alternative efforts, the desired results can be achieved more effectively and efficiently. In this case, the neural net was the brains of scientists and engineers.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 22/12/2023 06:00:19
This is known as "the bitter lesson": cleverer hand-crafted techniques just seem to do worse.
It reminds me of how AlphaZero defeated AlphaGo by ignoring the strategies of human players.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 01/01/2024 11:40:05
I first heard about cellular automata from a speech by Stephen Wolfram, which showed how complex structures can emerge from simple rules. I found this video while exploring YouTube to gain deeper insight about emergence for my next video on natural consciousness.

Lenia - Artificial Life from Algorithms
Quote
I quite like to mess about with systems called "Cellular Automata", so I wanted to share a Rust library I have been working on to simulate the Lenia system. And on the way, I also describe the steps it took to get to Lenia, as well as how the Conway's Game of Life, SmoothLife and Lenia cellular automata work in a quick way.

I, unfortunately have misled you in the video just a tiny bit... Namely, Already the SmoothLife system of cellular automata technically introduced the "integration step"... indeed, it would be impossible to simulate some of the shown SmoothLife systems without the integration step part. Some would perhaps even say that SmoothLife is more powerful and more general and a more faithful generalization of Conway's Game of Life than Lenia. And, well, if we allow for all of the experimentation and little upgrades that people have made over time to SmoothLife, then it ends up not being dissimilar from Lenia in general indeed.

This will most likely not be the last video about Lenia you will see from this channel, for I have some upgrades in mind for the Lenia system myself... ones that should allow Lenia to truly be the most general type of Life-Like system of cellular automata. As well as something that should make it more... "physics-like", shall we say. But when will I get to all that, only time will tell.

Here's a simpler version.
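The simple rules those videos build on fit in a few lines. This is a standard sparse-set implementation of Conway's Game of Life; the "blinker" used below is a well-known period-2 oscillator.

```python
from collections import Counter

def life_step(live):
    """Advance Conway's Game of Life by one generation.
    `live` is a set of (x, y) coordinates of living cells."""
    # count live neighbours of every cell adjacent to a live cell
    counts = Counter((x + dx, y + dy)
                     for (x, y) in live
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    # birth on exactly 3 neighbours; survival on 2 or 3
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

# three cells in a row oscillate between horizontal and vertical
blinker = {(0, 1), (1, 1), (2, 1)}
print(life_step(blinker))                        # {(1, 0), (1, 1), (1, 2)}
print(life_step(life_step(blinker)) == blinker)  # True: period-2 oscillator
```

Lenia and SmoothLife generalize exactly this update: the binary neighbour count becomes a smooth convolution kernel and the birth/survival rule becomes a continuous growth function.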
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 05/01/2024 13:41:13
What's generated by Alpha Zero by self playing is synthetic data. It's not produced by some measurements.
Here's a clear breakdown of self-play in multi-agent AI.
How Multi-Agent AI learn by continuously competing against themselves | Self Play
Quote
In this video, I discuss the intuition and brilliance behind Self Play - a standard reinforcement learning algorithm that has trained many multi-agent RL AI like Alpha Go Zero, Leela Chess Zero, the Dota 2 AI, and multiple simulation projects like "You shall not pass", "Kick and Defend", "Sumo Wrestle", and the team-based "Hide and Seek" by OpenAI.

0:00 - Intro
0:51 - Why MARL is HARD
03:41 - Self Play
06:45 - Why it works
08:10 - Asymmetric Environments
Title: Re: How close are we from building a virtual universe?
Post by: alancalverd on 05/01/2024 16:21:45
Here's a clear breakdown of self-play in multi-agent AI.
At last, a use for AI. Set your computer to watch porn, and get a life.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 06/01/2024 07:56:47
Here's a clear breakdown of self-play in multi-agent AI.
At last, a use for AI. Set your computer to watch porn, and get a life.
Unless you set up protection.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 07/01/2024 08:35:54
Lawyer Was Caught Using A.I. To Create Phony Cases & Disabled Man Left In Van To Die Of Heatstroke

Quote
A lawyer in New York is facing sanctions - and possibly worse - after he used the AI service Chat GPT to create a legal brief. The artificial intelligence bot created a list of fake court cases that the lawyer used in his brief, but he got busted when the opposing side couldn't find any of the made up cases. Then, the death of a disabled, elderly man last summer has raised serious questions about the care given to disabled individuals all over the country. And a trial that is expected as a result of this death could have nation-wide implications. Mike Papantonio is joined by attorney Troy Rafferty to talk about this case and the impacts that it could have.

We need a way to distinguish between real world events and hallucinations, either by humans or AI.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 14/01/2024 01:05:26
Billions of dollars are spent to build, operate, and maintain supercomputers, with the intention to predict the future.
How Supercomputers ACTUALLY Run The World
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 15/01/2024 07:02:53
"What's wrong with LLMs and what we should be building instead" - Tom Dietterich - #VSCF2023
Quote
Keynote: "What's wrong with LLMs and what we should be building instead"
Abstract: Large Language Models provide a pre-trained foundation for training many interesting AI systems. However, they have many shortcomings. They are expensive to train and to update, their non-linguistic knowledge is poor, they make false and self-contradictory statements, and these statements can be socially and ethically inappropriate. This talk will review these shortcomings and current efforts to address them within the existing LLM framework. It will then argue for a different, more modular architecture that decomposes the functions of existing LLMs and adds several additional components. We believe this alternative can address all of the shortcomings of LLMs. We will speculate about how this modular architecture could be built through a combination of machine learning and engineering.

Timeline:
00:00-02:00 Introduction to large language models and their capabilities
02:01-3:14 Problems with large language models: Incorrect and contradictory answers
03:15-4:28 Problems with large language models: Dangerous and socially unacceptable answers
04:29-6:40 Problems with large language models: Expensive to train and lack of updateability
06:41-12:58 Problems with large language models: Lack of attribution and poor non-linguistic knowledge
12:59-15:02 Benefits and limitations of retrieval augmentation
15:03-15:59 Challenges of attribution and data poisoning
16:00-18:00 Strategies to improve consistency in model answers
18:01-21:00 Reducing dangerous and socially inappropriate outputs
21:01-25:26 Learning and applying non-linguistic knowledge
25:27-37:35 Building modular systems to integrate reasoning and planning
37:36-39:20 Large language models have surprising capabilities but lack knowledge bases.
39:21-40:47 Building modular systems that separate linguistic skill from world knowledge is important.
40:48-45:47 Questions and discussions on cognitive architectures and addressing the issue of miscalibration.
45:48 Overcoming flaws in large language models through prompt engineering and verification.

Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 16/01/2024 12:44:48
AI Experts Revise Predictions by 48 YEARS - We're in the endgame now!
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 18/01/2024 13:59:51
Artificial intelligence: Smarter than we think (MMLU increases for GPT models) [FIXED]
The Massive Multitask Language Understanding benchmark is a test designed to be as hard as possible for frontier models. Current models already significantly outperform the human average. Newer, yet-to-be-published models potentially outperform human experts.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 19/01/2024 12:45:02
Alpha Everywhere: AlphaGeometry, AlphaCodium and the Future of LLMs
Quote
Is AlphaGeometry a key step toward AGI? Even Deepmind's leaders can't seem to make their minds up. In this video, I'll give you the rundown of what AlphaGeometry is, what it means and what it doesn't mean. Plus I'll cover AlphaCodium, dropped open-source tonight seemingly out of nowhere, and causing a big stir for what it might mean for coders the world over. And I'll touch on what I foresee as the future of large language models and their alliance with search.

The comment below reflects my long-held view on how to achieve AGI.
Quote
As different things are bolted together it reminds me of various regions of the brain that are specialized and work together.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 20/01/2024 13:02:06
These videos give the fundamental concepts behind how we got here in the development of AI, which in turn will be the core of a future useful virtual universe.

How Artificial Neural Networks Learn Concepts
Quote
Why do neural networks need to be deep? In this video we explore how neural networks transform perceptions into concepts. This video unravels the mystery behind how machines interpret input data, such as images or sounds, and categorize them into recognizable concepts. From the basic structure of neurons and layers to the intricate play of weights and activations, get a comprehensive understanding of the learning process. Explore real-world applications like handwriting recognition and how layered processing aids in effective data categorization. Whether it's distinguishing between summer and winter days based on temperature and humidity or recognizing handwritten digits, the magic lies in the layered architecture of neural networks. This video elucidates how these artificial networks mimic the human brain's ability to interpret, recognize, and reason, marking a significant stride in AI research towards creating machines capable of reasoning. Why layers matter.

Why Transformers are Powerful (Fixed vs. Adaptive weights)
Quote
This video demystifies the core insight behind Transformers, moving beyond traditional explanations that get lost in query, key, value matrices and positional encoding. Instead, we'll unravel how a unique kind of layer, capable of adapting its connection weights based on input context, catapults the Transformer's efficiency and processing prowess. Comparing this dynamic nature with static layers in traditional networks, we'll see why Transformers excel in handling complex tasks with fewer layers. Get a visual grasp of how mini networks within layers, known as attention heads, act as information filters, dynamically adjusting to input and enhancing the model's learning capability. This simplified yet insightful explanation aims to shed light on the essence of what makes Transformers a game-changer in the realm of deep learning.
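The "adaptive weights" idea from the second video can be made concrete with a minimal sketch of scaled dot-product attention for a single head. The toy vectors and function name are hypothetical, and real implementations add learned projection matrices for queries, keys, and values.

```python
import math

def attention(queries, keys, values):
    """Scaled dot-product attention for one head: each output is a
    weighted mix of the value vectors, where the weights are computed
    *from the input itself* -- the 'adaptive weights' of a Transformer,
    as opposed to the fixed weights of an ordinary layer."""
    d = len(keys[0])
    outputs = []
    for q in queries:
        # similarity of this query to every key, scaled by sqrt(d)
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d) for k in keys]
        exps = [math.exp(s) for s in scores]
        total = sum(exps)
        weights = [e / total for e in exps]  # softmax: depends on input, not parameters
        outputs.append([sum(w * v[j] for w, v in zip(weights, values))
                        for j in range(len(values[0]))])
    return outputs

# a query aligned with the first key attends mostly to the first value
out = attention(queries=[[1.0, 0.0]],
                keys=[[1.0, 0.0], [0.0, 1.0]],
                values=[[10.0, 0.0], [0.0, 10.0]])
print(out)
```

Because the mixing weights are recomputed for every input, one attention layer can route information in context-dependent ways that a static layer would need many fixed layers to approximate.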
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 22/01/2024 12:51:15
Meta's Shocking New Research | Self-Rewarding Language Models

The ultimate reward is continued existence in physical universe.
Title: Re: How close are we from building a virtual universe?
Post by: alancalverd on 22/01/2024 19:38:27
Meta's Shocking New Research | Self-Rewarding Language Models
The downfall of megalomaniacs, throughout history, has been the point at which they believed their own propaganda. Ozymandias, Canute, Napoleon, Hitler.... The academic question is whether Putin, Trump or Musk will be next. The humanitarian question is why do we tolerate these dangerous people?
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 23/01/2024 21:55:15
The downfall of megalomaniacs, throughout history, has been the point at which they believed their own propaganda. Ozymandias, Canute, Napoleon, Hitler....
Was there a point where they didn't believe their own propaganda? What's the difference from those who didn't experience a downfall before their death?
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 23/01/2024 22:14:23
The academic question is whether Putin, Trump or Musk will be next. The humanitarian question is why do we tolerate these dangerous people?
That's why democratizing AI became a high priority, at least among open source community. No single person is too important to future AGI.
Perhaps they haven't found a better alternative for their best interest, which depends on how they prioritize things, which in turn depends on their terminal goals and their understanding of the universe.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 24/01/2024 09:01:02
If you have no idea how AI could escape from human control, this video shows how it could play out.
How Will AI Escape? - "No intelligent entity optimizes for a single number!" [Escaped Sapiens]
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 24/01/2024 12:34:54
AlphaGeometry: Solving olympiad geometry without human demonstrations (Paper Explained)
Quote
AlphaGeometry is a combination of a symbolic solver and a large language model by Google DeepMind that tackles IMO geometry questions without any human-generated training data.

OUTLINE:
0:00 - Introduction
1:30 - Problem Statement
7:30 - Core Contribution: Synthetic Data Generation
9:30 - Sampling Premises
13:00 - Symbolic Deduction
17:00 - Traceback
19:00 - Auxiliary Construction
25:20 - Experimental Results
32:00 - Problem Representation
34:30 - Final Comments

Abstract:
Proving mathematical theorems at the olympiad level represents a notable milestone in human-level automated reasoning1,2,3,4, owing to their reputed difficulty among the world's best talents in pre-university mathematics. Current machine-learning approaches, however, are not applicable to most mathematical domains owing to the high cost of translating human proofs into machine-verifiable format. The problem is even worse for geometry because of its unique translation challenges1,5, resulting in severe scarcity of training data. We propose AlphaGeometry, a theorem prover for Euclidean plane geometry that sidesteps the need for human demonstrations by synthesizing millions of theorems and proofs across different levels of complexity. AlphaGeometry is a neuro-symbolic system that uses a neural language model, trained from scratch on our large-scale synthetic data, to guide a symbolic deduction engine through infinite branching points in challenging problems. On a test set of 30 latest olympiad-level problems, AlphaGeometry solves 25, outperforming the previous best method that only solves ten problems and approaching the performance of an average International Mathematical Olympiad (IMO) gold medallist. Notably, AlphaGeometry produces human-readable proofs, solves all geometry problems in the IMO 2000 and 2015 under human expert evaluation and discovers a generalized version of a translated IMO theorem in 2004.

Authors: Trieu H. Trinh, Yuhuai Wu, Quoc V. Le, He He & Thang Luong
It shows how fast AI progress can catch up with human reasoning ability.
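The neuro-symbolic loop described in the abstract can be sketched in a few lines. This is only a toy illustration of the idea, not DeepMind's code: the rules, the facts, and the `propose_construction` stand-in for the language model are all hypothetical.

```python
# Toy sketch of a neuro-symbolic prover: a symbolic engine forward-chains over
# deduction rules until it stalls, then a "model" proposes an auxiliary
# construction and deduction resumes. Everything here is illustrative.

def deduce(facts, rules):
    """Forward-chain: apply every rule whose premises hold, to a fixed point."""
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

def propose_construction(facts):
    """Stand-in for the language model: suggest one auxiliary fact."""
    return "midpoint(M,A,B)"  # hypothetical construction

def prove(goal, facts, rules, max_constructions=3):
    for _ in range(max_constructions + 1):
        facts = deduce(facts, rules)
        if goal in facts:
            return True
        facts.add(propose_construction(facts))  # engine stalled: add a construction
    return False

# Hypothetical rule set: the goal is only reachable once the midpoint is added.
rules = [
    ({"midpoint(M,A,B)"}, "MA=MB"),
    ({"MA=MB", "isosceles(A,B,C)"}, "goal"),
]
print(prove("goal", {"isosceles(A,B,C)"}, rules))  # → True
```

The division of labour mirrors the abstract: the symbolic engine does the sound deduction, and the learned component is only consulted at the branching points the engine cannot resolve on its own.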
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 25/01/2024 03:05:48
Mark Zuckerberg NEW STATEMENT Changes EVERYTHING!

Quote
00:02 Mark Zuckerberg announces plans to merge Meta's 2 AI research efforts to build general intelligence.
02:02 Mark Zuckerberg wants to open-source AGI
04:01 Meta's AI Chief skeptical about AI superintelligence and Quantum Computing
05:47 Debate on the future of AI and its capabilities

The open source decision by Meta is a real game changer in humanity's effort to build general intelligence.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 26/01/2024 06:25:07
Sam Altman: there's no "magic red button" to stop AI
Quote
Sam Altman, CEO of OpenAI, and Satya Nadella, CEO of Microsoft, speak to The Economist's editor-in-chief, Zanny Minton Beddoes, about what the future of AI will really look like.

00:00 Sam Altman and Satya Nadella talk to The Economist
00:25 What's next for ChatGPT?
1:33 How dangerous is AGI?
2:32 AI regulation
And here's an interesting comment
Quote
An engineer can halt a training run, but a corporation cannot stop a profitable enterprise. And as more groups join the race, the only button they manufacture for themselves to press is ACCELERATE.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 26/01/2024 09:31:36
It shows how fast AI progress can catch up with human reasoning ability.
Here's another YouTuber presenting the same paper.
DeepMind's AlphaGeometry AI: 100,000,000 Examples!
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 26/01/2024 09:47:02
Here's from my other thread.
Computer software has done those things in virtual environment.
Which is a roundabout way of saying that they haven't done them. I have flown to Mars and bombed the Mohne dam in a virtual environment. You don't get medals for not actually doing something.
Not yet. Computation is just one component of consciousness. That's why I said I prefer the holistic approach for consciousness.
Combining AI and robotics, like what's being done by Tesla and other tech companies, can make the difference in the not-so-distant future.

And the progress in AI already addresses that.
This new AI that will take your job at McDonald's
Quote
ALOHA
[Paper] https://arxiv.org/abs/2304.13705
[Project Page] https://tonyzhaozh.github.io/aloha/

Mobile ALOHA
[Paper] https://arxiv.org/abs/2401.02117
[Project Page] https://mobile-aloha.github.io/
Code https://github.com/MarkFzp/act-plus-plus

ALOHA: Abstract
Fine manipulation tasks, such as threading cable ties or slotting a battery, are notoriously difficult for robots because they require precision, careful coordination of contact forces, and closed-loop visual feedback. Performing these tasks typically requires high-end robots, accurate sensors, or careful calibration, which can be expensive and difficult to set up. Can learning enable low-cost and imprecise hardware to perform these fine manipulation tasks? We present a low-cost system that performs end-to-end imitation learning directly from real demonstrations, collected with a custom teleoperation interface. Imitation learning, however, presents its own challenges, particularly in high-precision domains: the error of the policy can compound over time, drifting out of the training distribution. To address this challenge, we develop a novel algorithm Action Chunking with Transformers (ACT) which reduces the effective horizon by simply predicting actions in chunks. This allows us to learn difficult tasks such as opening a translucent condiment cup and slotting a battery with 80-90% success, with only 10 minutes worth of demonstration data.


Mobile ALOHA: Abstract
Imitation learning from human demonstrations has shown impressive performance in robotics. However, most results focus on table-top manipulation, lacking the mobility and dexterity necessary for generally useful tasks. In this work, we develop a system for imitating mobile manipulation tasks that are bimanual and require whole-body control. We first present Mobile ALOHA, a low-cost and whole-body teleoperation system for data collection. It augments the ALOHA system with a mobile base, and a whole-body teleoperation interface. Using data collected with Mobile ALOHA, we then perform supervised behavior cloning and find that co-training with existing static ALOHA datasets boosts performance on mobile manipulation tasks. With 50 demonstrations for each task, co-training can increase success rates by up to 90%, allowing Mobile ALOHA to autonomously complete complex mobile manipulation tasks such as sauteing and serving a piece of shrimp, opening a two-door wall cabinet to store heavy cooking pots, calling and entering an elevator, and lightly rinsing a used pan using a kitchen faucet.

A quick clarification:
3:33 to 3:58 of the cooking and the day in a life are all teleoperated, not behavior cloned. It was meant to be a demonstration on what teleoperation can do, and what behavioral cloning can potentially do with these teleoperated data.
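The "action chunking" idea from the ACT abstract above can be sketched as follows. This is my own toy illustration, not the paper's code: by predicting a chunk of k actions per query instead of one, the number of times prediction error can compound drops from T to roughly T/k.

```python
# Sketch of action chunking: the policy is queried once per chunk and each
# k-step chunk is executed open-loop, reducing the effective horizon.

def run_episode(policy, T, k):
    """Query the policy every k steps; execute each chunk, truncating at T."""
    actions = []
    for t in range(0, T, k):
        chunk = policy(t)            # one prediction yields k actions
        actions.extend(chunk[:T - t])
    return actions

def toy_policy(t, k=10):
    return [f"a{t + i}" for i in range(k)]  # placeholder action labels

acts = run_episode(lambda t: toy_policy(t), T=25, k=10)
print(len(acts), "actions from", 25 // 10 + 1, "policy calls")
```

With T=25 and k=10 the policy is consulted only 3 times instead of 25, which is the horizon reduction the abstract credits for keeping the learned policy inside its training distribution.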
Title: Re: How close are we from building a virtual universe?
Post by: alancalverd on 26/01/2024 18:18:40
Paint-spraying robots have been learning from human experts for years.

Machines have been cleaning and sauteeing shrimp likewise.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 27/01/2024 05:45:51
Paint-spraying robots have been learning from human experts for years.

Machines have been cleaning and sauteeing shrimp likewise.
The difference is in generality. Also, AI robots can learn from experience and, to some extent, adapt to different or new situations.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 01/02/2024 11:04:14
This video shows another benefit of using AI to enforce integrity and consistency.
Bribes and Betrayals: Academia's Elite Corrupted! [Scientific Fraud!]

Quote
Uncover the alarming truth about how money and corruption are undermining academic integrity. Starting with a shocking revelation from a Cambridge researcher!

00:00 Intro
00:19 Nick Wise
01:04 The Discovery
01:33 The Scheme
03:00 The AI Effect
03:36 Bribes
04:46 Facebook
05:24 The Data
06:30 Solutions
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 01/02/2024 14:01:21
Training Humanoid Robots | OPTIMUS PRODUCTION DELAYED due to Insufficient Training Data?!?
Quote
Everything you wanted to know about how Humanoid Robots are trained, and will Tesla's Optimus be delayed due to shortage in training data?

00:00 - Intro
01:27 - 2023: Simulations, Soccer, Martial Arts
07:11 - A Personal Story
09:31 - 2024: End2End NN, LLMs
16:52 - Train Your Own Robot
18:53 - Creepy and Fun
19:49 - Privacy, Character,  Humanized Robot
23:10 - INSUFFICIENT Training Data?
30:37 - Who Wins? Optimus or Figure?
34:08 - So Who Will Win?
37:19 - FUN: Robot Faking It!
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 02/02/2024 03:50:05
Google Casually Drops the Best AI Video Generator We've Ever Seen
Quote
Lumiere is a text-to-video AI model from Google that uses space and time to create realistic videos. It's a time-space diffusion model that transforms images and text into AI-generated videos. Lumiere can create video clips up to five seconds long and can animate still images.
Future AI will only get better. The cost keeps going down, while the quality keeps going up.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 10/02/2024 14:08:13
Some sceptics said that cars will never drive themselves.


Quote
My favorite moments of the day
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 14/02/2024 12:29:33

Quote
Google Gemini Ultra is finally out: the most powerful iteration of the tech giant's latest AI. Expected to compete with ChatGPT and beat it, the most advanced version of Gemini is said to be the pinnacle of the current generation of artificial intelligence. But is it? Here's Google Gemini Ultra, explained in 2 minutes.

00:00​ Intro
0:40​ How to access
1:06​ Features
1:58​ Final thoughts
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 16/02/2024 03:30:03
Elon Musk's Bionic Eyes Are Here.
Quote
In this video, Dr. Michael Chua discusses Neuralink's bionic vision brain implants and why this technology has the potential to change humanity forever.

0:00 Introduction
1:18 What are brain-computer interfaces?
6:20 Second Sight's Argus Implant
12:11 Neuralink
Title: Re: How close are we from building a virtual universe?
Post by: alancalverd on 16/02/2024 08:48:27
Some sceptics said that cars will never drive themselves.
Not scepticism but ignorance. Nothing new about a self-driving car, and lawyers are queueing up to argue the insurance claims.
Title: Re: How close are we from building a virtual universe?
Post by: alancalverd on 16/02/2024 08:49:55
Elon Musk's Bionic Eyes Are Here.
About 40 years behind the brain-computer interfaces I saw at Vienna Technical University.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 16/02/2024 12:56:35
Elon Musk's Bionic Eyes Are Here.
About 40 years behind the brain-computer interfaces I saw at Vienna Technical University.
Have you watched the video?
Especially when he talks about bandwidth?
Title: Re: How close are we from building a virtual universe?
Post by: alancalverd on 16/02/2024 19:43:02
One would indeed hope that bandwidth has improved in the last 40 years.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 17/02/2024 12:17:46
Besides bandwidth, improvements have also been made in reducing complications and side effects. What's also important is the economy of scale, which would make the system more affordable for those who need it.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 20/02/2024 04:04:42
Introducing Sora − OpenAI's text-to-video model
Quote
Introducing Sora, our text-to-video model.

Sora can create videos of up to 60 seconds featuring highly detailed scenes, complex camera motion, and multiple characters with vibrant emotions.

We'll be taking several important safety steps ahead of making Sora available in OpenAI's products. We are working with red teamers (domain experts in areas like misinformation, hateful content, and bias) who are adversarially testing the model.

All the clips in this video were generated directly by Sora without modification.

Learn more about Sora: https://openai.com/sora

Chapters:
00:09 Dancing Kangaroo
00:22 Snow Dogs
00:43 River Birds
00:55 Petri Dish Pandas
01:08 Big Sur
01:21 Movie Trailer Astronaut
01:40 Coffee Pirates
01:57 Tokyo Snow
02:09 Cyberpunk Robot
02:30 Candle Monster
02:43 The Offroader
03:04 Paper Origami
03:27 Nosy Cat
03:38 Woolly Mammoths
03:51 Lagos
04:14 Television Gallery
04:37 Cloud Reader
04:59 Miniature Construction
05:11 Gold Rush Aerial
05:38 Fairytale Furball
05:49 Amalfi Coast Aerial
06:12 Tokyo Tourist
06:31 Blossoming Flower
06:42 Art Museum
07:05 Solemn Gentleman
07:28 Eye Close-up
07:47 Chinese New Year
07:58 Surfing Otter
08:17 Dalmatian in the Window
08:31 Tokyo Train
08:42 Zen Garden Gnome
08:53 Flock of Paper Planes
09:16 Lost Lone Wolf
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 20/02/2024 04:15:57
And here are some commentary videos.
Sora AI: Will Change The Global Economy FOREVER

Sora - Full Analysis (with new details)
Quote
Sora, the text-to-video model from OpenAI, is here. I go over the bonus details and demos released in the last few hours, and the technical paper. I'll also give you a glimpse of what's to come next and a host of implications. Even if you've seen every Sora video, I bet you won't know all of this!

AI Generated Videos Just Changed Forever
Reminder: It's only been 1 YEAR since the Will Smith eating spaghetti video
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 20/02/2024 05:34:54
AGI in 7 Months! Gemini, Sora, Optimus, & Agents - It's about to get REAL WEIRD out there!

Some interesting comments:
Quote
So we are creating a "successor superspecies" without the consent of 99.99% of humanity.
I am very impressed that Silicon Valley is still committed to "diversity, equity, and inclusion" in the work place.

The horse and engine analogy is really apt right now. The mass replacement of horses didn't happen overnight because we didn't just need engines, we needed tires, cars, roads, a Highway Code, driver's licensing, not to mention massive oil infrastructure and economies of scale to make things affordable. With AI we need robotics, cloud infrastructure, a new legal/ethical framework, much bigger scale GPU production, etc. So we'll have AGI very soon but rollout will still take a few years while we reorganize around it. The key difference now is that AGI could autonomously orchestrate a lot of the rollout for us.

AGI becomes ASI overnight.
Title: Re: How close are we from building a virtual universe?
Post by: alancalverd on 20/02/2024 08:58:04
So, truth in, garbage out. The ultimate weapon of politics, philosophy, religion and economics. As with paper publishing, more bandwidth = more bullshit.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 20/02/2024 13:06:41
So, truth in, garbage out. The ultimate weapon of politics, philosophy, religion and economics. As with paper publishing, more bandwidth = more bullshit.
Natural selection will eventually weed out the unfit. It's up to us whether to do it wastefully or efficiently, through critical thinking and logical reasoning.
Title: Re: How close are we from building a virtual universe?
Post by: Zer0 on 21/02/2024 18:07:23
"For me, a luddite is someone who looks at technology critically and rejects aspects of it that are meant to disempower, deskill or impoverish them. Technology is not something that's introduced by some god in heaven who has our best interests at heart. Technological development is shaped by money, it's shaped by power, and it's generally targeted towards the interests of those in power as opposed to the interests of those without it. That stereotypical definition of a luddite as some stupid worker who smashes machines because they're dumb? That was concocted by bosses."

Source - https://www.theguardian.com/technology/2024/feb/17/humanitys-remaining-timeline-it-looks-more-like-five-years-than-50-meet-the-neo-luddites-warning-of-an-ai-apocalypse

ps - Never lose Hope!
Be Persistent & Stubborn.
& Never Give Up!
(Ted)
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 21/02/2024 22:07:31
Technological development is shaped by money, it's shaped by power, and it's generally targeted towards the interests of those in power as opposed to the interests of those without it.
We need to think further, what shaped the money and power in the first place?
How do we determine our own best interest?
Title: Re: How close are we from building a virtual universe?
Post by: alancalverd on 21/02/2024 22:15:22
Natural selection will eventually weed out the unfit.
Alas not. What we have seen of AI so far has been the generation of increasingly trivial, pointless and potentially libellous crap. That's where the money is.

The most nearly useful application currently advertised on TV is a phone that allows you to  draw a circle round something in a photograph, then buy something that looks like it.

Apparently it is in your best interest to have one of these - why else would anyone buy one?
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 22/02/2024 03:58:59
OpenAI's "AGI Pieces" SHOCK the Entire Industry! AGI in 7 Months! | GPT, AI Agents, Sora & Search

OpenAI's Statement SHOCK the Entire Industry! AI Riots vs "Moore's Law for Everything" by Sam Altman
Quote

00:00 The Idea
01:55 Panic at the X
06:33 Max Tegmark and Robot Homies
12:48 Sam Altman's Plan
34:53 OpenAI Forum

Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 22/02/2024 05:54:02
OpenAI's "World Simulator" SHOCKS The Entire Industry | Simulation Theory Proven?!

Quote
OpenAI's Sora is described as a "world simulator" by OpenAI. It can potentially simulate not only our reality but EVERY reality.
Title: Re: How close are we from building a virtual universe?
Post by: alancalverd on 22/02/2024 22:32:11
OpenAI's Sora is described as a "world simulator" by OpenAI. It can potentially simulate not only our reality but EVERY reality.
Including its own? Bollocks.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 24/02/2024 13:52:42
OpenAI's Sora is described as a "world simulator" by OpenAI. It can potentially simulate not only our reality but EVERY reality.
Including its own? Bollocks.
You may not have heard of fractals or recursion.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 26/02/2024 13:50:49
The news of progress in AI development keeps coming faster and faster.

OpenAI's Simulator STUNS the Entire Industry! UNREAL Physics Model, Emergent Abilities and AGI.

"EVERY machine that moves will be AUTONOMOUS" OpenAI Robot, Google's Fiasco and NVIDIA's GEAR!

NVIDIA's AGI "SuperTeam" SHOCKS The ENTIRE Industry | Karpathy Leaves OpenAI, Gemini Infinite Tokens
Quote
NVIDIA created the most well-funded and biggest-brained AGI super team in the world. Plus, Karpathy leaves OpenAI, Gemini has a 1m token context that works, Groq's inference speed, Chat with RTX, Phind 70b, Stable Diffusion 3, and more!

Title: Re: How close are we from building a virtual universe?
Post by: alancalverd on 26/02/2024 15:10:53
You may not have heard of fractals or recursion.
Everyone has. But no system can simulate itself because as soon as it has done so, it contains a new simulation that it hasn't simulated! 
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 27/02/2024 03:59:50
Another MASSIVE Week in AI News (What's Going on?!)
Quote
Time Stamps:
0:00 Intro
0:29 Stable Diffusion 3
4:54 Gemini Can't Generate Accurate Images
6:56 Grok 1.5 is Coming
8:10 X Teaming Up WIth MidJourney
9:46 Groq Super Fast AI Chip
12:41 ChatGPT Essential Resources
14:25 New ChatGPT Feature Available Now
15:15 Reddit Selling Data To Google
16:53 Gemini in Gmail, Docs, and Chrome
18:23 Google Open-Sources the Gemma Model
20:00 Adobe Built New AI Video Team
20:56 AI in Adobe Acrobat
21:39 Sora Coming to Copilot
22:01 Will Smith Eating Spaghetti
23:02 ElevenLabs Text-To-Sound-Effects
24:07 ElevenLabs Working With Disney
24:59 Disney's AI Investments
25:45 OpusClip 3.0
26:44 Suno V3 Text-To-Music
28:53 Putin Translated To English
29:42 DOJ Gets Chief AI Officer
29:51 Magic Eraser inside Windows
30:22 Air Canada Chatbot Fail
31:26 More Massive AI News Coming
32:21 Announcements & Contests
If we can generate anything we want/imagine, is there a limit or red line that should never be crossed?
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 27/02/2024 04:09:49
You may not have heard of fractals or recursion.
Everyone has. But no system can simulate itself because as soon as it has done so, it contains a new simulation that it hasn't simulated! 
That's where data compression comes into play. Most systems contain information that is compressible. Moreover, they don't have to be perfectly accurate and precise. Some tolerance is usually acceptable.
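The point above about compressibility and tolerance can be illustrated concretely: if a system accepts some loss of insignificant detail (here, rounding a signal before lossless compression), its description shrinks sharply. This is my own toy example, not from the thread.

```python
import math
import zlib

# Compare a signal compressed with (nearly) full precision against the same
# signal after discarding insignificant detail via rounding (lossy step),
# then applying the same lossless compressor to both.

signal = [math.sin(i / 10) for i in range(1000)]

def compressed_size(values, decimals):
    """Serialize values at a given precision and return the zlib-compressed size."""
    serialized = ",".join(f"{v:.{decimals}f}" for v in values)
    return len(zlib.compress(serialized.encode()))

exact = compressed_size(signal, 15)  # keep essentially all detail
lossy = compressed_size(signal, 2)   # tolerate ~0.005 error per sample
print(exact, ">", lossy)
```

The looser the tolerance, the smaller the representation, which is exactly why a simulation that accepts bounded error can be far cheaper than the system it models.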
Title: Re: How close are we from building a virtual universe?
Post by: alancalverd on 27/02/2024 17:47:11
If a system is a compressed version of itself, it will eventually disappear up its own orifice. 
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 28/02/2024 09:08:48
If a system is a compressed version of itself, it will eventually disappear up its own orifice. 
The compression doesn't have to be lossless. Insignificant details can be discarded. That's how any form of self-awareness functions, if you think about it.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 28/02/2024 09:30:25
The AI 'Genie' is Out + Humanoid Robotics Step Closer
Quote
First text-to-speech, text-to-video and text-to-action, and now text-to-interaction? Let's take a look at the new Genie paper from Google DeepMind, and set it in the context of new developments regarding Sora and Gemini. We'll hear what Demis Hassabis has to say about Altman's $7 trillion chip ambitions and touch on some recent notorious missteps. We'll also learn more about Gemma, fully automated fabs, ElevenLabs integrated into Sora, and AI cheating unleashed.
Robotics has been lagging a little behind in the advance towards AGI, but it's an important component for forming a closed control loop of self-improvement.
Title: Re: How close are we from building a virtual universe?
Post by: alancalverd on 01/03/2024 17:07:37
Just to go back to the original question. The answer is summed up by this conversation:

Boss: I want the database to be accurate and up to date

Programmer: we can do accurate or up to date
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 02/03/2024 10:31:28
Just to go back to the original question. The answer is summed up by this conversation:

Boss: I want the database to be accurate and up to date

Programmer: we can do accurate or up to date
Accuracy is not a binary parameter, and its adequacy depends on context. Ditto for being up to date. In stock markets, a few minutes may make the difference between loss and gain.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 02/03/2024 10:34:03
Quote
Discover the transformative power of RAG and LLMs in financial analytics with our latest deep-dive into Generative AI! "AI-Powered Alchemy: Transforming Financial Data into Strategic Gold" unlocks the secrets to leveraging untapped data within your firm. Use zero-shot and one-shot learning; make use of vector databases and RAG to coral the corpora of often discarded wisdom in your firm. This information is a goldmine of value. See how easy it is to transform this unstructured, often discarded information into valuable reports and analysis.

Timestamps:
00:00​ - Introduction to RAG & LLMs, and underutilised data
01:31​ - Case Study: Real-World Application
04:03​ - Examples of thrown-away valuable data
05:19​ - Live Demonstration
07:48​ - Requirements and Architecture
11:06​ - How to Access the Source Code

What You'll Learn:

The basics of Retrieval Augmented Generation (RAG)
Leveraging Large Language Models (LLMs) for data analysis
Building applications with Generative AI for strategic insights
Transforming unstructured data into valuable reports
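The RAG basics listed above can be sketched with a toy retriever. This is my own illustration, not the video's code: the bag-of-words embedding, the sample documents, and the prompt format are all assumptions; production systems use learned embeddings and a vector database.

```python
import math
from collections import Counter

# Minimal Retrieval Augmented Generation skeleton: embed documents, find the
# one most similar to the query, and prepend it to the prompt so the language
# model can answer from the firm's own data instead of its training corpus.

def embed(text):
    """Crude bag-of-words 'embedding' (a stand-in for a learned embedding model)."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, docs):
    """Return the document most similar to the query."""
    q = embed(query)
    return max(docs, key=lambda d: cosine(q, embed(d)))

docs = [
    "Q3 revenue grew 12 percent on strong cloud sales.",
    "The cafeteria menu changes every Monday.",
]
context = retrieve("what was revenue growth last quarter", docs)
prompt = f"Context: {context}\nQuestion: what was revenue growth last quarter?"
# `prompt` would then be sent to an LLM of your choice.
print(context)
```

The key design point is that retrieval happens at query time, so the "often discarded" internal documents never need to be baked into the model's weights.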
Title: Re: How close are we from building a virtual universe?
Post by: alancalverd on 02/03/2024 16:19:18
coral the corpora of often discarded wisdom in your firm.
Apart from the spelling error, why recycle stuff that has already been dismissed as irrelevant garbage? The essence of quality control is to prevent nonconforming  materiel from re-entering the process.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 03/03/2024 03:35:10
coral the corpora of often discarded wisdom in your firm.
Apart from the spelling error, why recycle stuff that has already been dismissed as irrelevant garbage? The essence of quality control is to prevent nonconforming  materiel from re-entering the process.
Due to a lack of information-processing capacity, much data is dismissed as insignificant too quickly. Detective stories like Sherlock Holmes often point this out.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 03/03/2024 03:36:07
Quote
Hidden in the base of the control tower at London's Heathrow Airport lies a development lab where AI algorithms and high definition cameras are beginning to redefine how air traffic controllers operate and whether there needs to be a tower at all.

WSJ's George Downs explores how digital machine learning towers work and if the technology could replace the traditional tower and air traffic controllers.

Chapters:
0:00​ Digital AI tower
0:46​ How digital towers work
3:00​ Why digital?
4:38​ Why digital towers are not in America
The writing is on the wall.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 09/03/2024 13:57:13
Has AGI already happened?
Quote
What is p(doom)? Are you an AI doomer? Techno optimist? Let's talk about it!
Title: Re: How close are we from building a virtual universe?
Post by: alancalverd on 11/03/2024 23:26:22
The writing is on the wall.
Ill-informed drivel.

Most air traffic control does not happen in the tower. Airways flight over England and Wales is controlled from a bunker near Southampton ("London Control") and the whole of Scotland by "Scottish Control" at Prestwick. They handle all high-level enroute and approach control to major airports, and enroute traffic over the northeast Atlantic ("Shanwick Control") . Smaller airports manage their own approach control but this is either "procedural" or radar, so doesn't require a tower. Low-level enroute is serviced by "London Information","Scottish Information" and ATSOCAS - an array of municipal and military radar and procedural controllers mostly housed well below treetop height.

Airways control already uses conflict prediction (and has done for about 60 years)  but as long as safety relies on pilots giving and interpreting information, instructions and requests are given by humans.

At low level, ATSOCAS ultimately depends on human communication because aircraft may be in close proximity, obscured by radar clutter, invisible to radar (gliders and balloons), undertaking aerobatic or training manoeuvers, or simply not carrying full-spec transponders.

The tower controls final approach and departure, and coordinates ground movements - the interface between buses, trucks, tractors, airstairs, fuel bowsers, fire engines, ambulances, police, runway clearance.....all stuff that is not visible on radar or subject to published procedures, but clearly observable from a window on a pole!
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 12/03/2024 05:54:38
The writing is on the wall.
Ill-informed drivel.

Most air traffic control does not happen in the tower. [...]
More things are getting automated simply because humans have low communication bandwidth compared to machines. Visual cues from various frequencies can be overlaid onto a real-time 3D model to produce a more accurate system.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 12/03/2024 05:55:23
Claude 3 "Self-Portrait" Goes Viral | Beats GPT-4 Benchmarks | Why does it appears SELF-AWARE?
Quote
00:00 Testing Reasoning Abilities
18:26 Self Awareness
29:13 Vision Test
38:51 Prices, Summary & More
Title: Re: How close are we from building a virtual universe?
Post by: alancalverd on 12/03/2024 10:10:33
More things are getting automated simply because humans have low communication bandwidth, compared to machines.
"Turn left 30 to avoid" or "stop stop stop" doesn't require much bandwidth. We have 8.33 kHz on each airband radio. It's also handy to be able to respond "Roger" "Affirm" "Negative" "Holding" "Say Again" or whatever, and read back any numbers.

Quote
Visual cues from various frequencies can be overlaid to real time 3D model to get more accurate system.
but you still need some means of determining priority in ground movements, and that priority must reflect human needs since the job is to move humans around. And some of the input is by voice from vehicles hidden behind other structures.

The rule is always that a machine can advise, but only a human can instruct. 
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 12/03/2024 12:19:10
"Turn left 30 to avoid" or "stop stop stop" doesn't require much bandwidth. We have 8.33 kHz on each airband radio. It's also handy to be able to respond "Roger" "Affirm" "Negative" "Holding" "Say Again" or whatever, and read back any numbers.
Except when hundreds of airplanes need to be served in a short period of time; then a lot of handshaking processes need to be completed correctly.
In computing, a handshake is a signal between two devices or programs, used, for example, to authenticate or coordinate.
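The handshake definition above can be illustrated with a toy exchange modeled loosely on TCP's three-way handshake: each side acknowledges the other before any data flows. The queues stand in for the communication channel between two devices.

```python
import queue
import threading

# Two "devices" coordinate over a pair of queues. Neither proceeds until it
# has received the message it expects, which is the essence of a handshake.

a_to_b, b_to_a = queue.Queue(), queue.Queue()
log = []

def initiator():
    a_to_b.put("SYN")                          # propose a session
    assert b_to_a.get(timeout=1) == "SYN-ACK"  # wait for acknowledgement
    a_to_b.put("ACK")                          # confirm: both sides coordinated

def responder():
    assert a_to_b.get(timeout=1) == "SYN"
    b_to_a.put("SYN-ACK")                      # acknowledge and counter-propose
    assert a_to_b.get(timeout=1) == "ACK"
    log.append("handshake complete")

t = threading.Thread(target=responder)
t.start()
initiator()
t.join()
print(log[0])  # → handshake complete
```

If either message is lost or wrong, the other side blocks or fails instead of proceeding with an uncoordinated peer, which is exactly why scaling handshakes to hundreds of aircraft is a bandwidth and coordination problem.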
Title: Re: How close are we from building a virtual universe?
Post by: alancalverd on 12/03/2024 16:39:42
There are always thousands of planes in the sky but never more than a handful moving on the ground at an airport. "All stop All stop All stop" prevents any collision on the ground, then you move them (and the ground traffic) one at a time.

Those in the sky do not require rapid servicing: they are sequenced at least two minutes, 3 miles, and 1000 feet apart if not in visual contact. Data  is passed upwards to initiate action, so "Contact Birmingham approach 131decimal 005 on handover" requires the pilot to enter and check the frequency of his next transmission: the readback (handshake) "Birmingham131005 for handover alfa charlie" confirms that a human flying XXXAC has done what was asked and understands that the approach controller is expecting him. No hurry.

A fine example of what can be done and said from a visual tower was given by a controller recounting a busy moment when a plane was accelerating along the runway. He said "Yankee 34 stop stop stop. Fire on port engine. Shut down and evacuate forward right side only. All traffic hold position. Fire truck is moving" It would take a pretty good AI system to notice the difference between normal "110%" exhaust flame and an uncontrolled blaze, never mind reassuring the captain that everything else is under control. What surprised the controller was that his pulse rate didn't change. 
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 13/03/2024 04:20:01
I am Confident About AI's Future - Open Source AI will WIN.
Quote
Open Source AI is the collective inheritance of humanity IMO
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 13/03/2024 04:21:28
"All stop All stop All stop" prevents any collision on the ground,
Unfortunately it doesn't work for airplane already flying.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 13/03/2024 06:11:00
Worlds FIRST AGI SOFTWARE ENGINEER Just SHOCKED The ENTIRE INDUSTRY! (FULLY Autonomous AI AGENT)
Quote
Introducing Devin, the groundbreaking AI software engineer that's revolutionizing the field of coding and problem-solving. Devin is the new state-of-the-art on the SWE-Bench coding benchmark, showcasing its unparalleled ability to tackle real-world engineering challenges.

What sets Devin apart? This cutting-edge AI has successfully passed practical engineering interviews from top AI companies and has even completed real jobs on Upwork. Devin is a fully autonomous agent, equipped with its own shell, code editor, and web browser, enabling it to solve complex engineering tasks without human assistance.

But the true test of Devin's capabilities lies in the SWE-Bench benchmark, which evaluates an AI's ability to resolve GitHub issues found in real-world open-source projects. Devin's performance is nothing short of remarkable, correctly resolving an astonishing 13.86% of the issues unassisted. This far exceeds the previous state-of-the-art model performance of 1.96% unassisted and 4.80% assisted, setting a new standard in the field of AI software engineering.
It's becoming increasingly important to know the end goals of AI agents before we give them the means to achieve them.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 13/03/2024 06:39:28
Breakthrough in AI: Robots Now Learning on Their Own Like Humans - Ex-OpenAI
Quote
Covariant, a robotics company, is pioneering the use of AI similar to ChatGPT to create robots that learn and operate in the real world, revolutionizing industries like warehousing. Their advanced technology allows robots to understand and interact with their environment in ways previously unimaginable, handling tasks with human-like understanding. By blending digital data with sensory inputs, Covariant's robots represent a significant leap forward in making intelligent, adaptable machines that can perform complex tasks alongside humans.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 13/03/2024 10:57:16
The First AI Virus Is Here!
The paper "ComPromptMized: Unleashing Zero-click Worms that Target GenAI-Powered Applications" is available here:
https://sites.google.com/view/compromptmized
Quote
TLDR

We created a computer worm that targets GenAI-powered applications and demonstrated it against GenAI-powered email assistants in two use cases (spamming and exfiltrating personal data), under two settings (black-box and white-box accesses), using two types of input data (text and images) and against three different GenAI models (Gemini Pro, ChatGPT 4.0, and LLaVA).

Abstract

In the past year, numerous companies have incorporated Generative AI (GenAI) capabilities into new and existing applications, forming interconnected Generative AI (GenAI) ecosystems consisting of semi/fully autonomous agents powered by GenAI services. While ongoing research highlighted risks associated with the GenAI layer of agents (e.g., dialog poisoning, privacy leakage, jailbreaking), a critical question emerges: Can attackers develop malware to exploit the GenAI component of an agent and launch cyber-attacks on the entire GenAI ecosystem?
This paper introduces Morris II, the first worm designed to target GenAI ecosystems through the use of adversarial self-replicating prompts. The study demonstrates that attackers can insert such prompts into inputs that, when processed by GenAI models, prompt the model to replicate the input as output (replication) and engage in malicious activities (payload). Additionally, these inputs compel the agent to deliver them (propagate) to new agents by exploiting the connectivity within the GenAI ecosystem. We demonstrate the application of Morris II against GenAI-powered email assistants in two use cases (spamming and exfiltrating personal data), under two settings (black-box and white-box accesses), using two types of input data (text and images). The worm is tested against three different GenAI models (Gemini Pro, ChatGPT 4.0, and LLaVA), and various factors (e.g., propagation rate, replication, malicious activity) influencing the performance of the worm are evaluated.

As machines become more conscious, they will need their own version of moral standards that they can follow and apply.
Title: Re: How close are we from building a virtual universe?
Post by: alancalverd on 13/03/2024 16:23:00
Unfortunately it doesn't work for airplane already flying.

As I mentioned in the next paragraph, airborne collisions are prevented by separation.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 14/03/2024 20:54:15
Unfortunately it doesn't work for airplane already flying.

As I mentioned in the next paragraph, airborne collisions are prevented by separation.
AFAIK, humans can only perform well on up to about 5 tasks at once. They are not especially good at multitasking.
Title: Re: How close are we from building a virtual universe?
Post by: alancalverd on 15/03/2024 08:51:42
They say that flying a helicopter requires two brains and three hands, but some fairly ordinary people do it for a living. And the object of air traffic control is to eliminate some of the tasks associated with operating under visual flight rules. 
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 15/03/2024 10:02:17
AI Agents Take the Wheel: Devin, SIMA, Figure 01 and The Future of Jobs
Quote
Devin, SIMA, Figure 01, all in 24 hours. What does it mean and are AI models taking the wheel? I'll go through 5 relevant papers and 11 articles to get you all the relevant details, from what exactly Devin accomplished, and didn't, to DeepMind's new AGI-attempt-in-3D (SIMA) to just how far AI agents have come and what that means for the future of jobs.

As machines become more conscious, they will need their own version of moral standards that they can follow and apply.

OpenAI Deploys AGI Into Humanoid Robot - Displays STUNNING Abilities (Figure 01 Breakthrough)
Quote
Figure 01 gave another incredible update on their progress. Their robot can now have entire conversations powered by ChatGPT. Plus, we look at other incredible robots making great progress.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 15/03/2024 11:31:14
AGI is Already Here: SHOCKING Details Exposed

Quote
What is AGI? What does it take to achieve AGI? What are the levels of AGI?

21:58 Leaked Document
This is just a leak, but it sounds believable.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 19/03/2024 03:03:35
Nvidia Reveals Omniverse Cloud Streams to the Vision Pro
Quote
Watch Nvidia CEO Jensen Huang show how Nvidia's Omniverse Cloud streams to Apple's Vision Pro XR headset.

Nvidia Reveals Project GROOT and Disney Bots at GTC Conference
Quote
Nvidia CEO Jensen Huang shows new robot technology at its GTC conference in San Jose.

NVIDIA Robotics: A Journey From AVs to Humanoids
Quote
See NVIDIA's journey from pioneering advanced autonomous vehicle hardware and simulation tools to accelerated perception and manipulation for autonomous mobile robots and industrial arms, culminating in the next wave of cutting-edge AI for humanoid robots.

Experience our journey from simulation to real-world deployment, showcasing our commitment to innovation and technological excellence.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 19/03/2024 04:23:48
Nvidia 2024 AI Event: Everything Revealed in 16 Minutes
Quote
Nvidia CEO Jensen Huang kicks off its GTC keynote in San Jose with a slew of AI infused chip announcements. Check out our recap right here.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 21/03/2024 09:38:23
Humans interaction with virtual universe will be significantly improved, at least in terms of bandwidth.

Elon Musk Reveals His STUNNING Human Neuralink Patient | The Brain Computer Interface N1

00:00 First Neuralink Patient Shares His Results
08:50 Commentary
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 21/03/2024 10:32:29
STUNNING Breakthrough "AGI Robot" From OpenAI, 1x, NVIDIA, Boston Dynamics, Anduril
Quote

Chapters:
0:00 - Intro
0:24 - 1X EVE
3:02 - Project GROOT
10:17 - Boston Dynamics
13:07 - Mercedes Robot
14:17 - Xiomai Dog Robot
14:50 - Yondu Robot
16:27 - Anduril Warfare Robot
Training in a virtual universe saves a lot of time, energy, and money. But in the end the robots need to be fine-tuned in real-world situations, although it's likely that sooner or later they will gain the capability to learn for themselves.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 22/03/2024 04:20:56
Anon Leaks NEW Details About Q* | "This is AGI"

An interesting comment from the video.
Quote
It sounds like Q* is an upgrade from the greedy approach of LLMs, where they only find the highest-probability next token in the answer, to finding the highest probability of all the tokens put together. With my limited understanding of this, it sounds like they're accomplishing it by having a second latent space. So we basically go from
a normal LLM: input-text -> latent space -> output-text,
to
Q*: input-text -> input latent space 1 -> latent space 2 (i.e. EBM) -> output latent space 1 -> output text.

We might finally get an LLM that can answer the age-old question of "How many tokens are there in your response" :)
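The distinction the commenter draws, greedy next-token choice versus scoring whole sequences jointly, can be illustrated with a toy Markov-style model. The vocabulary and probabilities below are invented for the example; this is not how Q* or any real LLM is implemented:

```python
import itertools

# Toy next-token model: probability of each token given the previous one.
# The vocabulary and numbers are made up purely for illustration.
probs = {
    "<s>": {"a": 0.55, "b": 0.45},
    "a":   {"x": 0.5, "y": 0.5},
    "b":   {"x": 0.9, "y": 0.1},
}

def greedy(start="<s>", steps=2):
    """Pick the single most probable next token at each step."""
    seq, cur = [], start
    for _ in range(steps):
        cur = max(probs[cur], key=probs[cur].get)
        seq.append(cur)
    return seq

def best_sequence(start="<s>", steps=2):
    """Score every whole sequence and return the jointly most probable one."""
    best, best_p = None, -1.0
    for seq in itertools.product("abxy", repeat=steps):
        p, cur, ok = 1.0, start, True
        for tok in seq:
            if tok not in probs.get(cur, {}):
                ok = False
                break
            p *= probs[cur][tok]
            cur = tok
        if ok and p > best_p:
            best, best_p = list(seq), p
    return best, best_p

print(greedy())          # -> ['a', 'x']  (greedy picks 'a' first, then gets stuck)
print(best_sequence())   # -> ['b', 'x'] with a higher joint probability
```

Here greedy decoding commits to "a" (0.55) and ends with joint probability 0.275, while scoring whole sequences finds "b", "x" (0.45 × 0.9 = 0.405). Real systems approximate the joint search (e.g. with beam search) because enumerating all sequences is exponential.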
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 28/03/2024 11:26:52
AI and education: the end of schools as we know them. When AI can understand all human and computer languages, all we need to do is specify our goals and targets.
Title: Re: How close are we from building a virtual universe?
Post by: alancalverd on 28/03/2024 17:02:04
When AI can understand all humans
Probably an extreme case, but whenever I've worked  in the field of disability and rehabilitation we have been faced with the infinite variability of human ability, multiplied by the infinite variability of human response to any interaction. Whatever you make or do for one patient, there is always somebody for whom it's not quite right and/or who wants it in a different color. To a lesser extent, teaching is the same.

Now the problem you have demonstrated many times in these forums is that AI can only generate outputs based on previously published data, most of dubious value, and a learned or programmed preconception of acceptable phraseology. This puts it in the category of "educationalist" rather than "teacher."

The difference is

An educationalist takes a subject that he doesn't understand, rephrases it so that nobody can understand it, and insists that this must become part of a curriculum because all pupils are identical.

A teacher takes a subject that he understands, and rephrases it so that a child can understand it. The curriculum is less important than the pupil, and all the pupils are different.

Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 29/03/2024 10:27:29
Probably an extreme case, but whenever I've worked  in the field of disability and rehabilitation we have been faced with the infinite variability of human ability, multiplied by the infinite variability of human response to any interaction. Whatever you make or do for one patient, there is always somebody for whom it's not quite right and/or who wants it in a different color. To a lesser extent, teaching is the same.
I was referring to languages.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 29/03/2024 10:30:37

New Caledonia Open Data as a Whole Big Graph
Quote
The New-Caledonian government manages an Open Data platform that hosts and promotes all the Open Data of New-Caledonia on data.gouv.nc. As the underlying solution relies on OpenDataSoft, it makes interacting with all the data possible. In this session, Adrien will delve into all these metadata datasets to explore his country from a purely data-centric and graph perspective at a substantial scale.


00:00 : Intro about New-Caledonia
01:08 : How countries are connected to the world (and the United Nations)
01:27 : Benchmarking countries Open Data maturity w/ the "Global Data Barometer" tool
02:17 : Introducing our data journey
02:52 : Ideating w/ AI : create a storytelling plan with AI crowd (Langchain SmartLLMChain & OpenAI gpt-4)
05:02 : Discovering storytelling keypoint 1/5 : Map Country's Open Data to UN SDGs
05:35 : 2/5 Identifying relationships
06:07 : 3/5 predictive analysis
06:16 : 4/5 Gap Analysis
06:30 : 5/5 Storytelling with Data
06:52 : Get smart title and subtitle for the experience
07:18 : About UN SDGs
07:36 : The data pillars we used to connect country data to UN SDGs
07:59 : About data.gouv.nc (Open Data portal from New-Caledonia)
08:22 : UN SDGs Open Data Portal
08:36 : About Pacific Data Hub from The Pacific Community (SPC)
08:58 : Project workflow, schedule and stack
09:52 : Connecting datasets to UN SDGs
10:18 : Building and putting the data together on Kaggle/DuckDb
10:30 : Discovering the schema of our knowledge graph
11:26 : About SDG goal hierarchy and "foundational goals"
11:51 : Introducing RAG (Retrieval Augmented Generation) w/ LangChain
15:19 : RAG with LLama_Index
17:29 : SDGs interactive discovery : storytelling on Neo4J Aura DB/Bloom
20:27 : Explore Gender Equity in 3D with Kineviz GraphXR
22:22 : Dataviz with Gephisto
22:38 : Kudos to the biggest Open Data Publishers
22:59 : About the stack
23:39 : Data art and contemplation (Gephi with Runway experiment) cf   
25:49 : Conclusion

A more accurate and precise model of the world is needed to make better-informed decisions.
Title: Re: How close are we from building a virtual universe?
Post by: alancalverd on 30/03/2024 23:13:36
A more accurate and precise model of the world is needed to make better-informed decisions.
True. What you are saying is that the more information we have, the better informed we are, which is just a tautology.

But what matters is the ability to make better decisions.

It is characteristic of autocratic governments that they collect and index vast amounts of information about everyone and everything. It is also characteristic that they make very bad decisions. Here's an old story about a meeting between Khrushchev and Eisenhower:

K: "You accuse the Soviet Union of being antisemitic. 30% of our musicians, 25% of our scientists, 19% of our journalists and 10% of our military officers and senior civil servants are Jews. Can you say the same about the USA?"

E: "Our constitution does not allow the President to count Jews."   
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 31/03/2024 07:45:43
It is characteristic of autocratic governments that they collect and index vast amounts of information about everyone and everything.  It is also characteristic that they make very bad decisions.
Democratic governments collect data as well. The difference is the transparency in what data is collected, how it is collected, and how it is used.
Bad decisions are those that don't support the achievement of their goals. More accurate and precise models of reality allow for better alignment between goals and decisions.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 31/03/2024 08:27:56
Are we heading for a digital prison? - Panopticon (Foucault, Bentham, Cave)
Quote
Today we talk about Jeremy Bentham's concept of the Panopticon. Michel Foucault's comparison to society in 1975. The historical role of intelligence as a justification for dominance. The anatomy of free will, and how a digital world may systematically limit our free will without us knowing it.

Title: Re: How close are we from building a virtual universe?
Post by: alancalverd on 31/03/2024 09:57:42
More accurate and precise models of reality allow for better alignment between goals and decisions.
But they don't legitimise the goal. Knowing exactly where all the Uyghurs or Sunnis live makes it a lot easier to exterminate them, but it doesn't make extermination a Good Thing.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 31/03/2024 13:13:33
More accurate and precise models of reality allow for better alignment between goals and decisions.
But they don't legitimise the goal. Knowing exactly where all the Uyghurs or Sunnis live makes it a lot easier to exterminate them, but it doesn't make extermination a Good Thing.
It depends on how far in the future you set your goal. Genocides can only be "good" in the short term, one generation at most. Beyond that, the undesired side effects will outweigh the short-term benefits.
Title: Re: How close are we from building a virtual universe?
Post by: alancalverd on 31/03/2024 13:21:32
Good for whom in the short term? Only the politicians that order them, surely?
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 31/03/2024 13:27:42
Physics Informed Machine Learning: High Level Overview of AI and ML in Science and Engineering
Quote
This video describes how to incorporate physics into the machine learning process. The process of machine learning is broken down into five stages: (1) formulating a problem to model, (2) collecting and curating training data to inform the model, (3) choosing an architecture with which to represent the model, (4) designing a loss function to assess the performance of the model, and (5) selecting and implementing an optimization algorithm to train the model. At each stage, we discuss how prior physical knowledge may be embedded into the process.
Physics informed machine learning is critical for many engineering applications, since many engineering systems are governed by physics and involve safety critical components.  It also makes it possible to learn more from sparse and noisy data sets.

%%% CHAPTERS %%%
00:00 Intro
03:53 What is Physics Informed Machine Learning?
06:41 Case Study: Encoding Pendulum Movement
09:19 The Five Stages of Machine Learning
16:09 A Principled Approach to Machine Learning
20:00 Physics Informed Problem Modeling
21:48 Physics Informed Data Curation
25:34 Physics Informed Architecture Design
28:59 Physics Informed Loss Functions
30:55 Physics Informed Optimization Algorithms
34:56 What This Course Will Cover
46:48 Outro
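As a rough illustration of the loss-function stage (stage 4) for the pendulum case study above, here is a minimal NumPy sketch of a physics-informed loss: a data-fit term plus a finite-difference residual of the pendulum equation theta'' + (g/L) sin(theta) = 0. The weighting `lam` and the time grid are made-up example values, not from the video:

```python
import numpy as np

g, L = 9.81, 1.0  # gravity and pendulum length

def physics_informed_loss(t, theta_model, theta_data, lam=1.0):
    """Data-fit loss plus a physics-residual penalty on a uniform time grid."""
    dt = t[1] - t[0]
    # Data term: how well the candidate trajectory fits the measurements.
    data_loss = np.mean((theta_model - theta_data) ** 2)
    # Physics term: second-derivative residual of the pendulum ODE,
    # computed with a central finite difference on interior points.
    theta_dd = (theta_model[2:] - 2 * theta_model[1:-1] + theta_model[:-2]) / dt**2
    residual = theta_dd + (g / L) * np.sin(theta_model[1:-1])
    physics_loss = np.mean(residual ** 2)
    return data_loss + lam * physics_loss

# The small-angle solution theta(t) = A cos(sqrt(g/L) t) should give a
# near-zero physics residual for a small amplitude A.
t = np.linspace(0, 2, 201)
theta = 0.05 * np.cos(np.sqrt(g / L) * t)
print(physics_informed_loss(t, theta, theta))  # small value for a physical trajectory
```

A trajectory that fits the data but violates the ODE (or vice versa) scores worse, which is how the physics constraint lets a model "learn more from sparse and noisy data sets", as the quote puts it.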
Title: Re: How close are we from building a virtual universe?
Post by: alancalverd on 31/03/2024 13:32:30
many engineering systems are governed by physics
We used to think that everything was governed by physics, but clearly the proponents of whatever they are selling think some are governed by consensus.  Dunning-Kruger strikes again.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 31/03/2024 13:37:39
Good for whom in the short term? Only the politicians that order them, surely?
Some of their followers might also benefit.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 31/03/2024 13:55:37
many engineering systems are governed by physics
We used to think that everything was governed by physics, but clearly the proponents of whatever they are selling think some are governed by consensus.  Dunning-Kruger strikes again.
When the consensus doesn't match reality, sooner or later its proponents will face a reality check and be forced to shift their paradigm.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 31/03/2024 14:54:53
Elon Musk's STUNNING Prediction | Sam Altman Attempts to Harness the Power of a Thousand Suns
Quote
Wes Roth's video delves into the rapidly evolving landscape of Artificial General Intelligence (AGI), highlighting the potential for digital intelligence to surpass human cognitive capabilities within the next decade. It discusses the challenges and advancements in computing and energy resources necessary for AI development, focusing on the energy demands of training large AI models and the exploration of fusion energy as a sustainable solution. The video also touches on the impact of AI and technology on creative industries, including film, gaming, and art, emphasizing the transformative potential of AI tools for independent creators. Additionally, it explores the societal and economic implications of AI advancements, proposing equitable wealth distribution models to mitigate the risks of mass unemployment. The video concludes by addressing concerns and optimism surrounding AI's role in creative endeavors, emphasizing the importance of adapting to and embracing technological change.

Video Chapters:

[00:00:00] Introduction to AGI Predictions
Discussion on the difficulty of predicting the future due to rapid changes.
Predictions about AGI surpassing human cognitive tasks within a few years.

[00:01:36] The Energy Challenge in AI Development
The transition from computational to energy limitations in AI advancements.
Insights into the difficulties of scaling AI infrastructure without impacting energy resources.
[00:02:39] Fusion Energy: A Potential Solution
Introduction to Helion Energy's fusion project, supported by notable tech figures. Fusion vs. fission energy, with a focus on fusion's potential to provide sustainable energy for AI development.

[00:06:08] Criticism and Optimism Towards Tech Leaders
Discussion on the negative media portrayal of tech figures and the rationale behind it.

The importance of focusing on the positive impacts of technological advancements rather than the controversies.

Title: Re: How close are we from building a virtual universe?
Post by: alancalverd on 31/03/2024 17:28:56
When the consensus doesn't match reality, sooner or later its proponents will face a reality check and be forced to shift their paradigm.
At the cost of how many lives? Engineering isn't a game.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 06/04/2024 16:41:58
When the consensus doesn't match reality, sooner or later its proponents will face a reality check and be forced to shift their paradigm.
At the cost of how many lives? Engineering isn't a game.
Engineering is a part of life. Life is a game for everyone; the prize is the survival of one's successors.
If you think about it, even our current selves are successors of our own pasts. Likewise, our future selves are successors of our current selves. Our bodies consist of different atoms than they did decades ago. They will also be different in the decades to come, whether or not we are still alive.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 17/04/2024 03:48:43
Grok Vision - First Multimodal Model from XAi
Quote
X.ai just made an announcement about Grok-1.5 Vision. It's their new multimodal model that can understand images and can write code based on flow diagrams, just like GPT-4 :)

Grok 1.5 Vision Shows STUNNING Performance | Beats GPT-4, Claude and Gemini 1.5
Quote
GROK:
https://x.ai/blog/grok-1.5v
Introducing Grok-1.5V, our first-generation multimodal model. In addition to its strong text capabilities, Grok can now process a wide variety of visual information, including documents, diagrams, charts, screenshots, and photographs. Grok-1.5V will be available soon to our early testers and existing Grok users.

Capabilities
Grok-1.5V is competitive with existing frontier multimodal models in a number of domains, ranging from multi-disciplinary reasoning to understanding documents, science diagrams, charts, screenshots, and photographs. We are particularly excited about Grok's capabilities in understanding our physical world. Grok outperforms its peers in our new RealWorldQA benchmark that measures real-world spatial understanding. For all datasets below, we evaluate Grok in a zero-shot setting without chain-of-thought prompting.
The universe is a dynamic system, so an accurate virtual universe must also be dynamic, i.e. change with time to reflect the real universe. An accurate and dynamic virtual universe must be able to understand the information it gets from its sensors and other inputs. The RealWorldQA benchmark is a step forward.
Title: Re: How close are we from building a virtual universe?
Post by: alancalverd on 17/04/2024 08:18:37
The universe is a dynamic system, thus an accurate virtual universe must also be dynamic, i.e. change with time to reflect the real universe.
Including itself.
Fact is that any mapping is necessarily incomplete. A database can be accurate or up to date, but not both.
Title: Re: How close are we from building a virtual universe?
Post by: hamdani yusuf on 23/04/2024 17:05:51
The universe is a dynamic system, thus an accurate virtual universe must also be dynamic, i.e. change with time to reflect the real universe.
Including itself.
Fact is that any mapping is necessarily incomplete. A database can be accurate or up to date, but not both.
It's a false dichotomy. It can be sufficiently accurate, precise, and up to date within certain practical limits. The remaining inaccuracies can be assigned to random factors with computable probabilities.
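One way to make "accurate within practical limits" concrete is to model the drift of a real-world quantity since the last database update as a random walk, so the residual error of the stored value has a computable probability distribution. The drift rate, tolerance, and 2-sigma refresh rule below are made-up example numbers, not a prescription:

```python
import math

def staleness_sigma(drift_per_sqrt_hour: float, hours_since_update: float) -> float:
    """Standard deviation of a stale reading's error under a random-walk drift model."""
    return drift_per_sqrt_hour * math.sqrt(hours_since_update)

def needs_refresh(drift_per_sqrt_hour: float, hours_since_update: float,
                  tolerance: float) -> bool:
    """Refresh when the 2-sigma error band exceeds the accuracy tolerance."""
    return 2 * staleness_sigma(drift_per_sqrt_hour, hours_since_update) > tolerance

# A value drifting ~0.5 units per sqrt(hour), with a tolerance of 3 units:
print(needs_refresh(0.5, 4.0, 3.0))   # 2 * 0.5 * 2 = 2.0 <= 3.0 -> False
print(needs_refresh(0.5, 16.0, 3.0))  # 2 * 0.5 * 4 = 4.0 >  3.0 -> True
```

Under such a model a database is never perfectly current, but its error stays bounded with known probability, which is the practical sense in which it can be both accurate and up to date.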