Nearly 8,000 miles from Osama bin Laden's lair, Navy Seal Team Six trained in a mock-up of the compound at a North Carolina Defense Department facility.
Among unicellular organisms, the CRISPR defense system can be seen as an outstanding example of a virtual universe. These organisms memorize the genetic code of an invading virus in the form of DNA, which is perhaps the only long-term data storage they have. At a glance, this may look costly, but it turns out that the benefits outweigh the costs.
AI research company OpenAI is releasing a new machine learning tool that translates the English language into code. The software is called Codex and is designed to speed up the work of professional programmers, as well as help amateurs get started coding.

In demos of Codex, OpenAI shows how the software can be used to build simple websites and rudimentary games using natural language, as well as translate between different programming languages and tackle data science queries. Users type English commands into the software, like “create a webpage with a menu on the side and title at the top,” and Codex translates this into code. The software is far from infallible and takes some patience to operate, but could prove invaluable in making coding faster and more accessible.

“We see this as a tool to multiply programmers,” OpenAI’s CTO and co-founder Greg Brockman told The Verge. “Programming has two parts to it: you have ‘think hard about a problem and try to understand it,’ and ‘map those small pieces to existing code, whether it’s a library, a function, or an API.’” The second part is tedious, he says, but it’s what Codex is best at. “It takes people who are already programmers and removes the drudge work.”

OpenAI used an earlier version of Codex to build a tool called Copilot for GitHub, a code repository owned by Microsoft, which is itself a close partner of OpenAI. Copilot is similar to the autocomplete tools found in Gmail, offering suggestions on how to finish lines of code as users type them out. OpenAI’s new version of Codex, though, is much more advanced and flexible, not just completing code, but creating it.

Codex is built on top of GPT-3, OpenAI’s language generation model, which was trained on a sizable chunk of the internet, and as a result can generate and parse the written word in impressive ways.
One application users found for GPT-3 was generating code, but Codex improves upon its predecessors’ abilities and is trained specifically on open-source code repositories scraped from the web.
“Sometimes it doesn’t quite know exactly what you’re asking,” laughs Brockman. He tries a few more times, then comes up with a command that works without this unwanted change. “So you had to think a little about what’s going on but not super deeply,” he says.

This is fine in our little demo, but it says a lot about the limitations of this sort of program. It’s not a magic genie that can read your brain, turning every command into flawless code — nor does OpenAI claim it is. Instead, it requires thought and a little trial and error to use. Codex won’t turn non-coders into expert programmers overnight, but it’s certainly much more accessible than any other programming language out there.

OpenAI is bullish about the potential of Codex to change programming and computing more generally. Brockman says it could help solve the programmer shortage in the US, while Zaremba sees it as the next step in the historical evolution of coding.

“What is happening with Codex has happened before a few times,” he says. In the early days of computing, programming was done by creating physical punch cards that had to be fed into machines; then people invented the first programming languages and began to refine them. “These programming languages, they started to resemble English, using vocabulary like ‘print’ or ‘exit’ and so more people became able to program.” The next step in this trajectory is doing away with specialized coding languages altogether and replacing them with English-language commands.

“Each of these stages represents programming languages becoming more high level,” says Zaremba. “And we think Codex is bringing computers closer to humans, letting them speak English rather than machine code.” Codex itself can speak more than a dozen coding languages, including JavaScript, Go, Perl, PHP, Ruby, Swift, and TypeScript. It’s most proficient, though, in Python.
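To make the workflow concrete, here is a hypothetical sketch of the kind of English-to-code translation the article describes. Both the prompt and the "completion" below are hand-written for illustration; they are not actual Codex output, and the function name `most_common_words` is invented.

```python
from collections import Counter

# English prompt a user might type into a Codex-style tool:
# "write a function that returns the n most common words in a text"

# A plausible completion for that prompt (hand-written here, purely
# illustrative -- not generated by Codex):
def most_common_words(text, n):
    """Return the n most frequent words in `text`, most frequent first."""
    words = text.lower().split()
    return [word for word, _ in Counter(words).most_common(n)]

print(most_common_words("the cat and the dog and the bird", 2))
# prints ['the', 'and']
```

The point of the sketch is the division of labor Brockman describes: the user states intent in English, and the model handles the "drudge work" of mapping it onto library calls like `Counter.most_common`.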
In this century, humanity is predicted to undergo a transformative experience, the likes of which have not been seen since we first began to speak, fashion tools, and plant crops. This experience goes by various names - "Intelligence Explosion," "Accelerando," "Technological Singularity" - but they all have one thing in common.

They all come down to the hypothesis that accelerating change, technological progress, and knowledge will radically change humanity. In its various forms, this theory cites concepts like the iterative nature of technology, advances in computing, and historical instances where major innovations led to explosive growth in human societies.

Many proponents believe that this "explosion" or "acceleration" will take place sometime during the 21st century. While the specifics are subject to debate, there is general consensus among proponents that it will come down to developments in the fields of computing and artificial intelligence (AI), robotics, nanotechnology, and biotechnology.

In addition, there are differences of opinion as to how it will take place: whether it will be the result of ever-accelerating change, a runaway acceleration triggered by self-replicating and self-upgrading machines, an "intelligence explosion" caused by the birth of an advanced and independent AI, or the result of biotechnological augmentation and enhancement.

Opinions also differ on whether this will be felt as a sudden, switch-like event or as a gradual process spread out over time, one that might not have a definable beginning or inflection point. But either way, it is agreed that once the Singularity does occur, life will never be the same again. In this respect, the term "singularity" - usually used in the context of black holes - is quite apt, because it too has an event horizon, a point in time where our capacity to understand its implications breaks down.
The use of the term "singularity" in this context first appeared in an article written by Stanislaw Ulam about the life and accomplishments of John von Neumann. In the course of recounting opinions his friend held, Ulam described how the two talked at one point about accelerating change:

"One conversation centered on the ever-accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue."

However, the idea that humanity may one day achieve an "intelligence explosion" has some precedent that predates Ulam's description. Mahendra Prasad of UC Berkeley, for example, credits 18th-century mathematician Nicolas de Condorcet with making the first recorded prediction, as well as creating the first model for it.

In his essay, Sketch for a Historical Picture of the Progress of the Human Mind: Tenth Epoch (1794), de Condorcet expressed how knowledge acquisition, technological development, and human moral progress were subject to acceleration:

"How much greater would be the certainty, how much more vast the scheme of our hopes if... these natural [human] faculties themselves and this [human body] organization could also be improved?... The improvement of medical practice... will become more efficacious with the progress of reason...

"[W]e are bound to believe that the average length of human life will forever increase... May we not extend [our] hopes [of perfectibility] to the intellectual and moral faculties?... Is it not probable that education, in perfecting these qualities, will at the same time influence, modify, and perfect the [physical] organization?"
Huffman codes are one of the most important discoveries in the field of data compression. When you first see them, they almost feel obvious in hindsight, mainly due to how simple and elegant the algorithm ends up being. But there's an underlying story, often missed, of how Huffman discovered them and how he built the idea from early work in information theory. This video is all about how information theory inspired the first algorithms in data compression, which later provided the groundwork for Huffman's landmark discovery.

0:00 Intro
2:02 Modeling Data Compression Problems
6:20 Measuring Information
8:14 Self-Information and Entropy
11:03 The Connection between Entropy and Compression
16:47 Shannon-Fano Coding
19:52 Huffman's Improvement
24:10 Huffman Coding Examples
26:10 Huffman Coding Implementation
27:08 Recap
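As a companion to the video's implementation section, here is a minimal Huffman-coding sketch in Python. The video does not specify a language or representation, so the function name and the code-table representation below are my own choices.

```python
import heapq
from collections import Counter

def huffman_codes(text):
    """Build a Huffman code table {symbol: bitstring} for `text`.

    Greedy algorithm: repeatedly merge the two least-frequent
    subtrees, so rarer symbols end up deeper (longer codes).
    """
    freq = Counter(text)
    # Heap entries: (frequency, tiebreaker, {symbol: code_so_far}).
    # The tiebreaker keeps tuple comparison away from the dicts.
    heap = [(f, i, {sym: ""}) for i, (sym, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    if len(heap) == 1:  # degenerate case: one distinct symbol
        return {sym: "0" for sym in heap[0][2]}
    tie = len(heap)
    while len(heap) > 1:
        f1, _, t1 = heapq.heappop(heap)
        f2, _, t2 = heapq.heappop(heap)
        # Prepend a bit: one subtree gets '0', the other '1'.
        merged = {s: "0" + c for s, c in t1.items()}
        merged.update({s: "1" + c for s, c in t2.items()})
        heapq.heappush(heap, (f1 + f2, tie, merged))
        tie += 1
    return heap[0][2]

codes = huffman_codes("abracadabra")
# 'a' occurs 5 of 11 times, so it gets the shortest code (1 bit);
# the rarer symbols b, r, c, d each get 3 bits.
print(codes)
```

Because the two cheapest subtrees are merged at every step, the resulting prefix-free code minimizes the expected code length, which is what makes Huffman's improvement over Shannon-Fano coding optimal.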
In recent years, Knowledge Graphs have been used to address one of the biggest problems not only in machine learning but in computer science in general: how to represent knowledge.

“Knowledge representation and reasoning is the area of Artificial Intelligence (AI) concerned with how knowledge can be represented symbolically and manipulated in an automated way by reasoning programs. More informally, it is the part of AI that is concerned with thinking, and how thinking contributes to intelligent behavior.” [Brachman and Levesque, 2004]

This aspect is critical, since any “agent” — human, animal, electronic, or mechanical — requires knowledge to behave intelligently. Think about us as humans: for a very wide range of activities, we make decisions based on what we effortlessly and unconsciously know (or believe) about the world. Our [intelligent] behaviour is clearly conditioned, if not dominated, by knowledge.

Knowledge representation and reasoning focuses on the knowledge, not the knower. In this context, a graph-based representation is becoming one of the most prominent approaches, thanks to its flexibility in representing concepts, and the relationships among them, in a simple and generic data structure.
What is a Knowledge Graph?

For this question there is no gold-standard, universally accepted definition, but my favorite is the one given by Gomez-Perez et al. [Gomez-Perez et al., 2020]:

“A knowledge graph consists of a set of interconnected typed entities and their attributes.”

According to this definition, the basic unit of a Knowledge Graph is the representation of an entity, such as a person, organization, or location, or perhaps a sporting event, a book, or a movie. Each entity might have various attributes. For a person, those attributes would include the name, address, birth date, and so on. Entities are connected to each other by relations: for example, a person works for a company, and a user likes a page or follows another user. Relations can also be used to bridge two separate Knowledge Graphs [Negro, 2021].
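The entity/attribute/relation model described above can be sketched as a set of (subject, predicate, object) triples. This is a minimal sketch, not any particular Knowledge Graph product; the entity names and data are invented for illustration.

```python
class KnowledgeGraph:
    """A toy triple store: each fact is a (subject, predicate, object) tuple."""

    def __init__(self):
        self.triples = set()

    def add(self, subject, predicate, obj):
        self.triples.add((subject, predicate, obj))

    def query(self, subject=None, predicate=None, obj=None):
        """Return all triples matching the pattern (None acts as a wildcard)."""
        return [
            t for t in self.triples
            if (subject is None or t[0] == subject)
            and (predicate is None or t[1] == predicate)
            and (obj is None or t[2] == obj)
        ]

kg = KnowledgeGraph()
# Entities with attributes...
kg.add("alice", "type", "Person")
kg.add("alice", "birth_date", "1990-04-01")
kg.add("acme", "type", "Organization")
# ...connected to each other by relations.
kg.add("alice", "works_for", "acme")
kg.add("alice", "follows", "bob")

print(kg.query("alice", "works_for"))
# prints [('alice', 'works_for', 'acme')]
```

The same pattern-matching query works for attributes and relations alike, which is the flexibility the definition above points to: one generic structure for both.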
Conclusion

This blog post demonstrates how Knowledge Graphs can concretely represent the knowledge available in multiple domains, and not only in a way that facilitates its exploration and navigation by analysts. The inherent structures, and the forces that drive the connections among the entities in the graph (in our example, the biological rules of the domain), can also be captured and analyzed by artificial and autonomous agents. The classification presented here is just one example of how machine learning algorithms can be fed by a graph in a manner that would otherwise be impossible or very hard. To obtain the same accuracy, we would have had to collect many common features for each of the entities we wanted to classify.

It is worth noting that this effort doesn't aim to replace the human capability to analyze this knowledge; it is an empowerment. Processing enormous amounts of data goes beyond human possibilities, which is why machine learning was introduced in the first place. In any case, at the end of these processes it is always a human responsibility to evaluate the insights and, more generally, the results of this analysis, and to make informed and wiser decisions based on them.
Human induced pluripotent stem cells (iPSCs) can be used to generate brain organoids containing an eye structure called the optic cup, according to a study published on August 17, 2021, in the journal Cell Stem Cell. The organoids spontaneously developed bilaterally symmetric optic cups from the front of the brain-like region, demonstrating the intrinsic self-patterning ability of iPSCs in a highly complex biological process.“Our work highlights the remarkable ability of brain organoids to generate primitive sensory structures that are light sensitive and harbor cell types similar to those found in the body,” says senior study author Jay Gopalakrishnan of University Hospital Düsseldorf. “These organoids can help to study brain-eye interactions during embryo development, model congenital retinal disorders, and generate patient-specific retinal cell types for personalized drug testing and transplantation therapies.”
DeepMind's Idea to Build Neural Networks that Can Replay Past Experiences Just Like Humans Do

DeepMind researchers created a model that can replay past experiences in a way that simulates the mechanisms of the hippocampus.
For decades, constant innovation in the world of semiconductor chip design has made processors faster, more efficient, and easier to produce. Artificial intelligence (A.I.) is leading the next wave of innovation, trimming the chip design process from years to months by making it fully autonomous.

Google, Nvidia, and others have showcased specialized chips designed by A.I., and electronic design automation (EDA) companies have already leveraged A.I. to speed up chip design. Software company Synopsys has a broader vision: chips designed by A.I. from start to finish.
On August 18, digital payments giant Visa spent $150,000 to buy a unique work of art, and in so doing quietly took its first step into the metaverse, a nascent online world that promises to transform the internet into a virtual reality. Instead of canvas or marble, the pixelated artwork, named CryptoPunk 7610, is what’s known as a non-fungible token (NFT), a unique digital asset which, similarly to bitcoin, certifies the authenticity, ownership and provenance of any digital object written to a blockchain. One of the 10,000 24x24 pixel images of the algorithmically generated CryptoPunk collection, Visa’s first NFT is an avatar of a female character, distinguishable by a mohawk, large green eyes and bright red lipstick. However, the company didn’t actually take custody of the 49.5 ETH paid for the token, or of the asset itself. Instead, the newly licensed bank Anchorage helped facilitate the deal and, importantly, became the first known U.S. bank to custody one of these novel assets.
The great thing about the future is you can make it up. If present day reality is messier than you had hoped, then you can construct an alternative one, where everything is much cleaner. So it is with the latest West Coast infatuation with the metaverse. Now that the Federal Trade Commission is hammering on Big Tech’s door and even the Taliban is using audio app Clubhouse, maybe it is time to add a shiny new dimension to the future. The term metaverse comes from Snow Crash, a 1992 science fiction novel by Neal Stephenson, in which human avatars and software daemons inhabit a parallel 3D universe. The term now has a life of its own and has cropped up recently in chief executive presentations from Microsoft’s Satya Nadella and Facebook’s Mark Zuckerberg.
According to Boston Dynamics, Atlas uses “perception” to navigate the world. The company’s website states that Atlas uses “depth sensors to generate point clouds of the environment and detect its surroundings.” This is similar to the technology used in self-driving cars to detect roads, objects, and people in their surroundings.

This is another shortcut that the AI community has been taking. Human vision doesn’t rely on depth sensors. We use stereo vision, parallax motion, intuitive physics, and feedback from all our sensory systems to create a mental map of the environment. Our perception of the world is not perfect and can be duped, but it’s good enough to make us excellent navigators of the physical world most of the time.
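The depth-sensor-to-point-cloud step the article mentions can be sketched with a standard pinhole camera model. The intrinsics below (fx, fy, cx, cy) and the tiny depth image are made-up values for illustration; they have nothing to do with Atlas's actual sensors.

```python
def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project a 2D depth map into 3D points in the camera frame.

    depth[v][u] is the distance along the camera's z-axis at pixel (u, v);
    the pinhole model maps each pixel to
        x = (u - cx) * z / fx,   y = (v - cy) * z / fy.
    """
    points = []
    for v, row in enumerate(depth):
        for u, z in enumerate(row):
            if z <= 0:  # no return from the sensor at this pixel
                continue
            x = (u - cx) * z / fx
            y = (v - cy) * z / fy
            points.append((x, y, z))
    return points

# A 2x2 depth image: three hits at 2 m and one invalid pixel.
depth = [[2.0, 2.0],
         [0.0, 2.0]]
cloud = depth_to_point_cloud(depth, fx=1.0, fy=1.0, cx=0.5, cy=0.5)
print(cloud)  # three (x, y, z) points
```

Each valid pixel becomes one 3D point; running this over a full-resolution depth frame is what produces the dense "point cloud" a robot or self-driving car then segments into roads, objects, and people.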
I'm a CS/P
Well, it depends on what you mean by build, and what size.
build: construct (something) by putting parts or material together.
size: the relative extent of something; a thing's overall dimensions or magnitude; how big something is.
Actually, I have absolutely no doubt that our cosmos is a simulation, and that we are VR. And I'm not the only one who thinks so, of course. (The universe is my real/physical/HW-based/classical model, and the cosmos is my SW-based/virtual/quantum model.)
cos·mos: the universe seen as a well-ordered whole.
universe: all existing matter and space considered as a whole; the cosmos.
Therapeutic approach developed by Weizmann Institute scientists could spell new hope in the battle against COVID-19.

Even though vaccines may be steering the world toward a post-pandemic normal, a constantly mutating SARS-CoV-2 necessitates the development of effective drugs. In a new study published in Nature Microbiology, Weizmann Institute of Science researchers, together with collaborators from the Pasteur Institute, France, and the National Institutes of Health (NIH), USA, offer a novel therapeutic approach to combating the notorious virus. Rather than targeting the viral protein responsible for the virus entering the cell, the team of researchers addressed the protein on our cells’ membrane that enables this entry. Using an advanced artificial evolution method that they developed, the researchers generated a molecular “super cork” that physically jams this “entry port,” thus preventing the virus from attaching itself to the cell and entering it.
Active ingredient inhibits infection with so-called pseudoviruses in the test tube, as shown by a study at the University of Bonn.

Scientists at the University of Bonn and the caesar research center have isolated a molecule that might open new avenues in the fight against SARS coronavirus 2. The active ingredient binds to the spike protein that the virus uses to dock to the cells it infects. This prevents them from entering the respective cell, at least in the case of model viruses. It appears to do this by using a different mechanism than previously known inhibitors. The researchers therefore suspect that it may also help against viral mutations. The study will be published in the journal Angewandte Chemie and is already available online.

The novel active ingredient is a so-called aptamer. These are short chains of DNA, the chemical compound that also makes up chromosomes. DNA chains like to attach themselves to other molecules; one might call them sticky. In chromosomes, DNA is therefore present as two parallel strands whose sticky sides face each other and that coil around each other like two twisted threads.
What makes me think our cosmos is a simulation? All the quantum paradoxes. I have absolutely no doubt that something like that is possible only inside a computer, by a computer.