https://www.science.org/doi/10.1126/science.abj5089

Epigenetic patterns in a complete human genome

Abstract
The completion of a telomere-to-telomere human reference genome, T2T-CHM13, has resolved complex regions of the genome, including repetitive and homologous regions. Here, we present a high-resolution epigenetic study of previously unresolved sequences, representing entire acrocentric chromosome short arms, gene family expansions, and a diverse collection of repeat classes. This resource precisely maps CpG methylation (32.28 million CpGs), DNA accessibility, and short-read datasets (166,058 previously unresolved chromatin immunoprecipitation sequencing peaks) to provide evidence of activity across previously unidentified or corrected genes and reveals clinically relevant paralog-specific regulation. Probing CpG methylation across human centromeres from six diverse individuals generated an estimate of variability in kinetochore localization. This analysis provides a framework with which to investigate the most elusive regions of the human genome, granting insights into epigenetic regulation.
OpenAI (@OpenAI) tweeted at 9:07 PM on Wed, Apr 06, 2022:
Our newest system DALL·E 2 can create realistic images and art from a description in natural language. See it here: https://t.co/Kmjko82YO5 https://t.co/QEh9kWUE8A
(https://twitter.com/OpenAI/status/1511707245536428034?t=u1xywMQQXbQTgV4AM_ceHA&s=03)
DALL·E 2 is a new AI system that can create realistic images and art from a description in natural language.

DALL·E 2 has learned the relationship between images and the text used to describe them. It uses a process called “diffusion,” which starts with a pattern of random dots and gradually alters that pattern towards an image when it recognizes specific aspects of that image.
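For intuition, here is a minimal, purely illustrative Python sketch of that diffusion idea: start from random noise and repeatedly nudge it toward a predicted clean image. The toy_reverse_diffusion function, its blending schedule, and the "prediction" standing in for a learned denoiser are all hypothetical; DALL·E 2's actual model learns those predictions from text and image data.

Code:
import numpy as np

def toy_reverse_diffusion(target, steps=50, seed=0):
    """Conceptual sketch only: begin with random dots (noise) and repeatedly
    nudge the array toward a predicted clean image. Here the 'prediction'
    is simply the known target; a real diffusion model learns it."""
    rng = np.random.default_rng(seed)
    x = rng.normal(size=target.shape)          # start from pure noise
    for t in range(steps):
        alpha = (t + 1) / steps                # how much to trust the prediction
        predicted_clean = target               # stand-in for a learned denoiser
        noise = rng.normal(size=target.shape) * (1.0 - alpha)
        x = (1.0 - alpha) * x + alpha * predicted_clean + 0.1 * noise
    return x

# Usage: recover an 8x8 "image" from noise; the residual error shrinks to ~0.
target = np.linspace(0.0, 1.0, 64).reshape(8, 8)
result = toy_reverse_diffusion(target)
print(np.abs(result - target).mean())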
http://www.incompleteideas.net/IncIdeas/BitterLesson.html?s=03

The Bitter Lesson
Rich Sutton
March 13, 2019

The biggest lesson that can be read from 70 years of AI research is that general methods that leverage computation are ultimately the most effective, and by a large margin. The ultimate reason for this is Moore's law, or rather its generalization of continued exponentially falling cost per unit of computation. Most AI research has been conducted as if the computation available to the agent were constant (in which case leveraging human knowledge would be one of the only ways to improve performance) but, over a slightly longer time than a typical research project, massively more computation inevitably becomes available. Seeking an improvement that makes a difference in the shorter term, researchers seek to leverage their human knowledge of the domain, but the only thing that matters in the long run is the leveraging of computation. These two need not run counter to each other, but in practice they tend to. Time spent on one is time not spent on the other. There are psychological commitments to investment in one approach or the other. And the human-knowledge approach tends to complicate methods in ways that make them less suited to taking advantage of general methods leveraging computation. There were many examples of AI researchers' belated learning of this bitter lesson, and it is instructive to review some of the most prominent.

The bitter lesson is based on the historical observations that 1) AI researchers have often tried to build knowledge into their agents, 2) this always helps in the short term, and is personally satisfying to the researcher, but 3) in the long run it plateaus and even inhibits further progress, and 4) breakthrough progress eventually arrives by an opposing approach based on scaling computation by search and learning. The eventual success is tinged with bitterness, and often incompletely digested, because it is success over a favored, human-centric approach.
Here's another impressive advance in AI.
Optical illusions are fun, but they can also teach us a lot about how our brains work. In particular, how our brains accomplish the incredible feat of constructing a three-dimensional reality using nothing but 2-D images from our eyes. A young artist and psychology researcher named Adelbert Ames, Jr. developed a series of illusions that help us understand how this process of constructing reality actually works. Sometimes we need to be fooled in order to gain understanding.
The Bitter Lesson in AI research.
https://nautil.us/deep-learning-is-hitting-a-wall-14467/

In November 2020, Hinton told MIT Technology Review that “deep learning is going to be able to do everything.”[4]

I seriously doubt it. In truth, we are still a long way from machines that can genuinely understand human language, and nowhere near the ordinary day-to-day intelligence of Rosey the Robot, a science-fiction housekeeper that could not only interpret a wide variety of human requests but safely act on them in real time. Sure, Elon Musk recently said that the new humanoid robot he was hoping to build, Optimus, would someday be bigger than the vehicle industry, but as of Tesla’s AI Demo Day 2021, in which the robot was announced, Optimus was nothing more than a human in a costume. Google’s latest contribution to language is a system (Lamda) that is so flighty that one of its own authors recently acknowledged it is prone to producing “bullshit.”[5] Turning the tide, and getting to AI we can really trust, ain’t going to be easy.

In time we will see that deep learning was only a tiny part of what we need to build if we’re ever going to get trustworthy AI.
What should we do about it? One option, currently trendy, might be just to gather more data. Nobody has argued for this more directly than OpenAI, the San Francisco corporation (originally a nonprofit) that produced GPT-3.

In 2020, Jared Kaplan and his collaborators at OpenAI suggested that there was a set of “scaling laws” for neural network models of language; they found that the more data they fed into their neural networks, the better those networks performed.[10] The implication was that we could do better and better AI if we gather more data and apply deep learning at increasingly large scales. The company’s charismatic CEO Sam Altman wrote a triumphant blog post trumpeting “Moore’s Law for Everything,” claiming that we were just a few years away from “computers that can think,” “read legal documents,” and (echoing IBM Watson) “give medical advice.”

Maybe, but maybe not. There are serious holes in the scaling argument. To begin with, the measures that have scaled have not captured what we desperately need to improve: genuine comprehension. Insiders have long known that one of the biggest problems in AI research is the tests (“benchmarks”) that we use to evaluate AI systems. The well-known Turing Test, aimed at measuring genuine intelligence, turns out to be easily gamed by chatbots that act paranoid or uncooperative. Scaling the measures Kaplan and his OpenAI colleagues looked at—about predicting words in a sentence—is not tantamount to the kind of deep comprehension true AI would require.

What’s more, the so-called scaling laws aren’t universal laws like gravity but rather mere observations that might not hold forever, much like Moore’s law, a trend in computer chip production that held for decades but arguably began to slow a decade ago.[11]

Indeed, we may already be running into scaling limits in deep learning, perhaps already approaching a point of diminishing returns. In the last several months, research from DeepMind and elsewhere on models even larger than GPT-3 has shown that scaling starts to falter on some measures, such as toxicity, truthfulness, reasoning, and common sense.[12] A 2022 paper from Google concludes that making GPT-3-like models bigger makes them more fluent, but no more trustworthy.[13]

Such signs should be alarming to the autonomous-driving industry, which has largely banked on scaling, rather than on developing more sophisticated reasoning. If scaling doesn’t get us to safe autonomous driving, tens of billions of dollars of investment in scaling could turn out to be for naught.
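To make concrete what a "scaling law" is in this context, here is a minimal Python sketch: the Kaplan-style observation is an empirical power-law fit of loss against scale, which can be estimated by linear regression in log-log space. The model sizes, loss values, and exponent below are made up for illustration; they are not the paper's numbers.

Code:
import numpy as np

def fit_power_law(n, loss):
    """Fit loss ≈ a * N^(-b) by linear regression on log-log values."""
    slope, log_a = np.polyfit(np.log(n), np.log(loss), 1)
    return np.exp(log_a), -slope

n = np.array([1e6, 1e7, 1e8, 1e9, 1e10])   # hypothetical model/data sizes
loss = 5.0 * n ** -0.07 + np.random.default_rng(0).normal(0, 0.01, n.size)

a, b = fit_power_law(n, loss)
print(f"fitted loss ≈ {a:.2f} * N^(-{b:.3f})")
# Marcus's point: such fits are observed trends, not laws of nature — nothing
# guarantees they extrapolate, and they measure next-word prediction, not comprehension.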
Some other AI researchers don't seem to agree with the conclusion above.
https://www.zdnet.com/article/microsoft-veteran-bob-muglia-relational-knowledge-graphs-will-transform-business/

Microsoft veteran Bob Muglia: Relational knowledge graphs will transform business
'We're at the start of a whole new era' with knowledge graphs, says Microsoft veteran Bob Muglia, akin to the arrival of the modern data stack in 2013.

Bob Muglia says twenty years of work on database innovation will bring the relational calculus of E.F. Codd to knowledge graphs, what he calls "relational knowledge graphs," to revolutionize business analysis.

Bob Muglia is something of a bard of databases, capable of unfurling sweeping tales in the evolution of technology. That is what Muglia, former Microsoft executive and former Snowflake CEO, did Wednesday morning during his keynote address at The Knowledge Graph Conference in New York. The subject of his talk, "From the Modern Data Stack to Knowledge Graphs," united roughly fifty years of database technology in one new form.

The basic story is this: Five companies have created modern data analytics platforms, Snowflake, Amazon, Databricks, Google, and Azure, but those data analytics platforms can't do business analytics, including, most importantly, representing the rules that underlie compliance and governance. "The industry knows this is a problem," said Muglia. The five platforms, he said, representing "the modern data stack," have allowed "a new generation of these very, very important data apps to be built." However, "When we look at the modern data stack, and we look at what we can do effectively and what we can't do effectively, I would say the number one problem that customers are having with all five of these platforms is governance."

"So, if you wanted to perform a query to say, 'Hey, tell me all of the resources that Fred Jones has access to in this organization' — that's a hard query to write," he said. "In fact, it's a query that probably can't execute effectively on any modern SQL database if the organization is very large and complex."

The problem, said Muglia, was that algorithms based on structured query language, or SQL, can't do such complex "recursive" queries.
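To see why Muglia's example is hard for plain SQL, note that access rights in a large organization form a graph (user → group → nested group → resource), so "what can Fred Jones access?" is a reachability question of unbounded depth. Below is a minimal Python sketch of that traversal; the grant data and names are hypothetical, and this is an illustration of the recursion involved, not Muglia's product or any specific knowledge-graph engine.

Code:
from collections import defaultdict, deque

edges = defaultdict(set)   # subject -> things it can reach directly

def grant(subject, obj):
    edges[subject].add(obj)

grant("fred.jones", "analysts")        # user belongs to a group
grant("analysts", "sales-reports")     # group can read a resource
grant("analysts", "emea-staff")        # groups nest arbitrarily deep
grant("emea-staff", "travel-budget")

def reachable(subject):
    """Everything the subject can reach, however many hops away."""
    seen, queue = set(), deque([subject])
    while queue:
        node = queue.popleft()
        for nxt in edges[node]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

print(sorted(reachable("fred.jones")))
# ['analysts', 'emea-staff', 'sales-reports', 'travel-budget']

In SQL this kind of unbounded-depth traversal needs a recursive common table expression, which is exactly the sort of query Muglia argues today's analytics platforms handle poorly and relational knowledge graphs aim to make first-class.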
https://twitter.com/ylecun/status/1526672565233758213?t=ryNVncrigCsgvQqm_oQFUA&s=03

About the raging debate regarding the significance of recent progress in AI, it may be useful to (re)state a few obvious facts:

(0) there is no such thing as AGI. Reaching "Human Level AI" may be a useful goal, but even humans are specialized.
(1) the research community is making *some* progress towards HLAI
(2) scaling up helps. It's necessary but not sufficient, because....
(3) we are still missing some fundamental concepts
(4) some of those new concepts are possibly "around the corner" (e.g. generalized self-supervised learning)
(5) but we don't know how many such new concepts are needed. We just see the most obvious ones.
(6) hence, we can't predict how long it's going to take to reach HLAI.

I really don't think it's just a matter of scaling things up. We still don't have a learning paradigm that allows machines to learn how the world works, like human and many non-human babies do.

Some may believe scaling up a giant transformer trained on sequences of tokenized inputs is enough. Others believe "reward is enough". Yet others believe that explicit symbol manipulation is necessary. A few don't believe gradient-based learning is part of the solution.

I believe we need to find new concepts that would allow machines to:
- learn how the world works by observing like babies.
- learn to predict how one can influence the world through taking actions.
- learn hierarchical representations that allow long-term predictions in abstract spaces.
- properly deal with the fact that the world is not completely predictable.
- enable agents to predict the effects of sequences of actions so as to be able to reason & plan
- enable machines to plan hierarchically, decomposing a complex task into subtasks.
- all of this in ways that are compatible with gradient-based learning.

The solution is not just around the corner. We have a number of obstacles to clear, and we don't know how.
Reaching "Human Level AI" may be a useful goal, but even humans are specialized.
Quote
I really don't think it's just a matter of scaling things up. We still don't have a learning paradigm that allows machines to learn how the world works, like human and many non-human babies do.
https://twitter.com/fchollet/status/1528111120648572928?t=2cQMwifGgHohHCFHYkF8yQ&s=03

The dominant intellectual current in AI research today is the belief that we can (and soon will) create human-level AI without having to understand how the mind works (and without even having a proper definition of intelligence), through pure behaviorism and gradient descent. That's fundamentally wrong.

It reminds me of an earlier belief that we could recreate the mind by simulating the brain in fine-grained detail, without having to understand how it works beyond the micro-level. That was a similar kind of mistake.

A more correct take is the reverse take: if you understand how the mind works at a high level, then you no longer need to understand the fine-grained details, because you can recreate those details in a different (and perhaps more efficient) form.
https://www.technologyreview.com/2022/05/23/1052627/deepmind-gato-ai-model-hype/

Earlier this month, DeepMind presented a new “generalist” AI model called Gato. The model can play Atari video games, caption images, chat, and stack blocks with a real robot arm, the Alphabet-owned AI lab announced. All in all, Gato can do 604 different tasks. But while Gato is undeniably fascinating, in the week since its release some researchers have gotten a bit carried away.
Here's the video.
Quote
Some may believe scaling up a giant transformer trained on sequences of tokenized inputs is enough. Others believe "reward is enough". Yet others believe that explicit symbol manipulation is necessary.
Quote
I believe we need to find new concepts that would allow machines to:
- learn hierarchical representations that allow long-term predictions in abstract spaces.
TLDR of the paper this video is based on:

Conclusions
In this paper, we have demonstrated that the MuZero reinforcement learning algorithm can be used for rate control in VP9. Our formulation of the self-competition based reward mechanism allows the agent to tackle the complex constrained optimization task and achieve better quality-bitrate tradeoff and better bitrate constraint satisfaction than libvpx's VBR rate control algorithm. The final agent results in 6.28% average reduction in bitrate (measured as PSNR BD-rate) on videos from the evaluation set, and can be readily deployed in libvpx via the SimpleEncode API.

Limitations: The self-competition based reward mechanism requires that every unique [video, target bitrate] pair be encoded a few times so that the historical performance converges and provides a reasonable baseline for reward computation. Because of this, the amount of data the actors need to generate increases linearly with the number of videos in the training dataset and the number of target bitrate samples. For very large training datasets, this method might not scale well. However, in future work, it may be possible to learn these baseline values based on observations using a neural network which can generalise to unseen videos in a large dataset.

Future Work: Our proposed methods are agnostic to the specifics of VP9/libvpx, and they can potentially be generalized not only to other coding formats and implementations, but also to other components within video encoders such as block partitioning and reference frame selection. Our method also opens the possibility of allowing codec developers and users to develop new rate control modes. For example, we can replace PSNR with other video quality metrics such as VMAF. We can also modify the reward to minimize bitrate given a minimum PSNR constraint – which is similar to the constrained quality (CQ) mode in libvpx, but reinforcement learning is likely to learn a policy that has more precise control of the PSNR.
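For a rough feel of the self-competition idea described in the Limitations paragraph, here is a minimal Python sketch: each [video, target bitrate] pair keeps a running baseline of its own past encodes, and a new encode is rewarded for beating that history while staying under the bitrate target. The weighting, numbers, and function names are hypothetical; this is not the paper's exact reward formulation.

Code:
from collections import defaultdict

history = defaultdict(list)   # (video, target_kbps) -> list of (psnr, kbps)

def self_competition_reward(video, target_kbps, psnr, kbps):
    """Reward an encode for beating the historical baseline of the same
    (video, target bitrate) pair; purely illustrative weighting."""
    key = (video, target_kbps)
    past = history[key]
    if past:
        baseline_psnr = sum(p for p, _ in past) / len(past)
        baseline_over = sum(max(0.0, b - target_kbps) for _, b in past) / len(past)
    else:
        baseline_psnr, baseline_over = psnr, 0.0   # first encode: neutral baseline
    overshoot = max(0.0, kbps - target_kbps)
    reward = (psnr - baseline_psnr) - 0.1 * (overshoot - baseline_over)
    past.append((psnr, kbps))
    return reward

print(self_competition_reward("clip_001", 1000, psnr=38.2, kbps=990))   # 0.0 (first encode)
print(self_competition_reward("clip_001", 1000, psnr=38.9, kbps=1005))  # > 0: beat its own history

This also makes the stated limitation visible: the baseline is only meaningful after each pair has been encoded several times, which is why the training data requirement grows with the number of videos and target bitrates.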