The Naked Scientists


Show Posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.


Messages - MaeveChondrally

1
The Environment / Re: The limit of climate change?
« on: 02/04/2022 03:59:44 »
If you Google "how much oxygen does the ocean produce?", the answer is that 50 to 85% of the world's oxygen is produced by plankton in the ocean, not by trees, land plants or bacteria.
The plankton population is about 40% of what it was in 1955, and it continues to decline because of temperature, pollution and pH changes.
When the calcium carbonate buffer saturates at 493 +/- 5 ppm CO2, in 2051 or sooner, the pH will suddenly decrease by about 0.5 pH units, which could kill off the remaining plankton and destroy the base of the food chain in the ocean!
I believe the published pH estimates are wrong. They say the pH will be 7.8 by 2100, whereas I think it will be closer to 7.3 because of the saturation of the calcium carbonate buffer around 2051, and we must make sure we never reach this point.

Check out these videos on youtube.com:
  • Introduction to Ocean Chemistry, Part 1, by Andrew G. Dickson
  • Introduction to Ocean Chemistry
  • Ocean Acidity: Why Should We Care?
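For anyone who wants to play with the numbers, here is a deliberately crude back-of-envelope sketch. It is my own simplification, not a real carbonate-chemistry model: it assumes [H+] scales roughly linearly with pCO2 at constant alkalinity, which the Dickson lectures above treat properly.

```python
# Back-of-envelope ocean pH vs. atmospheric CO2.
# Crude assumption (mine, not from the post): at roughly constant
# alkalinity, seawater [H+] scales about linearly with pCO2, so
# pH ~ pH_ref - log10(pCO2 / pCO2_ref).  Real carbonate chemistry
# is far more involved.
import math

def approx_ph(pco2_ppm, pco2_ref=280.0, ph_ref=8.2):
    """Rough pH estimate anchored at a pre-industrial 280 ppm / pH 8.2."""
    return ph_ref - math.log10(pco2_ppm / pco2_ref)

for ppm in (280, 420, 493):
    print(f"{ppm} ppm -> pH ~ {approx_ph(ppm):.2f}")
```

Under this crude scaling, 493 ppm gives a pH near 7.95 — a smaller drop than the sudden 0.5-unit change claimed above, which is exactly the disagreement the post raises.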

2
The Environment / Can we get rid of plastic in dumps and the ocean with an enzyme from a microbe?
« on: 07/02/2022 16:47:32 »
Is there an enzyme that decomposes plastic, secreted by a naturally occurring microbe?
Can we put its DNA in algae and get rid of the plastic in the ocean?
Is this type of genetic engineering safe?
Is it really our only option for getting rid of this plastic in a reasonable period of time?
Would the microbe be safe to unleash on the environment?

3
The Environment / The limit of climate change?
« on: 07/02/2022 08:15:11 »
What happens to the pH or acidity of the ocean as the CO2 level rises?
Does the calcium carbonate or magnesium carbonate buffer saturate? When?
Could it kill the phytoplankton, algae and seaweed in the ocean if the pH changes discontinuously?
Would it cut the planet off from a lot of its oxygen at that point? How much would we lose?
How much oxygen is being depleted by burning fossil fuels?
Would this lack of oxygen be fatal for humans at that point?
Would it kill the base of the food chain in the ocean?
Would this kill all the marine life?
At what CO2 level will this happen, and when?
What can we do to stop the CO2 level from reaching this point?
I think there has been a misunderstanding: the oxygen level will drop when the pH changes discontinuously, because the sudden change will kill the ocean plants.
What is the maximum temperature increase we can allow to keep the CO3 level below this critical limit with a safety margin, and how big should that safety margin be?

4
New Theories / Grand unified theory, the path of light and its relation to negative entropy
« on: 05/02/2022 19:25:20 »
E = m*c^2 and E = hf: Einstein's formula and Planck's formula for the energy of a photon together give m = hf/c^2, so photons have an equivalent mass. This is quantum-mechanical duality: light is both mass and energy. E = m*c^2 means mass is equivalent to energy, usually in a different phase; light is pure energy with an equivalent mass. If suns shine for 10 billion years, that is a great deal of mass out in space when you consider all the galaxies and all the photons emitted over all past time. This could account for all the extra mass that does have gravity. Light does have momentum: in the photoelectric effect, photons absorbed by a solar panel exchange momentum with it, and there is a measurable force and displacement on the panel. Light has equivalent mass and self-gravity. It is very small for a single photon, but not for the number of photons emitted over 10 billion years by an entire galaxy; I'm sure that is quite a considerable amount of mass and gravity. Similarly, a black hole attracts, absorbs and consumes light; if light had no gravity, it would not be attracted to black holes, which implies it has mass or the equivalent of it. Light probably attracts mass and is attracted by it; this would be a modification of general relativity, and it would be quantum gravity. I see the way forward as stopping nonsensical dark matter and dark energy research and instead mapping the light spectrum and the spacetime locations of energy and mass, so that we can predict the orbits of galaxies and of their suns. Dark matter is light and dark energy is light; most of it cannot be seen because it is not in the visible spectrum: radio waves, microwaves, infrared, ultraviolet, T-rays, X-rays, gamma rays and cosmic rays. The frequency of some of these is very high, and that is quite a lot of mass out there. I believe there has been a misunderstanding.
It can actually be seen with special detectors on satellite telescopes; the Chandra satellite carries a telescope and an X-ray camera. There is also Young's double-slit experiment: with one slit, firing photons at it separately creates a bullet pattern on a film on the other side of the slit; with two slits, firing single photons produces an interference pattern on the film with magnitudes commensurate with a probability distribution. This is quantum-mechanical duality, where light is both a particle and a wave. And you have two eyes to see with, which create a probability-magnitude interference pattern in your visual cortex, giving depth vision and a mapping of spacetime into your cognition and memory.
Occam's razor says that the simplest hypothesis that explains the observed phenomena is probably true.
Try a different approach to the calculation: compute the number of photons emitted by a star in a second from its spectrum, calculate m = hf/c^2 for each frequency in the spectrum, then multiply up to 10 billion years (the average age of suns), and then by 100 billion suns for a galaxy like the Milky Way. Suns also accrete matter and hydrogen over millennia. See how much mass that becomes for a single galaxy, and then see whether it adds up to the dark matter mass!
Apparently light is actually held back from getting here by self-gravity and the other mass out there, which explains why the night sky is dark and resolves Olbers' paradox.

I'm trying to find out from Google Scholar where the figures of 27% and 5% of the universe's mass, for dark matter and for normal periodic-table matter and light respectively, come from. I am finding it difficult to discover where those numbers are proved; does anybody have a reference? I found one good survey paper by Jaan Einasto from 2013 in the Journal of Physics called "Dark Matter", but proof of those numbers, the observations that made them possible, and the analysis of the evidence seem to be missing from the field!
The gravitational lensing data have to be reanalyzed in the light of quantum gravity and m = h/(lambda*c) for the mass of the photon. This will change the gravitational lensing results by quite a bit, along with the estimates of dark matter in the galaxies. Remember that back in 1933 Zwicky's Mount Wilson data covered only visible light, and black holes could not be seen back then.

Astronaut/Commander Chris Hadfield's experiment on detecting dark matter failed on the International Space Station on his last mission!
The Voyager space probe is far out in space and is probably a little off trajectory; the retro-rocket fuel needed to correct the trajectory is a bit too expensive to use. Perhaps if dark matter existed, Voyager would be further off trajectory than it is. This should be checked.

The LIGO (Laser Interferometer Gravitational-Wave Observatory) gravitational lensing papers should be revisited in light of quantum gravity.
Two new uncertainty principles arise from f = 2*pi*c^2*m/h.
For a photon travelling in time t:
  delta t * delta m >= h^2/(4*pi^2*c^2)
  delta t * delta E >= h^2/(4*pi^2)
Both come from the signal-processing uncertainty principle delta t * delta f >= h/(2*pi).
Many thanks to Ed Jernigan, Mike Fich and Paul Wesson of the Royal Society and the University of Waterloo.
Paul actually got his PhD at Cambridge and is a good sci-fi author; Ed got his PhD at MIT and Mike got his at Berkeley.
Many thanks too to Ken Fleming of Oxford, a professor in the medical faculty,
and thanks to Graham Wright of UT in the department of medical biophysics, an MRI professor who got his degrees at Stanford and Waterloo,
and Claude Nahmias of McMaster (retired), who was in the department of Nuclear Medicine and invented Positron Emission Tomography.
I'd like to thank Polkinghorne for his beautiful little book The Quantum World; he is a Church of England minister who worked with Feynman back in the day. And of course Feynman himself, for all his work on path integrals.
I'd also like to thank Kolmogorov for all his work on real analysis, turbulence and nonlinear systems.
I'd like to posthumously thank my brother Peter Belshaw, who got his PhD from Harvard in organic chemistry, for teaching me chemistry and for his many discussions about astrophysics;
and Kathy Bowman, for unflagging support and friendship and for teaching me algebra, relations and functions; and Mrs Kewley, the Scottish maths teacher, for teaching me calculus and the history of maths and getting me interested in Newton and Archimedes, the inventor of integral calculus; and Mrs Kogan, for getting me interested in the wave equation, Young's double-slit experiment, Kepler and Tycho Brahe; and Mr Kee, for chemistry education and thermodynamics, including entropy and enthalpy.
And I'd like to thank Neil deGrasse Tyson, and especially Carl Sagan.

A spacetime-frequency distance is probably (E/c^2)*ds^2 = (E/c^2)*(dx^2 + dy^2 + dz^2) - hf*dt^2. This is also a spacetime distance; the uncertainty principles apply, and it is only valid for photons. Alternatively, (hf/c^2)*dv^2 = m*(dvx^2 + dvy^2 + dvz^2) - E, where vx is the speed in the x direction. I believe there are energy data and frequency data that should be used to check the result. It does reduce to the normal 4D Einstein spacetime ds^2 = dx^2 + dy^2 + dz^2 - c^2*dt^2 when we divide through by hf/c^2. The f*dt^2 term reminds me of a chirp in signal processing.
Spacetime-energy is quantized, energy levels and frequency are quantized, and the photon is a wavepacket.
M*ds^2 = M*(dx^2 + dy^2 + dz^2) - M*c^2*dt^2: this is mass spacetime, valid throughout the universe.
Or:
(E/c^2)*ds^2 = (E/c^2)*(dx^2 + dy^2 + dz^2) - E*dt^2: this is energy spacetime.
Basically, for macroscopic phenomena these last two equations reduce to 4D and give the normal Einstein spacetime.
For light, quantum gravity affects the path of light, and it also affects fusion research and nuclear research. I believe there might be discrepancies between the frequency and energy data, and this equation would highlight the errors, if any. It also affects the age and mass calculations for the universe.
Temperature in the desert affects the index of refraction of the atmosphere and changes the path of light, as the phenomenon of mirages shows; so temperature causes the path of light to change as well.
I believe the Feynman path integral of least action needs to be modified for the path of light by quantum gravity (the mass of the photon),
and Maxwell's equations are not quite right, because they do not take into account the phase of light or the mass and gravity of light.
Without the energy, mass and light distributions known, the path of light in the universe cannot be predicted. I would say the index of refraction is important too. This amounts to a 5D theory of spacetime and matter; the distributions of light, energy and mass need to be measured with light, and quantum gravity affects all these measurements.
With temperature, electromagnetism, the weak and strong nuclear forces, and entropy/negative entropy, this amounts to a 10D theory, and it admits the possibility of conscious life.
Indeed, if matter is conscious, alive and intelligent, then light might take the path of most negative entropy.
We also need to mention information, knowledge and DNA.


By the way, I cannot respond to posts with replies because of junior-member status and the interface here; I can only respond with a thanks or a best answer.

Also, if dark matter did exist, the PhD claim that light passes right through it without being affected is laughable, because the dark matter would have to be homogeneously distributed, and that would be impossible if it had mass: with gravity it would clump and spiral, and it would form planets and asteroids. It doesn't. Also, if it had mass and gravity and existed, it would affect the path of light and bend it with all that mass. It doesn't affect light, which has gravity and, I believe, mass too, because of the photoelectric effect and Planck and Einstein. So either it cannot exist, or it has no mass, which defeats the purpose. I still believe the mass of the universe has not been calculated accurately or correctly. Dark matter has also not been detected at CERN in the cloud chambers of the high-energy collider, so it is unlikely that such particles exist. If they were too massive to be detected at CERN, then a collider would have to be the size of the solar system to possibly detect them, and that probably won't happen for a thousand or ten thousand years. At the current time Occam's razor should be employed: the simplest explanation that might account for Olbers' paradox and the path of light is that photons have relativistic mass, however small, and that a calculational error, due to incorrect formulation of the equations, is most likely.

Thank you for the post about the amount of energy, or matter converted to energy, every second in the sun:

4x10^6 tons/second * 10^3 kg/ton * 3600 seconds/hour * 24 hours/day * 365 days/year * 10x10^9 years/sun * 100x10^9 suns ~ 1.26x10^38 kg in the Milky Way, approximately. There is also all the mass of the suns themselves, the solar wind, the neutrinos and the black holes.
And after 10 billion years, our sun becomes a red giant.
I still think the light distribution of the sun should be checked and included in this calculation, as there may be many reactions in the sun that are not fusion, and the sun probably accretes matter over time.
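The arithmetic above can be checked in a few lines, using the post's round numbers (4x10^6 tonnes per second per sun, 10 billion years, 100 billion suns), not precise astrophysical values:

```python
# Mass radiated as light by a Milky-Way-like galaxy over 10 Gyr,
# reproducing the post's back-of-envelope figure via E = mc^2.
# All inputs are the post's round numbers, not measured values.
mass_rate_per_sun = 4e6 * 1e3          # kg/s, converted from tonnes/s
seconds_per_year = 3600 * 24 * 365     # ~3.15e7 s
years = 10e9                           # 10 billion years per sun
suns = 100e9                           # 100 billion suns per galaxy

total_kg = mass_rate_per_sun * seconds_per_year * years * suns
print(f"{total_kg:.3g} kg")            # ~1.26e38 kg, matching the post
```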

The post about the iron ball and fusion was probably an oblique reference to my period and may have been misogynous:
the 27th element being iron, and Fe = fee = fille = girl, and it was hard to squirm out of. The Russians invented the periodic table, and no wonder it hurts the period. He probably doesn't know that in climate-change research, 27% of the emissions problem comes from fossil-fuel electric power plants globally. Curious that this is also the magic 27%, hmmm. And we have found a solution for it: capture the CO2 and turn it into diesel with the Fischer-Tropsch process.
27% is probably, or nearly, the share of people who followed Hitler blindly in WWII. It's probably the share of the population that can be hypnotized, and it is also about the share of the electorate needed to win a general election.
It is also the age of the biological clock at which the majority of women decide to get married and have children.

5
New Theories / Suns shining for 10 billion years might account for mass of dark matter
« on: 25/01/2022 17:07:43 »
E = m*c^2 and E = hf: Einstein's formula and Planck's formula for the energy of a photon together give m = hf/c^2, so photons have an equivalent mass. This is quantum-mechanical duality: light is both mass and energy. E = m*c^2 means mass is equivalent to energy, usually in a different phase; light is pure energy with an equivalent mass. If suns shine for 10 billion years, that is a great deal of mass out in space when you consider all the galaxies and all the photons emitted over all past time. This could account for all the extra mass that does have gravity. Light does have momentum: in the photoelectric effect, photons absorbed by a solar panel exchange momentum with it, and there is a measurable force and displacement on the panel. Light has equivalent mass and self-gravity. It is very small for a single photon, but not for the number of photons emitted over 10 billion years by an entire galaxy; I'm sure that is quite a considerable amount of mass and gravity. Similarly, a black hole attracts, absorbs and consumes light; if light had no gravity, it would not be attracted to black holes, which implies it has mass or the equivalent of it. Light probably attracts mass and is attracted by it; this would be a modification of general relativity, and it would be quantum gravity. I see the way forward as stopping nonsensical dark matter and dark energy research and instead mapping the light spectrum and the spacetime locations of energy and mass, so that we can predict the orbits of galaxies and of their suns. Dark matter is light and dark energy is light; most of it cannot be seen because it is not in the visible spectrum: radio waves, microwaves, infrared, ultraviolet, T-rays, X-rays, gamma rays and cosmic rays. The frequency of some of these is very high, and that is quite a lot of mass out there. I believe there has been a misunderstanding.
It can actually be seen with special detectors on satellite telescopes; the Chandra satellite carries a telescope and an X-ray camera. There is also Young's double-slit experiment: with one slit, firing photons at it separately creates a bullet pattern on a film on the other side of the slit; with two slits, firing single photons produces an interference pattern on the film with magnitudes commensurate with a probability distribution. This is quantum-mechanical duality, where light is both a particle and a wave. And you have two eyes to see with, which create a probability-magnitude interference pattern in your visual cortex, giving depth vision and a mapping of spacetime into your cognition and memory.
Occam's razor says that the simplest hypothesis that explains the observed phenomena is probably true.
Try a different approach to the calculation: compute the number of photons emitted by a star in a second from its spectrum, calculate m = hf/c^2 for each frequency in the spectrum, then multiply up to 10 billion years (the average age of suns), and then by 100 billion suns for a galaxy like the Milky Way. Suns also accrete matter and hydrogen over millennia. See how much mass that becomes for a single galaxy, and then see whether it adds up to the dark matter mass!
Apparently light is actually held back from getting here by self-gravity and the other mass out there, which explains why the night sky is dark and resolves Olbers' paradox.

I'm trying to find out from Google Scholar where the figures of 27% and 5% of the universe's mass, for dark matter and for normal periodic-table matter and light respectively, come from. I am finding it difficult to discover where those numbers are proved; does anybody have a reference? I found one good survey paper by Jaan Einasto from 2013 in the Journal of Physics called "Dark Matter", but proof of those numbers, the observations that made them possible, and the analysis of the evidence seem to be missing from the field!
The gravitational lensing data have to be reanalyzed in the light of quantum gravity and m = h/(lambda*c) for the mass of the photon. This will change the gravitational lensing results by quite a bit, along with the estimates of dark matter in the galaxies. Remember that back in 1933 Zwicky's Mount Wilson data covered only visible light, and black holes could not be seen back then.

Astronaut/Commander Chris Hadfield's experiment on detecting dark matter failed on the International Space Station on his last mission!
The Voyager space probe is far out in space and is probably a little off trajectory; the retro-rocket fuel needed to correct the trajectory is a bit too expensive to use. Perhaps if dark matter existed, Voyager would be further off trajectory than it is. This should be checked.

The LIGO (Laser Interferometer Gravitational-Wave Observatory) gravitational lensing papers should be revisited in light of quantum gravity.
Two new uncertainty principles arise from f = 2*pi*c^2*m/h.
For a photon travelling in time t:
  delta t * delta m >= h^2/(4*pi^2*c^2)
  delta t * delta E >= h^2/(4*pi^2)
Both come from the signal-processing uncertainty principle delta t * delta f >= h/(2*pi).
Many thanks to Ed Jernigan, Mike Fich and Paul Wesson of the Royal Society and the University of Waterloo.
Paul actually got his PhD at Cambridge and is a good sci-fi author; Ed got his PhD at MIT and Mike got his at Berkeley.
Many thanks too to Ken Fleming of Oxford, a professor in the medical faculty,
and thanks to Graham Wright of UT in the department of medical biophysics, an MRI professor who got his degrees at Stanford and Waterloo,
and Claude Nahmias of McMaster (retired), who was in the department of Nuclear Medicine and invented Positron Emission Tomography.
I'd like to thank Polkinghorne for his beautiful little book The Quantum World; he is a Church of England minister who worked with Feynman back in the day. And of course Feynman himself, for all his work on path integrals.
I'd also like to thank Kolmogorov for all his work on real analysis, turbulence and nonlinear systems.
I'd like to posthumously thank my brother Peter Belshaw, who got his PhD from Harvard in organic chemistry, for teaching me chemistry and for his many discussions about astrophysics;
and Kathy Bowman, for unflagging support and friendship and for teaching me algebra, relations and functions; and Mrs Kewley, the Scottish maths teacher, for teaching me calculus and the history of maths and getting me interested in Newton and Archimedes, the inventor of integral calculus; and Mrs Kogan, for getting me interested in the wave equation, Young's double-slit experiment, Kepler and Tycho Brahe; and Mr Kee, for chemistry education and thermodynamics, including entropy and enthalpy.
And I'd like to thank Neil deGrasse Tyson, and especially Carl Sagan.

A spacetime-frequency distance is probably (E/c^2)*ds^2 = (E/c^2)*(dx^2 + dy^2 + dz^2) - hf*dt^2. This is also a spacetime distance; the uncertainty principles apply, and it is only valid for photons. Alternatively, (hf/c^2)*dv^2 = m*(dvx^2 + dvy^2 + dvz^2) - E, where vx is the speed in the x direction. I believe there are energy data and frequency data that should be used to check the result. It does reduce to the normal 4D Einstein spacetime ds^2 = dx^2 + dy^2 + dz^2 - c^2*dt^2 when we divide through by hf/c^2. The f*dt^2 term reminds me of a chirp in signal processing.
Spacetime-energy is quantized, energy levels and frequency are quantized, and the photon is a wavepacket.
M*ds^2 = M*(dx^2 + dy^2 + dz^2) - M*c^2*dt^2: this is mass spacetime, valid throughout the universe.
Or:
(E/c^2)*ds^2 = (E/c^2)*(dx^2 + dy^2 + dz^2) - E*dt^2: this is energy spacetime.
Basically, for macroscopic phenomena these last two equations reduce to 4D and give the normal Einstein spacetime.
For light, quantum gravity affects the path of light, and it also affects fusion research and nuclear research. I believe there might be discrepancies between the frequency and energy data, and this equation would highlight the errors, if any. It also affects the age and mass calculations for the universe.
Temperature in the desert affects the index of refraction of the atmosphere and changes the path of light, as the phenomenon of mirages shows; so temperature causes the path of light to change as well.
I believe the Feynman path integral of least action needs to be modified for the path of light by quantum gravity (the mass of the photon),
and Maxwell's equations are not quite right, because they do not take into account the phase of light or the mass and gravity of light.
Without the energy, mass and light distributions known, the path of light in the universe cannot be predicted. I would say the index of refraction is important too. This amounts to a 5D theory of spacetime and matter; the distributions of light, energy and mass need to be measured with light, and quantum gravity affects all these measurements.
With temperature, electromagnetism, the weak and strong nuclear forces, and entropy/negative entropy, this amounts to a 10D theory, and it admits the possibility of conscious life.
Indeed, if matter is conscious, alive and intelligent, then light might take the path of most negative entropy.
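As a quick illustration of how small the m = hf/c^2 "equivalent mass" is per photon, here is a sketch over a few bands; the choice of bands and the CODATA constants are mine, not from the post:

```python
# Per-photon "equivalent mass" m = h*f/c^2 across a few bands, as the
# post suggests computing over the spectrum.  Constants are CODATA
# values; the bands are illustrative choices of my own.
h = 6.62607015e-34   # Planck constant, J*s
c = 2.99792458e8     # speed of light, m/s
eV = 1.602176634e-19 # joules per electron-volt

bands = {
    "radio (100 MHz)": 1e8,                  # Hz
    "green light (~550 nm)": c / 550e-9,     # Hz
    "gamma ray (1 MeV)": 1e6 * eV / h,       # Hz
}

for name, f in bands.items():
    print(f"{name}: m = {h * f / c**2:.3e} kg")
```

Even a 1 MeV gamma ray comes out under 2e-30 kg, which is why the post's argument leans on the enormous photon counts over galactic timescales rather than any single photon.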

6
New Theories / Dark Matter may not exist! A correction to relativity might solve it and olbers!
« on: 05/05/2021 06:27:35 »
If light has self-gravity, and each photon's apparent mass is hf/c^2, and relativity is adjusted for this, then we have a 5D model with energy/mass as the fifth dimension alongside spacetime (x, y, z, ct). Olbers' paradox might then be solved, explaining why the night sky is mostly dark, and explaining that light attracts itself.

7
New Theories / Least Percent Error, NOT Least Squares Error
« on: 12/02/2021 04:55:50 »
Least Percent Error
F = sum over i = 1..N of (1 - (m*ln(xi) + b)/ln(yi))^2
dF/dm = sum (1 - (m*ln(xi) + b)/ln(yi)) * (ln(xi)/ln(yi)) = 0
dF/db = sum (1 - (m*ln(xi) + b)/ln(yi)) * (1/ln(yi)) = 0

b = [ sum(1/ln(yi)) - m*sum(ln(xi)/ln(yi)^2) ] / sum(1/ln(yi)^2)

m = [ sum(1/ln(yi)^2)*sum(ln(xi)/ln(yi)) - sum(1/ln(yi))*sum(ln(xi)/ln(yi)^2) ] / [ sum(1/ln(yi)^2)*sum(ln(xi)^2/ln(yi)^2) - (sum(ln(xi)/ln(yi)^2))^2 ]

where m is the slope of the linear logarithmic line of least percent error among the data points
and b is the y-intercept of the linear logarithmic line of least percent error.

This equation is useful in chemistry, astrophysics, image processing and MRI contrast enhancement for finding a line that is resistant to quantum noise and works over many decades or scales, where a line of least-squares error would be useless. It might even be generally and theoretically more desirable than least-squares error for all data sets. The distribution of error about each xi can be found with the software here:
https://library.wolfram.com/infocenter/MathSource/9086/
I first derived this in 1982, though it may have been seen elsewhere before or since; I used it throughout university from 1983 to 1988.
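Here is a short sketch implementing the closed-form m and b above. The code is mine, not the 1982 original; note the criterion is simply a weighted least-squares fit of ln(y) on ln(x) with weights 1/ln(yi)^2:

```python
# "Least percent error" log-log fit: minimize
#   F = sum_i (1 - (m*ln(x_i) + b)/ln(y_i))^2,
# i.e. weighted least squares of ln(y) on ln(x), weights 1/ln(y_i)^2.
import math

def least_percent_fit(xs, ys):
    Lx = [math.log(x) for x in xs]
    Ly = [math.log(y) for y in ys]
    A = sum(lx / ly for lx, ly in zip(Lx, Ly))          # sum ln(x)/ln(y)
    B = sum((lx / ly) ** 2 for lx, ly in zip(Lx, Ly))   # sum ln(x)^2/ln(y)^2
    C = sum(lx / ly**2 for lx, ly in zip(Lx, Ly))       # sum ln(x)/ln(y)^2
    D = sum(1 / ly for ly in Ly)                        # sum 1/ln(y)
    E = sum(1 / ly**2 for ly in Ly)                     # sum 1/ln(y)^2
    m = (E * A - D * C) / (E * B - C * C)               # slope
    b = (D - m * C) / E                                 # intercept
    return m, b

# Exact power-law data, ln(y) = 2*ln(x) + 1, should be recovered exactly.
xs = [2.0, 3.0, 5.0, 10.0, 100.0]
ys = [math.exp(2 * math.log(x) + 1) for x in xs]
m, b = least_percent_fit(xs, ys)
print(m, b)   # ~2.0, ~1.0
```

The weights 1/ln(yi)^2 are what makes the fit track relative rather than absolute error across many decades of y.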

8
New Theories / Maybe SynGas power plants could save us from CO2
« on: 20/01/2021 20:13:06 »
I have written a book, "Ocean Acidity Climate Shock", arguing that at a CO2 concentration of 493 ppm the ocean pH will suddenly become more acidic. Along with temperature changes, this would probably kill the phytoplankton, algae and other ocean plants, causing the loss of half the world's oxygen (O2) supply and killing the base of the food chain in the ocean, with the loss of the fisheries and a lot more.
Approximately 70% of the CO2 rise is caused by coal, natural gas and oil power plants worldwide. If a SynGas power plant were paired with each hydrocarbon power plant, it could work synergistically with it: capture all its CO2, convert that gas catalytically into carbon monoxide (CO), and mix the CO with hydrogen (H2) from electrolysis of water (H2O), creating syngas fuel that can be burned in gas turbines just like natural gas to generate electricity. Cryogenics can be used to separate the nitrogen oxides, CO2 and H2O cheaply. The hydrocarbon plant would supply enough power to bootstrap the SynGas plant, and once working it should tick over nicely. Careful design of the symbiosis is required, but possible. Similar chemistry is used on the space station to generate oxygen and water and to remove CO2 from the capsule atmosphere.
The first SynGas power plant could be designed and built for 50-60 million pounds. Subsequent plants would cost roughly 5-10 million pounds at the margin, about the same as a natural-gas power plant, and deliver approximately the same power. Propane-style tanks would be needed to store the syngas fuel.
Worldwide, this approach could stop CO2 increases dead in their tracks, be economically viable, and provide jobs for a long time. We have about 10-20 years to accomplish this task before time runs out. Covid has temporarily caused a near-cessation of daily CO2 increases, but barely, owing to the lack of car and truck travel under the pandemic. This is a temporary hiatus; once the virus has been stopped worldwide, the increases in CO2 will come back.
Hooray for Joe and Kamala: the Paris Climate Change Accord is going to be observed in America again as of today!
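For scale, here is a stoichiometric sketch of the CO2-to-CO step the post describes, assuming it is done by the reverse water-gas shift (CO2 + H2 -> CO + H2O); the molar masses are standard values, and the per-tonne framing is my own:

```python
# Stoichiometry sketch for CO2 + H2 -> CO + H2O (reverse water-gas
# shift), the CO2-to-CO conversion step described in the post.
# Molar masses in g/mol (standard values).
M_CO2, M_H2, M_CO = 44.01, 2.016, 28.01

tonnes_co2 = 1.0
mol_co2 = tonnes_co2 * 1e6 / M_CO2          # mol of CO2 in one tonne
h2_for_shift_kg = mol_co2 * M_H2 / 1e3      # H2 consumed making CO
co_kg = mol_co2 * M_CO / 1e3                # CO produced

print(f"{h2_for_shift_kg:.1f} kg H2 and {co_kg:.1f} kg CO per tonne CO2")
```

Roughly 46 kg of electrolysis hydrogen per tonne of captured CO2 goes into the shift step alone, before any extra H2 is blended in for the syngas; that hydrogen budget dominates the energy cost of the scheme.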

9
New Theories / Fermat,Feynman,Hawking,Penrose and the optimal path of light
« on: 16/01/2021 23:53:41 »
We need to distinguish between feel-good, tame, biological light, like humour that may be moist, and harsh radioactive light that might hurt or kill a biological entity. John Dowland gave the first reference to tame light in his lute-music lyrics.
Fermat is arguably very famous for his jurisprudence and his principle of least time. He may actually be the father of quantum mechanics because of it.
Feynman updated it and created the path integral of least action, which actually works with harsh light. Feynman died of cancer.
The path of least gravity and least time might actually be best very often.
Ultimately, in the far future, the path of least entropy and most negative entropy might be the most intelligent path for conscious living light, but that might be a bit greedy!
Food for thought.
And what about the path of least harsh light and best sound, or best smell, best touch, best taste, best feeling and best thought? Is this realistic, or true, or purely imaginary?
I suggest you ask the Dalai Lama what conscious living light is! Have you heard about the way they remove gallstones and kidney stones by chanting? Ultrasonics!

10
New Theories / Alternate take on Fermats Last Theorem
« on: 01/01/2021 15:36:06 »
No offense to Andrew Wiles and his very interesting but incomprehensible proof, or to all those pure mathematicians who demand an exact solution when an infinitesimally close approximation will suffice.

a^n + b^n = c^n
For any integers a, b, c and any integer n greater than 2, the theorem states that there are no solutions.
Wiles proves this in his magnum opus, and indeed there are no exact solutions.
But look:
2^5 + 3^5 = 3.07515165743^5
If we admit real solutions,
then we can say that for some integer k, (10^k*a)^n + (10^k*b)^n ~ (10^k*c)^n, where 10^k*c is truncated to k+1 digits so that it becomes an integer.
So we could write
2000000000^5 + 3000000000^5 ~ 3075151657^5,
which is accurate to about 1 part in 10^9, accurate enough for most scientific and engineering calculations.
So if we are pragmatic, there is an infinite number of infinitesimally close solutions to Fermat's Last Theorem for every n. k should be greater than about 6 to achieve an accuracy of 1 part per million.
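The near-miss above is easy to verify numerically; a quick sketch, using the same a = 2, b = 3, n = 5, k = 9 as the example:

```python
# Check the "infinitesimally close" near-solutions to a^n + b^n = c^n.
a, b, n = 2, 3, 5
c = (a**n + b**n) ** (1 / n)   # real fifth root of 275
print(c)                       # ~3.0751516574...

# Scale by 10^k and truncate c to an integer, as described above.
k = 9
A, B = a * 10**k, b * 10**k
C = int(c * 10**k)             # 3075151657
lhs, rhs = A**n + B**n, C**n   # exact integer arithmetic
rel_err = abs(lhs - rhs) / lhs
print(rel_err)                 # tiny: truncating c costs ~n parts in 10^(k+1)
```

The relative error of the truncated near-solution scales like n * 10^-(k+1), so choosing k sets the accuracy directly.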

best wishes and happy new year
Richard Belshaw
aka
Maeve Chondrally

11
New Theories / Entropy of a statistical distribution and information temperature
« on: 18/12/2020 22:10:58 »
s = c*NkT * integral from -inf to inf of ln(p(x)*(1 - p(x))) dx
c is a constant based on the normal distribution; N is the number of moles, or it could be the number of data points in a numerical distribution; k is Boltzmann's constant; and T is the temperature in kelvin. This formula is derivable from statistical mechanics and came from examining equations in Statistical Mechanics, 3rd edition, by Raj Pathria.
If you just want to assess the entropy of the normal distribution, then
set N = 1 mol = 6.022x10^23 particles per mole, k = Boltzmann's constant and T = 298.15 K,
and p(x) = 1/sqrt(2*pi)*exp(-x^2/2), where x = tan(theta) and dx = sec^2(theta) dtheta.
This can be discretized with h = (pi/2)/1025 so that it can be numerically integrated with Simpson's method (5th order or 2nd order, as you wish), but do not include the +/- pi/2 endpoint values; use everything else.
So the transformed integrand is p(theta) = 1/sqrt(2*pi)*exp(-tan(theta)^2/2)*sec(theta)^2.
Calculate this value for every point from -pi/2 to pi/2 except the two endpoints and sum it all up with Simpson's method to see what s is.
Numerical distributions, and all other distributions, can be handled with this method too.
If the entropy value is within 1% of the entropy of a different data set, then at the 1% level it is a normal distribution. Differences in mean and standard deviation should be filtered out of numerical data sets by normalizing the data before testing the entropy, to see which distribution the data most resemble.
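Two caveats before coding this up: the integrand ln(p(x)*(1-p(x))) as written diverges in the tails (ln p ~ -x^2/2, which integrates to -infinity), and Simpson's rule needs the endpoint values, which blow up under the tan substitution. So as a concrete stand-in, this sketch computes the closely related (and convergent) differential entropy -integral of p*ln(p) dx with the same x = tan(theta) substitution, using a composite midpoint rule so the endpoints are naturally avoided; both substitutions of mine are deliberate departures from the post:

```python
# Differential entropy of the standard normal, -int p ln p dx, via the
# tan-substitution the post describes: x = tan(theta), dx = sec^2 dtheta,
# theta over (-pi/2, pi/2).  Midpoint rule instead of Simpson, because
# Simpson needs the endpoint values, which are singular here.
import math

def integrand(theta):
    x = math.tan(theta)
    lnp = -x * x / 2 - 0.5 * math.log(2 * math.pi)  # ln of normal pdf
    p = math.exp(lnp)          # harmlessly underflows to 0.0 in far tails
    return -p * lnp / math.cos(theta) ** 2          # -p ln p * sec^2

n = 2048                       # number of midpoint panels
h = math.pi / n                # step over (-pi/2, pi/2)
thetas = [-math.pi / 2 + (i + 0.5) * h for i in range(n)]
entropy = h * sum(integrand(t) for t in thetas)
print(entropy)                 # ~1.4189 = 0.5*ln(2*pi*e), the known value
```

Matching the known closed form 0.5*ln(2*pi*e) is a good check that the change-of-variables machinery is wired up correctly before applying it to empirical distributions.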
best regards,
Maeve Chondrally
aka Chondrally


©The Naked Scientists® 2000–2017 | The Naked Scientists® and Naked Science® are registered trademarks created by Dr Chris Smith. Information presented on this website is the opinion of the individual contributors and does not reflect the general views of the administrators, editors, moderators, sponsors, Cambridge University or the public at large.