In a new planetary science study published in the British journal Nature Communications on the 21st, a team of Japanese astronomers confirmed that the Moon was bombarded by a shower of meteoroids about 800 million years ago. Using crater scaling laws and collision probabilities, they showed that asteroid fragments with a total mass of roughly 4×10^16 to 5×10^16 kilograms must also have collided with the Earth.
Erosion and resurfacing processes on the Earth make it difficult for scientists to study ancient meteoroid impacts and to determine their dates. There is, however, another way to understand these impacts: studying craters on the Moon, where the effects of weathering and erosion are greatly reduced. Astronomer Kentaro Terada and his colleagues at Osaka University used data from the Japan Aerospace Exploration Agency's lunar probe Kaguya (the "Moon Goddess") to estimate the formation times of 59 craters more than 20 kilometers in diameter on the lunar surface. Launched in 2007, Kaguya's success marked Japan's first step on the road to the Moon. The probe carries as many as 15 observation instruments that can analyze the Moon's chemical composition, mineral distribution, surface features and more, and the data it collects are used to study the origin of the Moon and to help infer the evolution of the Earth. From the probe's data, the research team found that 8 of the 59 craters larger than 20 kilometers formed at the same time, including the Copernicus crater. Combining radiometric dating of material ejected from Copernicus with data from impact glass spherules (glass beads formed by meteorite impacts) returned by several Apollo missions, they concluded that the Moon experienced an asteroid shower about 800 million years ago. From the collision probabilities, the team estimated that the total mass of these meteoroids was about 30 to 60 times that of the impactor that caused the Chicxulub crater, and that the shower occurred just before the Cryogenian glacial period (about 720 million to 635 million years ago). The researchers further argued that since an asteroid shower struck the Moon, a similar event must have occurred on the Earth.
They used crater scaling laws and collision probabilities to show that asteroids with a total mass of roughly 4×10^16 to 5×10^16 kilograms collided with the Earth.
China's first Mars probe is about to launch. Mars is the planet in the solar system most similar to the Earth, and the prospect of "migrating to Mars" has long appeared in science fiction. Intriguingly, "Martian archaeology" holds that hundreds of millions of years ago Mars was a blue ocean planet like the Earth. Scientific research has indeed shown that Mars once had an environment suitable for life, with a thick atmosphere and large amounts of liquid water on its surface; but after hundreds of millions of years of stripping by the solar wind, today's Mars retains only a thin atmosphere composed mainly of CO2.
Since the 1960s, mankind has embarked on the bold journey of exploring Mars. The countries and regions that have taken part include the United States, the Soviet Union/Russia, Europe, China, Japan and India. Across 44 exploration missions, attempts were made step by step to fly by Mars, orbit Mars, land on Mars and rove its surface. However, only 48% of these missions fully or partially achieved their goals (including 16 from the United States, 2 from the Soviet Union/Russia, 2 from Europe and 1 from India). Among them, the U.S. missions Mariner 4, Mariner 9 and Sojourner, together with the Soviet Mars 3, respectively made history as humanity's first successful Mars flyby, Mars orbit, Mars landing and Mars surface roving. The Mars rover is about to embark on its journey. From July to August this year, three countries set out for Mars: the United States, China and the United Arab Emirates. Among them, China's Tianwen-1 is expected to be the first deep-space mission in human history to attempt, in a single launch, the three goals of orbiting Mars (orbiter), landing on Mars (lander) and exploring the surface (rover). To achieve this, Tianwen-1's flight is divided into five stages, as shown in Figure 2: an Earth-Mars transfer stage of about 7 months; a Mars capture stage of about 10 days; a Mars orbit parking stage of 2 to 3 months; a deorbit and landing stage of about 5 hours, in which the lander separates from the orbiter and descends; and finally about 90 Martian days of surface exploration by the rover alongside roughly one Martian year of scientific exploration. For comparison, the only countries in human history that have successfully sent a rover to the Moon are the United States, the Soviet Union and China, which shows just how difficult landing on another body is.
However, exploring Mars is even harder than exploring the Moon. Why is Mars exploration so difficult? One important reason is that deep-space exploration faces a very complex environment. In interplanetary space, probes face the threats of the radiation environment, the solar-wind plasma environment and the electromagnetic radiation environment; in near-Mars space, they face challenges from the Martian ionosphere, the neutral atmosphere and the surface radiation environment. NASA Chief Engineer Terry Onsager said: "To realize Mars exploration, both the probe and human beings face severe challenges from the heliospheric environment, and we are ready." The threat of the radiation environment. First, the in-orbit safety of a deep-space probe is threatened by radiation, whose main sources are solar energetic particles and galactic cosmic rays. The Moon is about 400,000 kilometers from the Earth, while Mars at its farthest is about 400 million kilometers away, and the Earth-Mars transfer stage alone takes seven months. During this stage the probe is outside the shielding of the Earth's magnetic field and atmosphere, so high-energy charged particles (and a small number of neutral particles) can bombard it unhindered, posing considerable hazards to its normal operation, such as single-event effects and displacement damage. Particle radiation protection is therefore a crucial part of any deep-space exploration program. The radiation environment at Mars itself also matters: the radiation dose in Mars orbit is about 2 to 3 times that in the orbit of the International Space Station, which will put enormous pressure on radiation protection for future crewed Mars exploration.
The challenge of the plasma environment. Second, the tracking and command communication of a deep-space probe faces the challenge of the deep-space plasma environment, whose main source is the solar wind filling the entire heliosphere. After launch, the only link between a deep-space probe and the Earth is the deep-space tracking and communication system, which transmits scientific and telemetry data, tracks the probe and directs it to perform critical tasks. This communication relies on electromagnetic waves propagating between the probe and the Earth. A signal between the Earth and a probe on the Moon takes only about 1.35 seconds, but a radio signal traveling from the Earth to a Mars probe at its greatest distance takes up to 22 minutes one way. The solar-wind plasma along the line from the probe to the Earth significantly affects the propagation of these waves: irregular plasma clumps can refract and reflect them, causing amplitude scintillation, spectral broadening, phase scintillation and so on. The solar-wind plasma environment, especially eruptive solar activity such as interplanetary coronal mass ejections, therefore has a very important impact on deep-space communication. The influence of the solar electromagnetic radiation environment. Furthermore, the probe's tracking and communication is also affected by the Sun's electromagnetic radiation.
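The one-way delays quoted above follow directly from the speed of light, which sets a hard floor on how quickly ground control can react to anything a probe encounters. A minimal sketch, using the rounded distances from the text:

```python
# One-way radio signal travel time at the distances quoted above.
c = 299_792.458  # speed of light, km/s

moon_km = 400_000          # approximate Earth-Moon distance
mars_max_km = 400_000_000  # Earth-Mars distance at its farthest

delay_moon = moon_km / c           # ~1.3 seconds
delay_mars = mars_max_km / c / 60  # ~22 minutes
print(f"Moon: {delay_moon:.2f} s, Mars (max): {delay_mars:.1f} min")
```

A command-and-response round trip at maximum Earth-Mars distance thus takes about three quarters of an hour, which is why the landing sequence must run autonomously.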
Because of the differences in their orbits, for about 20 days when Mars passes near the far side of the Sun (the black area shown in the figure), radio signals must pass through the strong radiation region near the Sun; communication can be blocked or disturbed, and the probe is effectively "lost". If a dangerous situation arises or precise orbital maneuvers are required during this period, the Earth cannot deliver any effective instruction to the probe, and everything depends on the probe's onboard autonomy. The Moon has no atmosphere; a lunar lander decelerates from an altitude of about 15 kilometers and touches down in roughly 12 minutes. The environment a Mars lander must handle is far more complicated: it has to decelerate from an altitude of about 125 kilometers and land on the surface in about 7 minutes. Because there are not enough data to fully simulate the Martian atmosphere and dust, unknown environmental factors add further technical challenges. Near Mars, the probe's orbit determination accuracy and velocity measurement error are about 100 kilometers and 1-10 m/s respectively (data quoted from [1]); with errors this large and communication delays this long, landing on Mars clearly places very high demands on the probe's autonomous control system. Blessing: the Space Environment Prediction Center of the Chinese Academy of Sciences has undertaken space environment support for China's space missions, including crewed spaceflight, the lunar exploration program and space science satellites, and is now working closely with user units to provide deep-space environment prediction and support services for Mars exploration. Humanity's history of deep-space exploration has advanced arduously, step by step.
Each deep-space exploration mission carries the dreams and efforts of generations, overcoming countless difficulties and obstacles, and the precious data obtained have greatly advanced science. Let us wish well the three travelers still racing toward Mars through the epidemic: China's Tianwen-1, the United States' Perseverance and the UAE's Hope! Both the United States and China have announced plans to collect samples from Mars and return them to Earth by around 2030. In the more distant future, perhaps we really will build a Mars space station and Mars ecological habitats, and finally realize the human dream of migrating to Mars! Perhaps the most powerful, and most astonishing, aspect of physics is the universality of its laws and theories.
A handful of equations can explain phenomena from the edge of the universe and its unfathomable future all the way back to the first instants of the Big Bang. Let us see just how powerful modern physics is. The gravity game. Einstein's general theory of relativity is our modern theory of how gravity works: matter and energy bend space-time, and the curvature of space-time tells matter how to move. The mathematics is somewhat involved: it takes a set of 10 interrelated equations to describe all of this bending, warping and motion. But those equations carry enormous power. In the weak-gravity limit, Einstein's equations reduce to the familiar Newtonian formula for gravity, which explains the trajectory of everything from a thrown baseball to the water behind a hydroelectric dam. Beyond the Earth's surface, Einstein's equations provide the corrections needed for accurate GPS positioning and precisely predict the orbits of all the planets. And these exact same equations, without any modification, go on to reveal still more: the existence and behavior of black holes, the growth of the largest structures in the universe, the presence of dark matter in galaxies, and the Big Bang itself, which implies that the universe has a finite age. All of this comes from one set of 10 equations spanning the space and time of the universe. Nuclear energy. When physicists began cracking the nuclear code in the 1940s, they did not know that their tireless efforts would solve one of the most puzzling mysteries in astronomy: how stars work. Before then, scientists had tried every known piece of physics to reconcile the age of the Earth revealed by geology and paleontology with the question of why the Sun keeps shining so brightly.
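The GPS correction mentioned above can be estimated with standard weak-field formulas. The sketch below uses textbook constants and a nominal GPS orbital radius; none of the numbers come from this article:

```python
# Back-of-envelope estimate of the daily clock drift of a GPS
# satellite relative to ground clocks, from weak-field formulas.
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
M = 5.972e24       # mass of the Earth, kg
c = 2.998e8        # speed of light, m/s
r_earth = 6.371e6  # mean radius of the Earth, m
r_sat = 2.657e7    # GPS orbital radius (~20,200 km altitude), m
day = 86400.0      # seconds per day

v2 = G * M / r_sat                                   # orbital speed squared
sr = -v2 / (2 * c**2) * day                          # special relativity: satellite clock runs slow
gr = G * M * (1 / r_earth - 1 / r_sat) / c**2 * day  # general relativity: satellite clock runs fast

print(f"SR {sr * 1e6:+.1f} us/day, GR {gr * 1e6:+.1f} us/day, "
      f"net {(sr + gr) * 1e6:+.1f} us/day")
```

The net drift of a few tens of microseconds per day would translate into kilometers of positioning error within a day if it were not corrected.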
These attempts usually ended in failure; even the best explanations could sustain the Sun for only a few million years. Nuclear physics was a brand-new game. Once physicists discovered the conditions required to ignite nuclear fusion (ultra-high pressure, temperature and density), they realized that such conditions are not only artificial (nuclear bombs and reactors) but exist naturally in the universe: in the cores of stars. Hydrogen fusion is how stars supply energy for billions of years, and physicists use the same equations to understand how nuclear reactions are turned into usable energy, from the smallest atoms to the largest stars. Nuclear physics, a relative newcomer within physics, ties the universe together in surprising ways. The laws of motion. But one does not need the deep equations of relativity or complex nuclear reaction calculations to see the universality of physics. It can be as simple and direct as a car accident. When two vehicles collide, the laws of conservation of energy and momentum apply: the total energy and momentum before the collision must equal the total energy and momentum after it. From these simple statements, investigators can reconstruct an accident scene and determine which driver was at fault and what caused the collision. Cars are not the only things that collide in the universe: stars collide, galaxies merge, gas clouds mix. Indeed, few astronomy or physics papers fail to invoke the conservation of energy and momentum; scientists use these principles to understand everything in the universe. Why does a gas cloud radiate energy? Why does a neutron star change its spin rate? What happens when two galaxies collide? In every case, the answer must respect the conservation of energy and momentum.
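The accident-reconstruction idea reduces to a single conservation equation. A toy sketch for a perfectly inelastic head-on collision, with all masses and speeds invented purely for illustration:

```python
# Perfectly inelastic collision: two cars lock together on impact.
# Momentum is conserved; kinetic energy is not (the difference goes
# into deformation, heat and sound).
m1, v1 = 1500.0, 20.0   # car 1: mass (kg), velocity (m/s)
m2, v2 = 1000.0, -10.0  # car 2: mass (kg), velocity (m/s), opposite direction

v_final = (m1 * v1 + m2 * v2) / (m1 + m2)  # shared velocity after impact
ke_before = 0.5 * m1 * v1**2 + 0.5 * m2 * v2**2
ke_after = 0.5 * (m1 + m2) * v_final**2

print(v_final)               # 8.0 m/s
print(ke_before - ke_after)  # 270000.0 J dissipated in the crash
```

From the wreckage's final speed and direction, an investigator can run this arithmetic backwards to recover the pre-impact velocities; the same bookkeeping applies when two galaxies merge.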
The next time you witness a collision, take a moment to think about how the vehicles obey the conservation of momentum at the instant of impact, and how the same principle applies throughout the universe, no matter where you are. How does habitat size affect the abundance of all the species living in a community? Digging into this question yields ecological insight and is also valuable for designing strategies to promote biodiversity. Chase et al. report in the journal Nature the results of a study that may help settle the long-standing debate about the relationship between habitat area and the species diversity a habitat can support.
Land-use change caused by human activity is a major component of global change. The loss of natural habitat reduces the diversity and abundance of local species and has been linked to more than one third of the world's animal extinctions between 1600 and 1992. A report by the Intergovernmental Science-Policy Platform on Biodiversity and Ecosystem Services estimates that more than 500,000 species (about 9% of all terrestrial species) may currently lack the amount of habitat they need for long-term survival. If these species disappear, many key ecosystem services will be damaged, such as pollination or the control of pests and pathogens. The impact of habitat loss on biodiversity is usually estimated from the relationship between area and species richness, first described more than 150 years ago. This seemingly universal relationship is simple: the larger the area of a given habitat, the more species it contains, but the number of species increases nonlinearly with area. Because the resources in an area are limited, the number of individuals of ecologically similar species is also limited. When a habitat loses part of its area, it therefore also loses, for many species, the ability to support a large enough population; as land use intensifies and habitat area shrinks, those species go extinct [8]. Chase and colleagues propose a concise and elegant way to describe the dynamics of communities occupying habitat patches of different sizes. Rather than considering only the total number of species in each habitat fragment, the authors focus on the number and relative abundance of the different species in samples taken from these fragments. In this way the structure of ecological communities can be compared directly, while avoiding the problems that arise from the different sampling effort required for large and small areas.
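The nonlinear species-area relationship described above is classically modeled as a power law, S = cA^z. A minimal sketch with illustrative constants (the exponent z is commonly quoted around 0.2 to 0.3 for habitat fragments; these values are not from Chase et al.):

```python
# Classic power-law species-area relationship: S = c * A**z.
# c and z are fitted constants; the values here are illustrative only.
def species_richness(area, c=20.0, z=0.25):
    """Expected species count for a habitat patch of a given area."""
    return c * area ** z

s_full = species_richness(100.0)  # intact habitat
s_half = species_richness(50.0)   # after losing half the area

# Species loss lags area loss because the curve is concave:
print(f"area -50% -> species down {100 * (1 - s_half / s_full):.0f}%")
```

Halving the area removes only about one sixth of the species under these constants, which is exactly why passive-sampling extrapolations can understate the damage when communities also decay in structure.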
The authors' method also allows comparison of changes in the relative abundance of individuals across all species, a measure of community structure tied to ecosystem dynamics. Thanks to this method, Chase et al. can distinguish three possible modes of change due to habitat loss (Figure 1). In the first, described by a "passive sampling" model, community structure remains the same in large and small habitat fragments. Regardless of the total area of the habitat, each sample exhibits similar richness (number of species), abundance (number of individuals) and evenness (how individuals are distributed among species). In this case, species loss simply reflects the loss of habitat area under the classical species-area theory, and the total number of species in a habitat fragment depends entirely on its size. The other two modes describe types of ecosystem decay, a hypothesis which assumes that as habitat shrinks, biological losses are disproportionately high compared with the loss of area. One type of ecosystem decay arises from excessive loss of individuals: compared with larger fragments, smaller fragments contain fewer individuals per sample, with all species affected equally. The result is a community with fewer species in the small fragments, but with the relative abundances of species within a single sample unchanged between small and large fragments. The other type of ecosystem decay is driven by uneven changes in the relative abundance of species on top of species loss. In this case, different species respond differently to habitat loss, so species abundances in small fragments increase or decrease relative to those in large fragments.
In samples from small habitat fragments, the relative abundances of species become more uneven, because a few species gain a numerical advantage, impoverishing the community and driving species loss. Using data from about 120 human-modified landscapes around the world, Chase et al. show that, in general, samples from small fragments of natural habitat contain fewer individuals and fewer species than samples from large fragments, and their species abundances are more uneven. This result matches the generalized model of ecosystem decay, driven mainly by the decrease in evenness (see Figure 1), and it holds regardless of the habitat or organism studied. It means that changes in natural habitat cause major functional changes in ecosystem dynamics, not just the loss of populations and species. Current estimates of habitat-loss extinctions based on passive sampling models may therefore underestimate not only the number of threatened or vanished species but also the impact of losing those species on ecological function and the provision of ecosystem services. The changes in biodiversity caused by habitat loss alter many ecological processes, eventually producing catastrophic effects and accelerating extinction. Local extinctions, however, often do not happen immediately. Some species persist with declining abundance and population dynamics (a phenomenon known as "extinction debt") until the last individual perishes. This can produce an uneven distribution of species abundances, which the method of Chase and colleagues demonstrates vividly: their analysis reveals a small number of dominant species and a large number of rare ones. The former dominate communities in small habitat fragments, while many of the latter may be on their way to extinction.
Declining species may be replaced by other species from the adjacent human-modified landscape, especially at habitat edges, producing the so-called "edge effect", which matters relatively more in small fragments. Indeed, in the early stages of landscape conversion, communities in small fragments differ more from the original communities than communities in large fragments do. As time passes, communities in small and large fragments grow more and more similar as they gradually recover from the impact of land conversion. According to Chase and colleagues, the difference in diversity and species abundance between large and small fragments is smaller in the older, more gently transformed European landscapes than in the newer and more dramatically transformed North American ones. This suggests that over time, species migrating in from the margins of human-altered habitat may at least partially compensate for the ecological functions performed by native species in larger habitats, allowing small fragments to reach a new, though not identical, ecological balance. Although this work emphasizes the key role of habitat area in sustaining ecosystems, it says little about how specific ecological processes change with habitat loss. Species at higher trophic levels (higher up the food chain), such as predators, need larger habitats to maintain their populations than species at lower trophic levels, so the number of individuals a small fragment can support may not be enough to sustain populations of top predators or consumers. The result is a shorter food chain and an altered ecosystem structure. Differences in extinction rates between trophic levels will cause significant changes in ecosystem function at habitat edges.
As the area of natural habitat shrinks, this endangers ecosystem functions and the provision of ecosystem services. The results of Chase and colleagues call for reconsidering an old controversy: does a single large conservation area protect more species than several small ones? Some current evidence suggests that one continuous habitat may contain fewer species than many small habitats of the same total area. However, compared with a single large protected area, those small habitats may undergo major ecological changes that eventually lead to a large-scale loss of ecosystem function and, in the long run, an increased extinction rate of native species. The method of Chase and colleagues gives a good overview of the extent of these effects, but more detail is needed to understand exactly how local ecological processes change. That will require going beyond the food chain to evaluate more complex food webs, and collecting information on how species' functional responses and trait diversity change in ever-shrinking habitats. Ultimately, this information will reveal which ecological processes are declining, and how that decline affects the maintenance of biodiversity. Not long ago, 33 major research institutions around the world submitted their outlooks for Arctic sea-ice coverage this September.
Among them, the value submitted by the State Key Laboratory of Numerical Modeling for Atmospheric Sciences and Geophysical Fluid Dynamics (LASG) of the Institute of Atmospheric Physics, Chinese Academy of Sciences, indicates that the sea-ice area will shrink to 3.8 million square kilometers in September. That would be the second-smallest value since observational records began, larger only than the 2012 record of 3.57 million square kilometers. The international community is deeply concerned about when the Arctic summer will become ice-free, that is, when the sea-ice area in the Arctic Ocean falls below 1 million square kilometers. Ren Hongli, a researcher at the Chinese Academy of Meteorological Sciences, told China Science Daily that this point may arrive within the next 20 to 30 years, although scientists still dispute when an ice-free Arctic summer will first appear. According to satellite observations, Arctic sea-ice coverage peaks in March and reaches its minimum in September. The 33 institutions' September outlooks comprise 16 numerical model predictions, 14 statistical predictions and 3 qualitative analyses, and the submitted results differ considerably. According to Wei Ke, an associate researcher at the Institute of Atmospheric Physics, Chinese Academy of Sciences, the smallest prediction, 3.2 million square kilometers, came from the University of Washington; the next, 3.5 million square kilometers, from the Geophysical Fluid Dynamics Laboratory of the National Oceanic and Atmospheric Administration; and LASG's 3.8 million square kilometers was third. All three are below 4 million square kilometers, well below the median range of all the models.
The forecasts from the ESPC system of the US Naval Research Laboratory, Norway's METNO SPARSE and France's APPLICATE CNRM are all significantly above the median range; the ESPC system's forecast, for example, is 6.2 million square kilometers. In this regard, Liu Jiping, a professor at the State University of New York, pointed out that this spring's Siberian heat wave triggered an early retreat of sea ice along the Russian coast, leaving very little ice in the Laptev and Barents Seas. At the same time, the Arctic experienced abnormally high temperatures this summer: on June 20, the daytime maximum in the city of Verkhoyansk, Siberia, reached 38°C, a remarkable value given that the historical maximum for the same period was only 22°C. It cannot be ruled out that the Arctic sea-ice area this September will fall below 4 million square kilometers. The "amplifier" of global climate change. The area of polar sea ice plays a vital role in the Earth system. Sea ice reflects up to 80% of incident sunlight and thus has a cooling effect; the extent of the ice modulates how much solar energy enters the Earth system. As temperatures rise with climate change, sea-ice melt intensifies, more of the polar-day sunlight enters the ocean, the ocean absorbs more heat and warms faster, and that in turn drives still larger-scale melting. This mutually reinforcing positive feedback acts as an "amplifier" of global climate change. Under global warming, the Arctic warms at more than twice the global average rate, a phenomenon called "Arctic amplification" that further exacerbates warming and the melting of Arctic sea ice.
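The positive feedback described above can be made concrete with one line of energy-budget arithmetic. The sketch below uses typical textbook albedo values, not measurements from this article:

```python
# Ice-albedo feedback: replacing reflective sea ice with dark open
# ocean multiplies the solar energy the surface absorbs.
solar_flux = 300.0                     # W/m^2, illustrative insolation
albedo_ice, albedo_ocean = 0.80, 0.06  # fraction of sunlight reflected

absorbed_ice = solar_flux * (1 - albedo_ice)      # 60 W/m^2
absorbed_ocean = solar_flux * (1 - albedo_ocean)  # 282 W/m^2

print(f"open ocean absorbs {absorbed_ocean / absorbed_ice:.1f}x "
      f"more solar energy than ice cover")
```

Roughly a factor of five more absorbed energy per square meter of lost ice is what makes the melt self-reinforcing.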
"The area of Arctic sea ice has shrunk by 40% compared with the 1970s, and the total ice volume has plunged by 70%," Wei Ke said. On September 25, 2019, the United Nations Intergovernmental Panel on Climate Change released the Special Report on the Ocean and Cryosphere in a Changing Climate, which assessed the latest changes in, impacts of and adaptation strategies for the ocean and cryosphere, reflecting the scientific community's current understanding of both. The report pointed out that changes in the global ocean and cryosphere are accelerating. Over the satellite-observation period from 1979 to 2018, the September sea-ice extent of the Arctic declined rapidly, at a rate of about 12.8% per decade, and the extent in this period is the smallest in at least 1,000 years. Arctic sea ice is also thinning: between 1979 and 2018, the area of thick ice more than 5 years old decreased by about 90%. As an "indicator and amplifier" of global climate change, the health and stability of the global cryosphere is a cornerstone of the stability of the climate system. Its rapid melting will inevitably have profound effects on ecosystems, coastline stability and human settlements in alpine regions, and will further modulate the global climate system, affecting the intensity and frequency of extreme events. Polar bears, walruses and whales, for example, depend on sea ice for hunting, breeding and migration; a substantial reduction in sea ice will profoundly affect the Arctic ecosystem. "Abnormal changes in the range and area of Arctic sea ice will deeply affect navigation safety and the ecosystem around the Arctic, and will have an important influence on the evolution of the mid- and low-latitude atmospheric circulation and the occurrence of extreme weather and climate events," Ren Hongli noted.
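The quoted 12.8% per-decade decline compounds over the satellite era; a quick sketch checking it against the roughly 40% total loss cited above:

```python
# Compound the reported ~12.8% per-decade decline in September
# Arctic sea-ice extent over the 1979-2018 satellite era.
# A sanity check against the ~40% total loss, not a forecast.
rate = 0.128
extent = 1.0           # normalized September extent in 1979
for _ in range(4):     # four decades: 1979 -> 2019
    extent *= 1 - rate

print(f"remaining extent: {extent:.2f} "
      f"(about {100 * (1 - extent):.0f}% loss)")
```

Four decades at that rate leaves just under 60% of the 1979 extent, broadly consistent with the figure Wei Ke cites.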
Arctic sea-ice prediction is not easy. Prediction of Arctic sea ice can provide a scientific basis and reference for monitoring global climate change and for using Arctic shipping routes, developing Arctic resources and assessing the Arctic environment. "Prediction of Arctic sea ice is becoming more and more important, and we need methods and technologies that can predict it accurately. But sea-ice prediction is still very difficult," Liu Jiping emphasized. A good prediction, he said, requires a very complex numerical model that couples the atmosphere, sea ice and ocean; the interactions among them must be considered, and a series of very complex mathematical-physical equations solved, before a prediction can be made. "We mainly use models to predict Arctic sea ice. The China Meteorological Administration's China Multi-Model Ensemble Prediction System can release Arctic sea-ice prediction products every month," Ren Hongli said. Another approach is satellite remote sensing. Liu Jiping pointed out that only high-resolution satellites can resolve the morphological evolution of sea ice and provide short-term monitoring, but high-resolution remote sensing faces great challenges in the polar regions: cloud cover greatly compromises high-resolution visible and near-infrared observations, while microwave remote sensing can penetrate clouds but has low resolution. Wei Ke said that formulating reasonable policies and measures for long-term emission reduction and short-term climate adaptation is what it means to address the impacts and risks of climate change. According to foreign media reports, if the weather corresponds to your mood, then the climate corresponds to your personality.
Scientists often use this metaphor to explain the difference between "weather" and "climate".
In other words, weather is a short-term concept describing the state of the atmosphere in a given area over a limited period (minutes, hours, days or weeks), while climate describes the average weather trend over a long period. If you are interested in climate, it helps to know some geography: the global climate is composed of regional climates, and if you keep subdividing, you will find that at any perceptible scale the climate differs from place to place. This introduces the concept of the "microclimate," a research theme with important implications for agriculture, conservation, wildlife management and urban planning. Scale matters: climate is a bit like a tapestry, in which the overall picture is important but the seemingly small details are also crucial. Environmental scientists define "microclimate" as "a collection of climatic conditions (temperature, rainfall, humidity, solar radiation) in a local area, usually close to the ground, whose spatial scale is directly related to ecological processes." We will return to the second half of this definition later. Some researchers point out that, by definition, a microclimate must differ from the climate of its surroundings. The picture shows the Santa Monica Mountains in California, where the north- and south-facing slopes can be seen to be covered by different plants. Forests provide some excellent examples. In a tropical rainforest, the climate near the ground is completely different from the climate in the canopy 50 meters above, and it is precisely this vertical variation, among other factors, that lets the tropics achieve such astonishing biodiversity.
Similarly, during a partial solar eclipse in 2015, scientists observed that the temperature change in an Eastern European grassland was more pronounced than in nearby forests. This is because trees not only provide shade; their leaves also reflect solar radiation, and forests often slow the wind. A 2019 study of 98 forests on five continents found that temperatures inside forests average 4 degrees Celsius lower than in the surrounding area. Those who fear the cold need not worry: the same study found that in winter, forests average 1 degree Celsius warmer than their surroundings, which is quite comfortable. The life of bugs: how large can a microenvironment be before the word "micro" no longer fits? In other words, is there an upper limit to its scale? Scientists give different answers. On a horizontal scale, some define a "microenvironment" as any environment within an area less than 100 meters across. If you want to understand how temperature affects photosynthesis in a single leaf, you should measure temperature at the centimeter scale; if you want to understand how temperature shapes habitat selection by a large mammal, it is better to measure temperature differences over tens to hundreds of meters. A plant growing alone can produce a very small microenvironment: a single corn plant creates its own by casting shade and changing the properties of the nearby soil, while a whole cornfield generates a larger microenvironment extending across the entire field. Many creatures exploit these small microenvironments to survive, such as aphids and spider mites. Compared with the leaves they feed on, these creatures are tiny, and each leaf has its own microenvironment.
Observations show that aphids prefer cooler leaves, while some other invertebrates prefer warm ones. Since these organisms cannot generate heat themselves, the microenvironment of the leaf plays a vital role in their survival. Microenvironments on a large scale: the urban heat island effect is an excellent example. From a macro perspective, the Earth is going through a difficult period. Global temperatures keep rising, and nine of the ten hottest years on record have occurred since 2005. In addition, a recent estimate suggests that about 1 million species worldwide are on the brink of extinction because of human activity. Ecologists and environmental scientists are working hard to answer a key question: how will each species, and entire ecosystems, respond to rapid climate change and rapid habitat loss? The microenvironment is a key part of this research: if we do not measure and understand the environment at the right scale, predicting future changes becomes much harder. Developers have long been aware of the impact of small-scale climate on daily life, and the urban heat island effect is one such example. Water vapor released by plants regulates the local climate, but natural vegetation is often scarce in cities. Moreover, many pavements and buildings are very good at absorbing or re-emitting the sun's heat, and motor-vehicle emissions make the situation worse. Yet a big city is not just a uniform hot plate: temperature differences recorded within the same city can reach 8.3 to 11.1 degrees Celsius. This is where urban parks and trees come in handy, cooling their surroundings effectively, and several cities around the world have drawn up plans to expand urban green space.
Tree-planting projects and rooftop greening projects can lower a city's surface temperature, reduce air pollution and reduce surface runoff. When we look into the depths of the universe, what we see is not the celestial bodies as they are today, but as they were when the light now reaching Earth was emitted. The closest star to us is Proxima Centauri, about 4.24 light-years away; in other words, what we see now is light it emitted 4.24 years ago. For more distant stars, however, when we look back at them we must also take the expansion of the universe into account. Moreover, these stars formed long ago: Proxima Centauri, for example, was born 4.85 billion years ago, making it older than the sun.
How can we integrate the available data to determine the ages of all the stars in the universe? We know the universe is 13.8 billion years old and that the observable universe spans about 46.5 billion light-years. So what is the relationship between these two numbers? When we observe a star we can measure its distance, but how do we know its age? This is a very good question, and to answer it we need to combine two very different kinds of information. Here is how astronomers do it. When we observe stars in the very nearby universe, in the Milky Way or many nearby galaxies, we can measure the properties of individual stars. Moreover, one of those properties, a star's current distance from Earth, is essentially the same as the travel time of its light: a star 4.24 light-years away, like Proxima Centauri, sends light that reaches our eyes after a full 4.24 years of travel through space. However, these two quantities coincide only for stars in the relatively nearby universe. As the observation distance grows, we can no longer distinguish the properties of individual stars: the telescope's resolution degrades gradually even before its line of sight leaves the Local Group, which belongs to the Local Supercluster (also known as the Virgo Supercluster, containing the Milky Way and the Andromeda Galaxy). In addition, once we leave the Local Group, we must account for the expansion of space itself, which not only stretches the wavelength of light (causing redshift) but also makes an object's present distance (in light-years) differ from its light travel time (in years). The first thing to understand is that when we look at distant objects in the universe, we are actually looking back into the past.
What is certain is that if you observe stars a few light-years away, or even thousands or tens of thousands of light-years away, it takes roughly the same number of years for their light to reach your eyes. But if you observe galaxies tens of millions of light-years away, the expansion of the universe begins to matter enormously. The reason is this: once light leaves its source, it spreads in all directions. The light traveling along your line of sight eventually reaches your eyes (more precisely, the lens of the telescope), but before that it must cross all the space between you and the source. It is a bit like raisins in rising bread: as the dough expands, the raisins move farther apart. Points that start relatively close together separate only a little, while points that start far apart may end up much farther away by the time a signal (such as light) completes its journey. Because the universe is expanding, the longer a star's light takes to reach Earth, the larger the discrepancy between its travel time and the star's current distance from us in light-years. Scientists already know the composition of the universe (ordinary matter, dark matter and dark energy) and how fast it is expanding today, so we can perform the necessary calculations to determine how the universe has expanded throughout its history. This is a very powerful technique, because the result is tightly constrained: in any universe governed by general relativity, there is a definite relationship between the universe's composition and how its expansion rate changes over time.
Scientists can measure the distances and redshifts of various cosmic objects with unprecedented accuracy to pin down this relationship, and confirm it with subsequent measurements of the cosmic microwave background and large-scale structure. This technique also means that when we observe an object in the universe, we can calculate not only how far back in time we are looking but also how far the object is from us now. To give a few examples:
• When an object's light takes 100 million years to reach Earth, we are seeing an object that is now about 101 million light-years away;
• When an object's light takes 1 billion years to reach Earth, the object is now about 1.035 billion light-years away;
• If the light takes 3 billion years to reach Earth, the object is now about 3.346 billion light-years away;
• Light arriving after 7 billion years comes from an object now 9.28 billion light-years away;
• Light taking 10 billion years to reach Earth corresponds to an object now 15.8 billion light-years away;
• Light taking 12 billion years comes from an object now about 22.6 billion light-years away;
• Light from the most distant object detected so far, the galaxy GN-z11, reached the Hubble Space Telescope after 13.4 billion years; that galaxy is now about 32.1 billion light-years away.
When measuring a distant object, we usually measure its brightness and its spectral redshift directly, which is enough to determine both its current distance and its light travel time. When we measure light from an object 32.1 billion light-years away, we are seeing light from 13.4 billion years ago, emitted 407 million years after the Big Bang. However, this only tells us the age of the light, not the age of the stars in the galaxy.
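The travel-time-to-distance conversions listed above can be reproduced numerically. The sketch below assumes a flat ΛCDM cosmology with illustrative parameters (H0 = 67.7 km/s/Mpc, Ωm = 0.31, ΩΛ = 0.69, all assumptions on my part); it integrates the expansion history to find the redshift matching a given lookback time and then the present-day (comoving) distance:

```python
import math

# Flat LambdaCDM with illustrative, Planck-like parameters (assumed values)
H0 = 67.7 / 3.0857e19 * 3.156e16   # Hubble constant converted from km/s/Mpc to 1/Gyr
OMEGA_M, OMEGA_L = 0.31, 0.69

def E(z):
    """Dimensionless expansion rate H(z)/H0 for a flat universe."""
    return math.sqrt(OMEGA_M * (1.0 + z) ** 3 + OMEGA_L)

def lookback_time(z, n=4000):
    """t_L = (1/H0) * integral_0^z dz' / ((1+z') E(z')), in Gyr (midpoint rule)."""
    dz = z / n
    return sum(dz / ((1.0 + (i + 0.5) * dz) * E((i + 0.5) * dz)) for i in range(n)) / H0

def comoving_distance(z, n=4000):
    """D_C = (c/H0) * integral_0^z dz' / E(z'); with c = 1 Gly/Gyr the result is in Gly."""
    dz = z / n
    return sum(dz / E((i + 0.5) * dz) for i in range(n)) / H0

def z_for_lookback(t_gyr):
    """Bisect for the redshift whose lookback time equals t_gyr."""
    lo, hi = 0.0, 20.0
    for _ in range(50):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if lookback_time(mid) < t_gyr else (lo, mid)
    return 0.5 * (lo + hi)

z1 = z_for_lookback(1.0)       # light emitted 1 billion years ago
d_now = comoving_distance(z1)  # where that source is today
print(f"z = {z1:.4f}, distance now = {d_now:.3f} Gly")  # ~1.03 Gly
```

With these parameters the 1-billion-year case lands close to the ~1.035 billion light-years quoted above; the small excess over 1.0 is exactly the stretching of space during the light's journey.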
To know the age of the stars producing this distant light, it would be ideal to measure the precise properties of individual stars, as we can for stars in the Milky Way. With the highest-resolution telescopes we can pick out individual stars 50 or 60 million light-years away. Unfortunately, that distance is only about 0.1% of the way to the edge of the observable universe; beyond it we can no longer resolve individual stars. When we can measure single stars, we can construct what astronomers call a color-magnitude diagram: a plot of each star's intrinsic brightness against its color/temperature. This is very useful. When stars have just formed, the diagram roughly traces a winding diagonal line, with the brightest stars also the bluest and hottest, and the faintest stars redder and cooler. A young stellar population is a mixture of stars of different colors and brightnesses, but as the population ages, the hottest, bluest and brightest stars burn their fuel fastest and die first, eventually evolving into red giants and/or supergiants. This means the shape of the diagram evolves as the population ages. As long as we can resolve individual stars in open clusters, globular clusters, or even nearby galaxies beyond the Milky Way, we can accurately determine the age of a stellar population, meaning a collection of stars in a galaxy with similar ages, chemical compositions, spatial distributions and motions. Combining these data with the age of the received light finally gives the age of the stars. But what do we do when we can no longer resolve individual stars in a galaxy? Is there a way to estimate the age of its stars from the observed light alone?
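The turnoff logic above can be made concrete with a standard back-of-the-envelope scaling (a rough textbook approximation, not a relation from this article): a star's main-sequence lifetime falls steeply with mass, so the bluest, most massive stars leave the diagram first, and the location of the "turnoff" dates the population.

```python
def main_sequence_lifetime_gyr(mass_in_suns):
    """Rough textbook scaling: t ~ 10 Gyr * (M/Msun)^(-2.5)."""
    return 10.0 * mass_in_suns ** (-2.5)

# More massive (bluer, hotter) stars die far sooner, shifting the
# cluster's turnoff point downward as the population ages.
for m in (0.5, 1.0, 2.0, 10.0):
    print(f"{m} Msun -> {main_sequence_lifetime_gyr(m):.2f} Gyr")
```

The exponent varies with mass range in detailed models; the point here is only the steepness of the trend.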
In fact, we can use a proxy to obtain information that is otherwise unobtainable, though we must sacrifice some accuracy when translating it into the age of the stars inside the galaxy. When observing a distant object, such as a galaxy that cannot be resolved (or can barely be resolved), we can still measure the total starlight from all its stars. We can decompose this light into different wavelengths and determine how much is ultraviolet, blue, green, yellow, infrared, and so on. In other words, as long as we accurately measure a distant galaxy's color, we can estimate when its most recent episode of star formation occurred, and thus the age of its stars. Because these are estimates, however, uncertainty creeps in. A galaxy that has formed stars repeatedly over hundreds of millions of years and a galaxy that underwent a single major merger and formed all its stars at once can present completely different pictures. For galaxies that are extremely blue, the error may be as small as tens of millions of years; for galaxies lacking young blue stars, it may be as large as 1 to 2 billion years. Scientists can also apply other methods, such as measuring surface-brightness fluctuations (which depend on variable stars, stars whose brightness as seen from Earth fluctuates, and which in turn depend on the age of the stars inside the galaxy), but most methods fail beyond a certain distance. If, however, we can obtain spectroscopic measurements rather than just brightness through various color channels (that is, photometry), we can do a little better: by measuring the strengths of various atomic and molecular transitions in absorption and emission lines, we can place a stellar population according to the time elapsed since its most recent burst of star formation.
To summarize, if you want to know the age of the stars you are observing, you need to know two things: 1. how old the light you see is, which means knowing how far the object is from Earth in the expanding universe; 2. the age of the stars themselves at the moment you collect their light. When you can resolve single stars this is a fairly simple problem, but scientists can only resolve single stars out to 50 or 60 million light-years. In contrast, the observable universe extends about 46 billion light-years in every direction, which means this method is unavailable for the vast majority of stars in the universe. We can only use indirect methods, such as age estimates based on a galaxy's color, at the cost of additional uncertainty. With a deeper understanding of stars and stellar evolution, and with the advanced instruments and telescopes expected in the near future, scientists hope to learn much more precisely about the most distant and oldest objects. The source of Earth's water has long been a hot topic in Earth and planetary science, and hydrogen isotopic composition is the most important basis for tracing that source. Existing results show huge differences in hydrogen isotope composition among bodies in the solar system: the sun, Jupiter and Saturn have similar compositions (δD approximately -865‰), matching that of interstellar gas, which is taken as the initial value of the solar nebula; compared with the sun, the terrestrial planets, chondrites and comets have significantly higher and widely varying hydrogen isotope compositions, for example Earth's ocean water (δD = 0‰), carbonaceous and ordinary chondrites (δD = -220 to 1600‰, except R type), and comets (δD ≈ 300‰, except 103P/Hartley 2).
Based on hydrogen isotope composition, many studies have proposed carbonaceous chondrites and comets as the main sources of Earth's water (Morbidelli et al., 2000; Hartogh et al., 2011; Marty, 2012), but this cannot explain the clear hydrogen-isotope difference between Earth's water and the water in carbonaceous chondrites and comets. In recent years, more and more high-precision isotope analyses have shown that enstatite chondrites (EC) are almost identical to Earth in O, Cr, Ti and Ca isotopic composition and may be the main building material of the Earth. Enstatite chondrites (Figure 1) formed in a highly reducing environment in which even Na and K can occur as sulfides, so they are generally believed to have formed near the sun (Figure 2). From the perspective of nebular evolution, hydrogen near the sun should not be able to bind to minerals as hydroxyl groups or water molecules, yet hydrous minerals such as djerfisherite have been reported in enstatite chondrites (Fuchs, 1966). Recently, Dr. Laurette Piani of the University of Lorraine, France, measured the water content and hydrogen isotopes of 13 enstatite chondrites (types 3-6) with different degrees of thermal metamorphism, along with an aubrite, an enstatite achondrite representing the heated product of enstatite-chondrite material. The results show whole-rock water contents of 0.08-0.54 wt% for the enstatite chondrites and 0.3 ± 0.2 wt% for the aubrite, significantly lower than in water-rich carbonaceous chondrites (7.2-9.1 wt%). The average hydrogen isotope values of the EH3 and EH4 samples (δD = -103 ± 3‰) are lower than those of present-day ocean water, and the compositions of EH5, EH6 and the aubrite are lower still (δD = -127 ± 15‰) (Figure 3).
At the same time, in-situ water content and hydrogen isotope analyses of the glass in Sahara 97096 chondrules were carried out with an ion probe. The glass contains 2700-12300 ppm water with a uniform hydrogen isotope ratio (δD = -147 ± 16‰). Since Sahara 97096 shows no evidence of aqueous alteration, the chondrule interiors can be considered undisturbed by later events such as aqueous alteration. Mass-balance statistics show that matrix water accounts for about 13% of the whole-rock water and organic water for only 7.7%. Where does the remaining roughly 80% come from? Could it be the main constituent mineral, enstatite (a low-calcium pyroxene)? Previous studies have shown that pyroxene on the S-type small body Itokawa can contain 700-1000 ppm water, and pyroxene in the ordinary chondrite Larkman Nunatak 12036 can contain 600-1300 ppm (Jin and Bose, 2019). The enstatite in the aubrite contains up to 5300 ppm water; combined with the modal abundance of enstatite in ECs (about 50 vol%), it is estimated that enstatite hosts 15% of the whole-rock water (based on the ordinary-chondrite values) or 58% (based on the aubrite value). The study shows that Earth's water could be supplied entirely by enstatite chondrites. Because the hydrogen isotope composition of some water-rich CM-type carbonaceous chondrites also falls within the mantle range, additional isotopic indicators are needed to confirm that enstatite chondrites are the source of Earth's water. The combined hydrogen-nitrogen isotope composition is a very good such indicator, and the analyses show that only enstatite chondrites fall within the range of mantle rocks for both (Figure 4).
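The water budget above is a simple mass balance. The sketch below just closes that budget using the fractions quoted in the text (an illustration of the arithmetic, not the paper's actual calculation):

```python
# Fractions of the whole-rock water budget quoted in the text
matrix_frac  = 0.130   # water hosted in the matrix
organic_frac = 0.077   # water hosted in organic matter

remainder = 1.0 - matrix_frac - organic_frac
print(f"{remainder:.1%}")  # 79.3% -- the "remaining ~80%" attributed to enstatite
```

The 15% vs 58% enstatite share then depends on which pyroxene water content (ordinary-chondrite values or the aubrite value) is assumed for that remainder.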
These meteorites thus not only provided water but are also the main material that built the Earth, consistent with the high-precision isotope results. Applying their analytical data to theoretical models of Earth's formation, Piani et al. found that material similar to enstatite chondrites could contribute 3.4-23.1 times the mass of Earth's ocean water, with the chondrule glass and organic matter alone contributing 3-4 times the ocean's mass, consistent with estimates of the mantle's water content. In 1846, the French astronomer Urbain Le Verrier used mathematical calculation to determine the exact location of an unseen planet. Soon afterwards, Neptune, the last major planet of the solar system to be discovered, was observed under Le Verrier's guidance.
Humanity would not find another new planet until 1992, and this time our gaze rushed beyond the solar system: in Virgo, 2,800 light-years away, radio astronomers Aleksander Wolszczan and Dale Frail discovered two planets orbiting the pulsar PSR 1257+12, the first planets ever found outside the solar system. From Neptune to the first extrasolar planet (exoplanet for short) took nearly a century and a half; from the first exoplanet to the first extragalactic planet may take less than 30 years. A recent paper posted on the preprint platform arXiv reports that, using data from the Chandra X-ray telescope, researchers have found a planet slightly smaller than Saturn in a spiral galaxy 28 million light-years away. Searching in the Milky Way: since the existence of exoplanets was first confirmed in 1992, astronomers' search for them has never slowed. So far 4,348 exoplanets have been confirmed in 3,213 star systems, with more than 5,000 candidates awaiting confirmation. The transit method and the radial-velocity method are the main search techniques. The transit method rests on a simple scenario: when a planet passes across the face of its star, the star naturally dims in the eyes of observers on Earth, because the planet blocks part of the star's light. The principle sounds simple, but actual observation is not easy: because stars are usually far larger than their planets, the drop in brightness is tiny. When an Earth-like planet crosses a sun-like star, for example, the star's brightness falls by only about one part in ten thousand, so only a sufficiently precise detector can capture such faint changes. It was the Kepler Space Telescope, launched in 2009, that made this kind of observation possible.
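The "one part in ten thousand" figure follows directly from the ratio of disk areas. A minimal sketch, using standard Earth and Sun radii (assumed reference values, not numbers from the paper):

```python
R_EARTH = 6.371e6  # m, mean Earth radius
R_SUN   = 6.957e8  # m, nominal solar radius

def transit_depth(r_planet, r_star):
    """Fractional dimming when the planet fully crosses the stellar disk."""
    return (r_planet / r_star) ** 2

print(f"{transit_depth(R_EARTH, R_SUN):.1e}")  # ~8.4e-05, about one part in 10^4
```

A Jupiter-sized planet, roughly ten times Earth's radius, produces a dip a hundred times deeper, which is why hot Jupiters were the easiest transits to find.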
Over nearly 10 years of observation, the Kepler telescope discovered 2,662 exoplanets. After its retirement, the Transiting Exoplanet Survey Satellite (TESS) took over, continuing the search over a wider field of view and at greater distances. The other important technique is the radial-velocity method. If planets orbit a star, their gravity causes the star's velocity toward or away from us to vary, and by the Doppler effect we can detect these variations in the star's spectrum. Although the signal is also weak, astronomers have had real success with this method: in 1995, Michel Mayor and Didier Queloz used it to detect the first exoplanet orbiting a sun-like star, a discovery that earned them the Nobel Prize in Physics last year. However, the nearly 10,000 planets (and candidates) found so far all lie within the Milky Way, and the methods that have served us so well inside the galaxy become powerless outside it. The dilemma is easy to understand: whether for the transit method or the radial-velocity method, the changes involved are inherently tiny, and they are far harder to observe in galaxies much farther away. The X-ray transit method: in this research, led by astronomer R. Di Stefano of the Harvard-Smithsonian Center for Astrophysics, the team considered another type of signal. The principle resembles that of the Kepler telescope, a transit, but the signal source is changed from visible light to a bright X-ray source. Outside the Milky Way, bright X-ray sources mainly come from X-ray binary systems, each consisting of an ordinary star and the remnant of a massive star (such as a black hole or a neutron star).
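The size of the wobble the radial-velocity method must detect can be estimated with the standard semi-amplitude formula; the sketch below (a generic illustration, not the discoverers' pipeline) computes the signal Jupiter induces on the Sun:

```python
import math

G     = 6.674e-11   # gravitational constant, SI
M_SUN = 1.989e30    # kg
M_JUP = 1.898e27    # kg

def rv_semi_amplitude(m_planet, m_star, period_s, incl_deg=90.0, ecc=0.0):
    """Standard K = (2*pi*G/P)^(1/3) * m_p*sin(i) / (m_star+m_p)^(2/3) / sqrt(1-e^2), in m/s."""
    sin_i = math.sin(math.radians(incl_deg))
    return ((2.0 * math.pi * G / period_s) ** (1.0 / 3.0)
            * m_planet * sin_i
            / (m_star + m_planet) ** (2.0 / 3.0)
            / math.sqrt(1.0 - ecc ** 2))

P_JUP = 11.86 * 365.25 * 86400.0   # Jupiter's orbital period in seconds
k = rv_semi_amplitude(M_JUP, M_SUN, P_JUP)
print(f"{k:.1f} m/s")   # ~12.5 m/s
```

A shift of about 12 m/s on a star hundreds of light-years away illustrates why sufficiently stable spectrographs only became available in the 1990s.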
The remnant's enormous gravity accretes material from its companion, and in this process the accretion disk releases X-rays. One reason such X-ray signals can be used to search for planets beyond the Milky Way is their huge energy (the brightness of this source is roughly a million times the sun's output summed over all bands); more importantly, in an X-ray transit the change in brightness is very pronounced. In an ordinary transit the whole star radiates, so a passing planet blocks only a small fraction of the light. By contrast, the X-ray emission is concentrated in a small accretion disk; a passing planet can block it entirely, giving the X-ray source the equivalent of a "total eclipse." With this strategy, the research team searched data from the most advanced X-ray telescope of our time, the Chandra X-ray Observatory, and found a long-sought signal in the Whirlpool Galaxy, more than 28 million light-years away. The signal belongs to a binary system named M51-ULS-1: over about 3 hours, its X-ray brightness traced the U-shaped curve shown in the figure below, the characteristic signature of a transit, and the X-ray signal disappeared completely within 20-30 minutes. Of course, the researchers were aware that factors other than a passing planet could cause similar brightness changes: the accretion process itself might be disturbed, changes in the binary might switch the X-ray source off for a period, or the occulting body might be not a planet but a smaller star. However, based on the shape of the light curve and other astrophysical constraints, the researchers ruled out these alternatives one by one.
The most likely explanation thus surfaced: a planet, named M51-ULS-1b, orbits this binary system at a radius of billions of kilometers. According to the calculations, M51-ULS-1b is slightly smaller than Saturn. It is worth mentioning that this discovery came nearly 8 years late: the signal was captured by Chandra as early as 2012, but it lay buried in a mass of data and attracted no special attention until Di Stefano and colleagues began studying planets outside the Milky Way. Still, it is too early to say we have found an extragalactic planet: the paper has only just been uploaded to the preprint site and has not yet been formally published after peer review. If it is finally confirmed, it will be an important advance in our understanding of the universe, greatly expanding the scope of our planet searches, and it also offers a new approach to finding terrestrial planets within the Milky Way. According to foreign media reports, neutron stars are probably among the strangest objects in the universe. Born when huge stars die, they combine extremely strong gravity with temperatures and densities far beyond anything we can create in the laboratory.
Although we have known of neutron stars for more than half a century, astrophysicists still do not know how big they are. Two mysteries remain unsolved: what is at a neutron star's center, and how large can one grow? We know neutron stars are relatively small: researchers estimate that a neutron star of about 1.4 solar masses has a radius between 8 and 16 kilometers, whereas the sun's radius is about 696,000 kilometers. Through our telescopes even ordinary stars are mere points of light, so directly measuring a neutron star's size is impossible. Astrophysicists, however, are very good at indirect measurement. In current research they combine a variety of electromagnetic (light-based) observations with laboratory analysis and theoretical models. Although the resulting range is wide (as if we could only say a person's height lies between 1.2 and 2.4 meters), all the calculations and theoretical models of neutron-star structure fall within it. Can astrophysicists do better? The answer may be yes, because more research tools are now available: the gravitational-wave observatories LIGO and Virgo, and the Neutron Star Interior Composition Explorer (NICER), an X-ray instrument on the International Space Station dedicated to studying neutron-star structure. "We have combined gravitational-wave and electromagnetic observations, using a variety of different techniques," said Anna Watts, a neutron-star astrophysicist at the University of Amsterdam and a participant in the NICER project. "This is a very interesting area."
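To see how extreme the quoted numbers are, one can compute the mean density of a 1.4-solar-mass star squeezed into an 11 km radius (a mid-range value from the estimates above); the nuclear saturation density used for comparison is an approximate standard value, an assumption of this sketch:

```python
import math

M_SUN       = 1.989e30     # kg
M_NS        = 1.4 * M_SUN  # the reference neutron-star mass used in the text
R_NS        = 11.0e3       # m, a mid-range radius estimate
RHO_NUCLEAR = 2.7e17       # kg/m^3, approximate nuclear saturation density

volume  = 4.0 / 3.0 * math.pi * R_NS ** 3
density = M_NS / volume
print(f"mean density: {density:.2e} kg/m^3")                  # ~5e17
print(f"vs nuclear saturation: {density / RHO_NUCLEAR:.1f}x") # nearly 2x
```

Matter at nearly twice nuclear density is precisely what no laboratory can reproduce, which is why the interior composition remains an open question.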
In a study published earlier this year, researchers combined gravitational-wave observations, electromagnetic observations and nuclear physics for the colliding binary neutron-star system GW170817 (first observed in 2017). The study found that a neutron star of 1.4 solar masses has a radius between 10.4 and 11.9 kilometers, a great improvement on earlier estimates. The electromagnetic radiation from GW170817 came from a "kilonova," the high-energy light produced by nuclear reactions when neutron stars merge. Astronomers used telescopes to analyze the kilonova across the electromagnetic spectrum, from gamma rays to radio waves, each observation providing information about a different aspect of GW170817. "When two neutron stars merge, they eject a great deal of matter before coalescing. This is related to what kind of object forms after the collision," pointed out Stephanie M. Brown, an astrophysicist at the Max Planck Institute for Gravitational Physics. Based on the light from the ejected material, the characteristics of the gravitational waves and nuclear-physics calculations, the radius Brown and her co-researchers obtained is consistent with other independent results. Because neutron stars are so complicated, we need a great deal of data. According to the current understanding, when a large star goes supernova, its core collapses under gravity and the matter inside is sharply compressed until the nuclei merge into a mixture of nuclear particles, mainly neutrons but possibly also protons and even quarks. "Neutron stars may have many different compositions and different inter-particle forces. You can put forward a variety of interesting theories about these,"
Watts pointed out. "You can apply many different observational methods to different neutron stars, and use many different techniques to cross-validate these theories." The density and pressure inside a neutron star increase with depth, and the interior can be divided into two or more regions, somewhat like Earth's mantle and molten core. The mathematical description of this interior is called the equation of state; it links mass and radius together and determines the maximum mass a neutron star can have. Astrophysicists have not yet pinned down the full equation of state, but they are not working blind. A neutron star's size is set entirely by gravity and the nuclear force, whereas an ordinary star like the Sun changes size over its lifetime. Under normal circumstances a neutron star is almost perfectly spherical; otherwise it would emit detectable gravitational waves as it rotates. But in a collision like GW170817, the intense gravity between the two neutron stars deforms them. This phenomenon, called tidal deformation, is also governed by the equation of state. Although the extreme density and pressure inside a neutron star cannot be reproduced in the laboratory, astrophysicists can infer the interactions between the relevant nuclear particles from lower-density nuclear experiments. Combined with a powerful theoretical tool, chiral effective field theory, these experimental results set boundary conditions on the equation of state. "You first observe the gravitational waves from the binary neutron-star system, then use Bayesian parameter estimation to obtain the neutron star's radius, mass, spin, and tidal deformation," Brown explained. This method yields the most precise estimate so far of a neutron star's radius for a given mass.
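The way an equation of state "links mass and radius together" is through hydrostatic equilibrium: given a pressure-density relation, integrating the structure equations outward from the center yields one radius and one mass per central density. As a hedged illustration, here is a Newtonian toy (real neutron star models require the general-relativistic Tolman-Oppenheimer-Volkoff equations) using an assumed polytropic equation of state P = K * rho^2, with K chosen only so the answer lands near the 10 km scale discussed above:

```python
import math

G = 6.674e-11   # gravitational constant (SI)

# Toy polytropic equation of state P = K * rho**2 (a Newtonian n = 1 polytrope).
# K is an assumed value chosen so the analytic radius pi*sqrt(K/(2*pi*G))
# is about 10 km, the neutron-star scale; it is purely illustrative.
K = 4.25e-3

def mass_radius(rho_c, dr=1.0):
    """Euler-integrate Newtonian hydrostatic equilibrium from the center:
        dP/dr = -G m(r) rho / r^2,   dm/dr = 4 pi r^2 rho
    Returns (radius in m, mass in kg) at the surface, where pressure hits zero.
    """
    r = dr
    rho = rho_c
    P = K * rho**2
    m = (4.0 / 3.0) * math.pi * r**3 * rho_c   # mass of the seed central cell
    while P > 0:
        m += 4.0 * math.pi * r**2 * rho * dr
        P -= G * m * rho / r**2 * dr
        r += dr
        if P <= 0:
            break                  # reached the surface
        rho = math.sqrt(P / K)     # invert the polytropic equation of state
    return r, m

# Central density at the nuclear-saturation scale (an assumed round number)
R, M = mass_radius(rho_c=5e17)
print(f"radius ~ {R / 1e3:.1f} km, mass ~ {M / 1.989e30:.2f} solar masses")
```

A quirk of this particular toy: for the n = 1 polytrope the radius is independent of central density, so every choice of rho_c returns roughly the same radius. Realistic, relativistic equations of state do not share that property, which is exactly why measuring both mass and radius constrains them.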
In science, it is never enough to draw conclusions from a single system. But so far, nature has not provided a second neutron-star collision that both generated gravitational waves and produced a kilonova signal. Fortunately, NICER does not need a collision, or even a binary neutron-star system. It measures X-ray fluctuations and spectral features from neutron stars, including rapidly rotating pulsars, which emit narrow beams of radiation that appear through a telescope as regular flashes of light. These flashes may be produced as matter falls onto the neutron star's surface, and they can carry information about the star's radius. Flashes can also come from binary systems that are widely separated and in no danger of colliding, such as the Hulse-Taylor binary pulsar, which gave the world its first indirect evidence of gravitational waves. NICER's radius measurements are not fully consistent with the conclusions of Brown's team for GW170817. Given the uncertainties in NICER's data this is not a serious problem, but both Brown and Watts believe the source of the discrepancy deserves further study. "If NICER's results agreed with ours, that would be great," Brown noted. She sees the disagreement as similar to the dispute over the expansion rate of the universe that currently divides cosmologists. Watts, for her part, suspects the differences may be related to the kilonova observations: not that those observations are wrong, but that there may be unknown systematics, differing understandings of model biases that can affect how the raw data are analyzed and, in turn, the measurements extracted from a complex system.
"You have to be very careful, because what you end up inferring may not be what you put in at the start," Watts said. "Ultimately, if you want to combine the various measurements, you need to fully understand the nature of the equation of state." The NICER mission has only just begun, and both Watts and Brown will be watching for new results. Interestingly, in June 2020 astronomers announced a gravitational-wave system that may complicate the problem, or may help settle it. The system, called GW190814, consists of a black hole and an unidentified object of 2.6 solar masses. An object that light is unlikely to be a black hole, yet kilonova studies suggest neutron stars cannot grow that massive. Watts pointed out, however, that current NICER results leave open the possibility of a 2.6-solar-mass neutron star, which would neatly resolve the puzzle of GW190814. Whatever the truth turns out to be, astrophysicists have made tremendous progress in measuring these extremely small objects, thanks to the multi-messenger, cross-disciplinary methods they employ. With more observations from NICER and from gravitational waves, the mystery of the size and composition of neutron stars may finally be solved.