How Much Is a Nanometer?

Stepan Lisovsky, a PhD student at MIPT and an employee of the Department of Nanometrology and Nanomaterials, talks about the basic principles of nanometrology, explains what the various microscopes do, and shows why a particle's size depends on how it is measured.

Thinking in standards

For a start - about ordinary metrology. As a discipline it could have arisen in antiquity - many, from Pythagoras to Aristotle, argued about measure - but it did not. Metrology failed to become part of the scientific picture of the world of that time because of that same Aristotle: for many centuries to come, he established the priority of qualitative description of phenomena over quantitative. Everything changed only in Newton's time. Explaining the meaning of phenomena "according to Aristotle" ceased to satisfy scientists, and the emphasis shifted from the semantic part of the description to the syntactic one. Simply put, it was decided to look at the measure and degree of interactions between things rather than try to comprehend their very essence - and that turned out to be far more fruitful. Then came metrology's finest hour.

The most important task of metrology is to ensure the uniformity of measurements. The main goal is to decouple the measurement result from all particulars: the time and place of measurement, who is measuring and how they happen to do it that day. In the end there should remain only what always and everywhere, regardless of anything, belongs to the thing itself: its objective measure, grounded in the reality common to all. How do we get to the thing? Through its interaction with a measuring device. For this there must be a unified measurement method, as well as a standard that is the same for everyone.

So, we have learned to measure - all that remains is for everyone else in the world to measure the same way we do. That requires them all to use the same method and the same standards. People quickly realized the practical benefit of a single system of measures and agreed to negotiate one. Thus appeared the metric system, which gradually spread to almost the whole world. In Russia, incidentally, the credit for introducing metrological support belongs to Dmitry Mendeleev.

A measurement result, besides the actual value of the quantity, also carries a dimension, expressed in units of measurement. Thus a measured meter will never become a newton, and an ohm will never become a tesla. Different quantities, that is, imply measurements of a different nature - though this is not always the case: a meter of wire turns out to be a meter in terms of its spatial extent, in terms of its conductivity, and in terms of the mass of the substance in it. One and the same quantity takes part in different phenomena, and this greatly eases the metrologist's work. Even energy and mass turned out to be equivalent to a certain extent, which is why the mass of very heavy particles is measured in terms of the energy required to create them.
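
To make the last point concrete, here is a quick, hedged calculation (the constants are standard reference values, not figures from the article): converting the electron's rest mass into energy units via E = mc² gives the familiar 0.511 MeV in which particle masses are usually quoted.

```python
# Hedged arithmetic for the mass-energy remark above: an electron's rest
# mass converted to energy units via E = m*c**2. Constants are standard
# reference values, not figures from the article.
m_electron = 9.109e-31   # electron rest mass, kg
c = 2.998e8              # speed of light, m/s
eV = 1.602e-19           # joules per electronvolt

E_joules = m_electron * c**2
E_MeV = E_joules / eV / 1e6
print(f"electron rest energy ≈ {E_MeV:.3f} MeV")  # ≈ 0.511 MeV
```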

In addition to the value of the quantity and its unit of measurement, there are several more important factors you need to know about every measurement. All of them are contained in the specific measurement procedure chosen for the case at hand. It specifies everything: the reference samples, the accuracy class of the instruments, even the qualifications of the researchers. Knowing how to provide all this, following the procedure, we can carry out correct measurements. Ultimately, applying the procedure gives us guaranteed bounds on the measurement error, and the entire measurement result reduces to two numbers: the value and its error, which is what scientists usually work with.
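
A minimal sketch of what "two numbers" means in practice, with made-up readings (the data and the use of the standard error as the error estimate are illustrative assumptions; a real procedure would also account for instrumental uncertainty):

```python
import statistics

# Illustrative sketch: repeated measurements of the same length, in nm.
readings = [98.2, 101.5, 99.8, 100.9, 99.1, 100.4]  # hypothetical data

value = statistics.mean(readings)
# Standard error of the mean as a simple estimate of the random error.
error = statistics.stdev(readings) / len(readings) ** 0.5

# The whole measurement result reduces to two numbers: value and error.
print(f"length = {value:.1f} ± {error:.1f} nm")
```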

Measuring the invisible

Nanometrology works by almost the same laws, but there are a couple of nuances that cannot be ignored. To grasp them, you need to understand the processes of the nanoworld and what exactly sets them apart; in other words, what is so special about nanotechnology.

We must start, of course, with dimensions: one nanometer is to a meter roughly what one person is to the population of China. This scale (below 100 nm) opens the way to a whole series of new effects: quantum effects, including tunneling; interaction with molecular systems; biological activity and compatibility; and a highly developed surface whose near-surface layer is comparable in volume to the nanoobject itself. These properties are a treasure trove of opportunities for the nanotechnologist and, at the same time, a curse for the nanometrologist. Why?
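
The comparison is easy to check with back-of-the-envelope arithmetic (the population figure is an assumed round number):

```python
# A nanometer is to a meter roughly what one person is to China's population.
nanometer_per_meter = 1e-9 / 1.0
one_person_in_china = 1 / 1.4e9  # assumed population of ~1.4 billion

print(f"{nanometer_per_meter:.1e} vs {one_person_in_china:.1e}")
# Both ratios are on the order of 1e-9, so the analogy holds.
```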

The point is that, because of these special effects, nanoobjects demand completely new approaches. They cannot be seen optically in the classical sense because of a fundamental limit on achievable resolution, which is strictly tied to the wavelength of visible radiation (one can resort to interference and the like, but that is already exotic). There are several basic solutions to this problem.
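
That limit is easy to quantify with the classical Abbe criterion d = λ/(2·NA); the wavelength and numerical aperture below are assumed, typical values:

```python
# Abbe diffraction limit: d = wavelength / (2 * NA). Typical assumed values.
wavelength_nm = 550.0        # green light, middle of the visible range
numerical_aperture = 1.4     # a good oil-immersion objective

d_min_nm = wavelength_nm / (2 * numerical_aperture)
print(f"best optical resolution ≈ {d_min_nm:.0f} nm")
# ≈ 196 nm: far coarser than sub-100 nm objects, hence the need for new methods.
```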

It all started with the field-emission projector (1936), later modified into the field-ion projector (1951). The principle of its operation is based on the rectilinear motion of electrons and ions under an electrostatic force directed from a nanoscale cathode to an anode-screen of the macroscopic size we need. The picture we observe on the screen is formed at or near the cathode by certain physical and chemical processes: above all, the field emission of electrons from the atomic structure of the cathode and the polarization of the atoms of the "imaging" gas near the cathode tip. Once formed, the picture - a particular distribution of ions or electrons - is projected onto the screen, where it is rendered visible by fluorescence. In this elegant way one can examine the nanostructure of tips made of certain metals and semiconductors, but the elegance of the solution comes with overly tight restrictions on what can be seen, so these projectors never became very popular.
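
A rough, hedged estimate shows why such a projector magnifies at all: particles fly nearly radially from the tip, so the magnification is approximately the tip-screen distance divided by the tip radius (with an empirical image-compression factor β ≈ 1.5); all numbers below are assumed:

```python
# Field projector magnification, approximately M = L / (beta * r).
L_screen = 0.10   # tip-to-screen distance, m (assumed)
r_tip = 50e-9     # tip apex radius, m (assumed)
beta = 1.5        # typical image-compression factor

M = L_screen / (beta * r_tip)
print(f"magnification ≈ {M:.1e}")  # ~1e6, enough to render atomic-scale detail
```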

Another solution was to literally feel the surface, first realized in 1981 in the scanning tunneling microscope, the first of the scanning probe microscopes; the invention was awarded the Nobel Prize in 1986. As you might guess from the name, the surface under study is scanned with a probe - a sharpened needle.

Scanning Probe Microscope

© Max Planck Institute for Solid State Research

Between the tip and the surface structure an interaction arises that can be registered with high accuracy: by the force acting on the probe, by the resulting deflection of the probe, or by the change in the frequency (phase, amplitude) of the probe's oscillations. The basic interaction, which makes it possible to investigate almost any object - that is, which gives the method its universality - rests on the repulsive force arising at contact and on long-range van der Waals forces. Other forces can be used as well, and even the tunneling current that arises, mapping the surface not only by the spatial location of nanoobjects on it but also by their other properties. It is important that the probe itself be nanoscale, otherwise it is not the probe that scans the surface but the surface that scans the probe (by virtue of Newton's third law, the interaction is determined by both objects and is, in a sense, symmetric). On the whole, the method proved both universal and extremely capable, so it became one of the main tools for studying nanostructures. Its main drawback is that it is extremely time-consuming, especially compared with electron microscopes.
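
One of the detection channels mentioned above, the frequency shift, can be sketched numerically. In frequency-modulation AFM, for small oscillation amplitudes, the resonance shift is approximately Δf ≈ -f₀/(2k)·∂F/∂z; the Lennard-Jones force model and every number below are illustrative assumptions, not parameters from the article:

```python
# Frequency-shift detection in a scanning probe microscope (FM-AFM sketch).
f0 = 150e3   # free resonance frequency of the cantilever, Hz (assumed)
k = 40.0     # cantilever stiffness, N/m (assumed)

def tip_sample_force(z, eps=1e-21, sigma=0.3e-9):
    """Toy Lennard-Jones force: short-range repulsion plus vdW attraction."""
    return 24 * eps / sigma * (2 * (sigma / z) ** 13 - (sigma / z) ** 7)

z = 0.45e-9   # tip-sample distance, m
dz = 1e-12    # step for the numerical force gradient
dF_dz = (tip_sample_force(z + dz) - tip_sample_force(z - dz)) / (2 * dz)

# Small-amplitude approximation: df ≈ -f0/(2k) * dF/dz
df = -f0 / (2 * k) * dF_dz
print(f"frequency shift ≈ {df:.0f} Hz")  # ≈ -90 Hz for these toy numbers
```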

Electron microscopes, by the way, are also probe microscopes, only a focused electron beam serves as the probe. The system of lenses makes them conceptually similar to optical microscopes, though not without major differences. First and foremost: an electron has a shorter wavelength than a photon, owing to its mass. The wavelengths here, of course, belong not to the particles themselves - the electron and the photon - but characterize the behavior of the waves corresponding to them. Another important difference: bodies interact with photons and with electrons quite differently, though not without common features. In some cases the information obtained from interaction with electrons is even richer than that obtained from light; the opposite situation is not uncommon either.
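
The wavelength claim is easy to verify with de Broglie's relation; the 100 kV accelerating voltage is an assumed, typical value, not a figure from the article:

```python
import math

# Electron de Broglie wavelength with the standard relativistic correction:
# lambda = h / sqrt(2*m*e*V * (1 + e*V / (2*m*c**2)))
h = 6.626e-34   # Planck constant, J*s
m = 9.109e-31   # electron rest mass, kg
e = 1.602e-19   # elementary charge, C
c = 2.998e8     # speed of light, m/s
V = 100e3       # accelerating voltage, V (assumed)

lam = h / math.sqrt(2 * m * e * V * (1 + e * V / (2 * m * c**2)))
print(f"electron wavelength ≈ {lam * 1e12:.1f} pm")  # ≈ 3.7 pm
# Green light is ~550,000 pm, i.e. about 150,000 times longer.
```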

And the last thing worth attention is the difference in optical systems: while for light the lenses are traditionally material bodies, for electron beams they are electromagnetic fields, which gives far more freedom in manipulating electrons. This is the "secret" of scanning electron microscopes: the image they produce, although it looks as if it were obtained in an ordinary light microscope, is made to look that way only for the operator's convenience. It is actually computed from the characteristics of the interaction of the electron beam with each individual spot (pixel) of the sample, scanned in sequence. The interaction of electrons with a body makes it possible to map a surface in terms of relief, chemical composition, and even luminescence properties. Electron beams are also able to pass through thin samples, which allows one to see the internal structure of such objects - down to atomic layers.
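
The raster principle itself is simple enough to sketch; `detector_signal` below is a hypothetical stand-in for the real beam-sample interaction, and the image size is arbitrary:

```python
import numpy as np

def detector_signal(x, y):
    """Toy detector response: a bright circular 'particle' on a dim background."""
    return 1.0 if (x - 64) ** 2 + (y - 64) ** 2 < 400 else 0.2

width, height = 128, 128
image = np.zeros((height, width))

for y in range(height):          # slow scan axis
    for x in range(width):       # fast scan axis
        # One pixel of the final picture per beam position on the sample.
        image[y, x] = detector_signal(x, y)

print(image.shape, image.max())  # the operator sees `image` as an ordinary picture
```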

These are the main methods for resolving and investigating the geometry of objects at the nanoscale. There are others, but they work with whole ensembles of nanoobjects, deriving their parameters statistically. Among them are X-ray powder diffractometry, which reveals not only the phase composition of a powder but also something about the size distribution of its crystallites; ellipsometry, which characterizes the thickness of thin films (indispensable in the creation of electronics, where the architecture of systems is built up mainly in layers); and gas-sorption methods for analyzing specific surface area. The names of some methods are real tongue-twisters: dynamic light scattering, electroacoustic spectroscopy, nuclear magnetic resonance relaxometry (which, however, is simply called NMR relaxometry).
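
For the powder diffraction case, the standard statistical size estimate is the Scherrer equation, d = Kλ/(β·cos θ); the peak position, peak width, and Cu Kα wavelength below are assumed, typical values:

```python
import math

K = 0.9                  # Scherrer shape factor, dimensionless
wavelength_nm = 0.15406  # Cu K-alpha X-ray wavelength, nm
two_theta_deg = 30.0     # diffraction peak position, degrees (assumed)
fwhm_deg = 0.5           # peak width at half maximum, degrees (assumed)

theta = math.radians(two_theta_deg / 2)
beta = math.radians(fwhm_deg)  # the width must be in radians

d = K * wavelength_nm / (beta * math.cos(theta))
print(f"mean crystallite size ≈ {d:.0f} nm")  # ≈ 16 nm
```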

But that's not all. For example, a charge can be placed on a nanoparticle moving through the air; an electrostatic field is then switched on, and from how strongly the particle is deflected one can calculate its aerodynamic size (the friction force against the air depends on particle size). The size of nanoparticles is determined in a similar way in the already mentioned dynamic light scattering, only there it is the velocity of Brownian motion that is analyzed, and indirectly at that, from fluctuations in the scattered light; what comes out is the hydrodynamic diameter of the particle. And there is more than one such "clever" method.
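
The last step of dynamic light scattering can be made concrete: once the diffusion coefficient has been extracted from the fluctuations of the scattered light, the Stokes-Einstein relation converts it into a hydrodynamic diameter. The diffusion coefficient, temperature, and viscosity below are assumed values:

```python
import math

kB = 1.381e-23   # Boltzmann constant, J/K
T = 298.0        # temperature, K (assumed)
eta = 0.89e-3    # viscosity of water at 25 °C, Pa*s
D = 4.3e-11      # diffusion coefficient from DLS, m^2/s (assumed)

# Stokes-Einstein relation: d = kB*T / (3*pi*eta*D)
d = kB * T / (3 * math.pi * eta * D)
print(f"hydrodynamic diameter ≈ {d * 1e9:.1f} nm")  # ≈ 11 nm
```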

This abundance of methods that all seem to measure the same thing - size - has one interesting consequence: the measured size of one and the same nanoobject often differs from method to method, sometimes severalfold.

What size is correct?

It's time to recall ordinary metrology here: a measurement result is defined not only by the measured value but also by the measurement accuracy and the method by which the measurement was carried out. Accordingly, the differences in the results can be explained both by different accuracy and by the different nature of the measured quantities. The thesis that different sizes of the same nanoparticle have different natures may seem wild, but it is so. The size of a nanoparticle in terms of its behavior in an aqueous dispersion is not the same as its size in terms of gas adsorption on its surface, and not the same as its size in terms of interaction with an electron beam in a microscope. Not to mention that for statistical methods one cannot speak of a definite size at all, only of a value that characterizes size. Yet despite these differences (or even because of them), all these results can be considered equally true; they simply say slightly different things, looking from different angles. The results can be compared only in terms of how adequate it is to rely on them in a given situation: to predict the behavior of a nanoparticle in a liquid, it is more adequate to use the hydrodynamic diameter, and so on.

All of the above is true for conventional metrology, and indeed for any record of facts, but this is often overlooked. One might say that there are no facts more true or less true, more or less consistent with reality (except, perhaps, forgeries); there are only facts that are more or less adequate for use in a given situation, and interpretations of them that are more or less correct. Philosophers have understood this well since the days of positivism: any fact is theory-laden.