Electricity and magnetism#
“Science is an ongoing process. It never ends. There is no single ultimate truth to be achieved…because this is so, the world is far more interesting.”
Carl Sagan
Physics as we know it today may have started with Newton, but it most certainly did not end with him. Newton’s contemporaries (and sometimes rivals) continued to build on his work. In the following centuries, advances in physics resulted in successful theories explaining everything from sound, to mechanical vibrations, to heat flow, to even the tides. But among the longest-lasting - and most successful - achievements of classical physics is the classical theory of electricity and magnetism.
Electrostatic force#
The classical theory of electricity and magnetism concerns the behavior of charged objects. Objects are charged due to an imbalance in their number of protons and electrons. As protons are immobile, the movement of electrons causes changes in charge. Objects gaining electrons become negatively charged, and objects losing electrons become positively charged. We measure charge using units of coulombs and typically use the symbol \(q\) for charge.
Like charges repel, and opposite charges attract, causing a force between any two charges placed together. The magnitude of this attraction (or repulsion) can be found from Coulomb’s force law, expressed as follows:
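$$
F_e = \frac{k q_1 q_2}{r^2}
$$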
Here, \(r\) is the separation between the charges, and \(k\) is the Coulomb constant, equal to about \(8.99 \times 10^9\ \mathrm{N \cdot m^2 / C^2}\). It is common to call \(\vec F_e\) the electrostatic force - here, we’ll use the term electric force, which is more intuitive (though less accurate). Electrostatic means that the charges are moving so slowly that we may consider them essentially stationary (static). This approximation holds in many cases, even when charges are technically moving, as long as they move relatively slowly.
Note
It is common to write \(k = \dfrac{1}{4\pi \varepsilon_0}\) in equations involving electrostatics, where \(\varepsilon_0\) is the electric constant, but here we have chosen to just use \(k\) for simplicity.
The vector form of Coulomb’s force law is found by taking the scalar form and adding a unit vector \(\hat r_{12}\) pointing between the two objects. This, however, is not as simple as it may seem, because a force must be the action of one object on another object. Thus the force of charge \(q_1\) acting on \(q_2\), which is a vector, is not the same vector as the force of charge \(q_2\) acting on \(q_1\) (in fact they are opposite in direction, by Newton’s third law). Therefore, we must define two different forces, \(\vec F_{12}\) for the force exerted by charge \(q_1\) on charge \(q_2\), and \(\vec F_{21}\) for the force exerted by charge \(q_2\) on charge \(q_1\). They are written as follows:
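$$
\vec F_{12} = \frac{k q_1 q_2}{r^2}\, \hat r_{12}, \qquad \vec F_{21} = \frac{k q_1 q_2}{r^2}\, \hat r_{21}
$$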
Here, \(\hat r_{12}\) is the unit vector pointing from \(q_1\) to \(q_2\), and similarly, \(\hat r_{21}\) is the unit vector pointing from \(q_2\) to \(q_1\):
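$$
\hat r_{12} = \frac{\vec r_2 - \vec r_1}{|\vec r_2 - \vec r_1|}, \qquad \hat r_{21} = \frac{\vec r_1 - \vec r_2}{|\vec r_1 - \vec r_2|}
$$

Where \(\vec r_1\) and \(\vec r_2\) are the position vectors of charges \(q_1\) and \(q_2\), respectively.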
A particularly nice quality about Coulomb’s force law - and the whole of electricity and magnetism in general - is that electric forces (and as we’ll see later on, magnetic forces) obey the superposition principle. This means that the combined electric (or magnetic) force is just a sum of the individual forces pointing between each of the charges. This also means that the differential equations in the theory of electricity and magnetism are all linear differential equations, and two solutions can be added together to find a new solution without any extra work. This will be a very important and useful fact later on.
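For instance, the net electric force on a single charge exerted by \(N\) other charges is simply the vector sum of the \(N\) individual Coulomb forces:

$$
\vec F_\text{net} = \sum_{i=1}^N \vec F_i
$$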
Electric field#
We recall that Coulomb’s force law, like any force, is subject to Newton’s second law, and thus results in a differential equation that can be solved to find the trajectories of each of the two charges. In the case of two charges \(q_1, q_2\) interacting, the differential equations are:
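$$
m_1 \frac{d^2 \vec r_1}{dt^2} = \frac{k q_1 q_2}{|\vec r_1 - \vec r_2|^2}\, \hat r_{21}, \qquad m_2 \frac{d^2 \vec r_2}{dt^2} = \frac{k q_1 q_2}{|\vec r_1 - \vec r_2|^2}\, \hat r_{12}
$$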
However, if there are more than two charges interacting, the forces between all the charges must be accounted for, meaning that the differential equations grow extremely long and become solvable only by computer. For this reason, while Coulomb’s force law is sometimes useful, a field formulation is the far more preferred method of mathematically modelling the interactions of charges.
Recall that fields (in physics) denote quantities that are continuously spread out across all of space, such as the gravitational field and gravitational potential (potential field). The electric field is a vector field produced by a charge and extends throughout space. In the field formulation, instead of a charge directly exerting a force on other charges, it is the electric field of the charge that exerts the force. Every other charge also produces an electric field that exerts a force on all the other charges except itself. Each of these electric fields, which we’ve considered separate up to this point, is really just a label for part of one electric field that carries forces between all charges. Every charge produces part of the electric field, and the electric field exerts a force on every charge, determining the trajectories of each charge. In words inspired by the physicist John Archibald Wheeler, we may summarize that:
Charges make the electric field change, the electric field makes charges move.
While technically there is no distinction between the electric fields of different charges - they are all part of the same singular electric field - it is mathematically convenient to speak of electric fields specific to a single charged object. For instance, the electric field produced by a single point charge \(Q\) located at the origin is given by:
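$$
\vec E(\vec r) = \frac{kQ}{r^2}\, \hat r
$$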
This electric field exerts an electric force \(\vec F_e\) on any other charge \(q\), given by:
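$$
\vec F_e = q \vec E
$$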
Note
Be sure to remember that the \(Q\) here is of the charge creating the electric field, and \(q\) is of the charge that feels the force from the electric field.
Since the electric field is a vector field - more precisely, a force field - we can visualize it with vector plots. Using the example of our point charge, the electric field vectors extend outwards if the point charge is positive, and inwards if the point charge is negative:
When the electric field is created by two charges of opposite sign, we call it a dipole. There exist electric fields produced by more than two charges, right up to arrangements of uncountably many charges. In any case with more than one charge, we must superimpose (sum) the individual electric fields from each charge, resulting in an electric field formed by a superposition of charges:
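$$
\vec E(\vec r) = \sum_{i=1}^N \frac{k Q_i}{|\vec r - \vec r_i|^2}\, \hat r_i
$$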
Here, \(\vec r_i\) is the position of a charge \(Q_i\) in the collection of charges, and \(\hat r_i\) is the unit vector pointing from \(\vec r_i\) to \(\vec r\) (this direction can be somewhat confusing: draw these vectors out on paper to see why it’s the case).
Note
Unless otherwise specified, we always use \(\vec r = \langle x, y, z\rangle\) as the position vector pointing from the origin to point \((x, y, z)\) in space.
This formula may be slightly clearer if we rewrite it explicitly in terms of coordinate (rather than vector) form:
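$$
\vec E(x, y, z) = \sum_{i=1}^N \frac{k Q_i \left[(x - x_i)\hat i + (y - y_i)\hat j + (z - z_i)\hat k\right]}{\left[(x - x_i)^2 + (y - y_i)^2 + (z - z_i)^2\right]^{3/2}}
$$

Where \((x_i, y_i, z_i)\) are the coordinates of charge \(Q_i\).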
If we want to continue this process by considering a collection of ever-smaller charges, we can find the electric field of a continuous distribution of charges by integration. To do so, we shrink the charges \(Q_i\) to infinitesimal charges \(dq'\), then integrate over every point \(\vec r' = (x', y', z')\) within the charge-containing region. The charge distribution can be assumed to be continuous as the charges become vanishingly small, so we may define a charge density function \(\rho(\vec r')\), where \(dq' = \rho(\vec r')\, dV'\) is the infinitesimal amount of charge contained in a tiny region of space \(dV'\) within the charge-containing region. The electric field produced by the entire distribution of charges is then given by:
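$$
\vec E(\vec r) = k \int \frac{\rho(\vec r')}{|\vec r - \vec r'|^2}\, \hat r'\, dV'
$$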
Where \(\hat r' = \dfrac{\vec r - \vec r'}{|\vec r - \vec r'|}\) is the unit vector pointing from a point \(\vec r' = (x', y', z')\) within the charge-containing region to point \(\vec r = (x, y, z)\). All the primes in the integral (e.g. \(\vec r', \hat r', dV'\)) indicate points within the charge-containing region, as we integrate over every point within that region, whereas all the unprimed coordinates (e.g. \(\vec r\)) indicate a point in space (which can be outside the charge-containing region). We may also write this explicitly in Cartesian coordinates as follows:
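$$
\vec E(x, y, z) = k \iiint \frac{\rho(x', y', z') \left[(x - x')\hat i + (y - y')\hat j + (z - z')\hat k\right]}{\left[(x - x')^2 + (y - y')^2 + (z - z')^2\right]^{3/2}}\, dx'\, dy'\, dz'
$$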
This is the integral form of Coulomb’s law for the electric field, and does give the right expression for the electric field, at least when the electrostatic approximation holds. It is (understandably) rather tedious to solve and often can only be solved by computer.
Note that the integral form of Coulomb’s law has three specific cases depending on whether the charge density is a linear charge density \(\lambda(\vec r')\), surface charge density \(\sigma(\vec r')\), or volume charge density \(\rho(\vec r')\).
For linear charge distributions, we have \(dq' = \lambda(\vec r')\, dr'\), where \(dr'\) is the line element of the linear charge distribution to integrate over (e.g. a charged rod, loop of wire, etc.). This means the integral becomes a line integral between the endpoints of the linear charge distribution (e.g. between the ends of a wire, 360 degrees around a circular loop, along the path of a charged helix, etc.):
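$$
\vec E(\vec r) = k \int_C \frac{\lambda(\vec r')}{|\vec r - \vec r'|^2}\, \hat r'\, dr'
$$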
Again, we may choose to work in component form by writing \(\vec E = E_x \hat i + E_y \hat j + E_z \hat k\), for which we have:
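$$
\begin{align}
E_x &= k \int_C \frac{\lambda(x', y', z')\,(x - x')}{\left[(x - x')^2 + (y - y')^2 + (z - z')^2\right]^{3/2}}\, dr' \\
E_y &= k \int_C \frac{\lambda(x', y', z')\,(y - y')}{\left[(x - x')^2 + (y - y')^2 + (z - z')^2\right]^{3/2}}\, dr' \\
E_z &= k \int_C \frac{\lambda(x', y', z')\,(z - z')}{\left[(x - x')^2 + (y - y')^2 + (z - z')^2\right]^{3/2}}\, dr'
\end{align}
$$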
Note that \(\lambda(x', y', z')\) describes the charge density along a line of charge, not at every point in space. It may be more helpful to think of \(x', y', z'\) as the parametric curve given by:
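$$
x' = x'(s), \qquad y' = y'(s), \qquad z' = z'(s)
$$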
where \(s\) is a parameter. Parametrization is not needed for actually doing calculations in the integral; this is just a way of building intuition.
Given its level of complexity, the component form of Coulomb’s law for the electric field is only good for situations where the (possibly curved) line of charge is aligned along one axis, or when doing computer-based calculations. However, it illustrates several easy-to-miss aspects of Coulomb’s law. First, all three components of the electric field are integrated over the same region. This means that, for instance, if we consider a line of charge purely along the \(x\)-axis, then \(E_x, E_y, E_z\) are all integrated over \(\lambda(x')dx'\), even though they are components of the electric field along different directions. Second, the magnitude of the displacement vector is the same regardless of whether we compute \(E_x\), \(E_y\), or \(E_z\). This is why each of the integrals has the same \([(x - x')^2 + (y - y')^2 + (z - z')^2]^{3/2}\) term in the denominator, even though they are different components of the electric field. While we will now examine charge distributions that are not along a line (or curve), these two properties will still hold.
For surface charge distributions (e.g. plane of charge, disk of charge, etc.) we have \(dq' = \sigma(\vec r') dA' = \sigma dx' dy'\) where \(dA'\) is the surface element of the surface charge distribution to integrate over. Therefore, Coulomb’s law becomes a surface integral over every patch of charge \(dq'\) across every patch of surface \(dA'\):
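$$
\vec E(\vec r) = k \iint_S \frac{\sigma(\vec r')}{|\vec r - \vec r'|^2}\, \hat r'\, dA'
$$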
For volume charge distributions (e.g. solid sphere of charge, shell of charge, block of charge), we have \(dq' = \rho dV' = \rho(\vec r')\, dx'\, dy'\, dz'\) where \(dV'\) is the volume element of the volume charge distribution to integrate over. Therefore, Coulomb’s law becomes a volume integral over every infinitesimal volume \(dV'\) containing charge \(dq'\):
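$$
\vec E(\vec r) = k \iiint_V \frac{\rho(\vec r')}{|\vec r - \vec r'|^2}\, \hat r'\, dV'
$$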
Note
It is important to remember that in all cases of applying Coulomb’s law, the integrals are always done with respect to the primed coordinates. That is, we integrate over \(x', y', z'\), not \(x, y, z\). Each point \((x', y', z')\) represents a point within the charge distribution, and we integrate over the contribution to the electric field from all the points within the charge distribution to be able to find the total electric field. In fact, you should consider any \(x\), \(y\), or \(z\) appearing in the integral as constants, as we integrate over \(x', y', z'\) instead of \(x, y, z\).
It is common to say that applying Coulomb’s law is finding the electric field by brute force - understandably, given its tediousness. But there is a more elegant way of computing the electric field. Suppose we analyze a bounded region of space within an electric field. For instance, this could be the spherical region around a point charge, as shown in the figure below:
We could model this spherical region as a sphere (unsurprisingly). Then the region would be defined by a sphere of radius \(r\), which contains a total volume of:
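$$
V = \frac{4}{3} \pi r^3
$$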
Which is just the volume of a sphere. Now, we can describe the bounds of this region as the edge of the sphere. The edge of the sphere is simply its surface, and the surface area of the sphere is given by:
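$$
A = 4\pi r^2
$$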
If we divide the surface of the sphere into tiny “patches” of surface, each of size \(dA\), then we can integrate the electric field over the entire surface of the sphere to get the total electric field “passing” the boundary of the sphere. We call this the electric flux, symbol \(\Phi_E\), and we can write it as an integral (the circle means “closed boundary”, i.e. no holes in the boundary), like this:
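$$
\Phi_E = \oint_S \vec E \cdot d\vec A
$$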
We can illustrate the flux across the spherical boundary (the technical term is Gaussian surface) of our spherical region as follows:
Note
The reason this is a dot product rather than normal multiplication is that the amount of electric field “passing” the boundary of our spherical region also depends on the angle between the electric field and the surface. In this case, the electric field vectors are perfectly perpendicular to the surface (that is, parallel to the surface normal), so the dot product reduces to an ordinary product of magnitudes, but this is not always the case.
Through a mathematical law called Gauss’s law, the total amount of flux is equal to the total charge within the region, divided by a constant:
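$$
\Phi_E = \oint_S \vec E \cdot d\vec A = \frac{Q_\text{enc}}{\varepsilon_0}
$$

Where \(Q_\text{enc}\) is the total charge enclosed within the region.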
This leads to a crucial result: the electric field passing through the bounds of our spherical region is determined by the total amount of charge within the spherical region’s volume. Thus, if we know the total charge within the spherical region, then we can figure out the electric field at its boundary! In addition, Gauss’s law can also be converted (through some vector calculus identities) into a partial differential equation. We call it the differential form of Gauss’s law, and it takes the following form:
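$$
\nabla \cdot \vec E = \frac{\rho}{\varepsilon_0}
$$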
While not as easy to work with, solutions to the differential form of Gauss’s law are perfectly equivalent to the integral version, and also give you the electric field.
Electrical potential energy and electric potential#
The electrical potential energy is the energy stored in a collection of charges, and we denote it by \(U_E\). For two charges \(q_1, q_2\), the electric potential energy stored within the system of charges is given by:
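$$
U_E = \frac{k q_1 q_2}{r}
$$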
The electric potential energy arises as a consequence of the electric force. When the electric force is attractive, charges are bound together, and it takes energy to “knock” a charge out of position. An example of this is within an atom, where the positively-charged protons and the negatively-charged electrons are held together by an attractive force, preventing them from flying away from each other. In such cases, the electric potential energy of the system is negative, meaning that the system is stable (physics-wise) and an amount of energy equal in magnitude to the electric potential energy must be put in to break apart the system. The converse is true if the electric force is repulsive. In this case, no energy is required to “knock” the like charges out of position; in fact, it would take energy to keep the like charges in place. So the electric potential energy of the system is positive, meaning that the system releases energy - typically by converting its potential energy to some other form of energy - as the charges push apart, and energy must continually be put in to keep the system together.
But just as we found that electric fields are mathematically more useful than simply analyzing electric forces, there exists a more useful and elegant way to formulate electric potential energy. Instead of considering the electric potential energy of a collection of charges, we examine just a single charge \(Q\), and ask what the electric potential energy would theoretically be if we chose to place another charge \(q\) at some position \(\vec r\) to form a two-charge system. We may then write the electric potential energy as \(U_E = q V\), where \(V(r)\) is the electric potential:
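$$
V(r) = \frac{kQ}{r}
$$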
The electric potential describes what the electric potential energy, per unit charge, would be if we placed a charge at any location in space. Unlike the electric potential energy, it is a function of position, which means that it is not a number; it is a field. More specifically, it is a scalar field that is closely related to the electric field. But first, it may be helpful to have a more intuitive picture.
The electric potential can be thought of as a sort of electrical pressure. Just as differences in water pressure cause the fluid motion of the water molecules within the water, voltage causes the motion of charges placed in an electric field. One may say that the electric potential “pushes” (positive) charges from regions of higher potential to regions of lower potential - outwards if the electric potential is positive, and inwards if the electric potential is negative (the opposite is true for negative charges). The strength of this “push” depends on the difference in electric potential between two points; if two points have nearly the same electric potential, the “push” is very weak, but if two points have a great difference in electric potential, the “push” can be very strong. But a distribution of charges creates an electric field! Thus the electric potential’s variation in space is also the source of the electric field, which we may write mathematically as:
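$$
\vec E = -\nabla V
$$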
Voltage, also called potential difference, is the difference in potential between two points, \(V(b) - V(a)\), just as we alluded to earlier. These two points could be one point on a wire in the air and another point on the ground. Another possible pair is a point next to a charge in empty space and a point infinitely far away. The first type of point pair is most useful for calculations that require accuracy, while the second is most useful for calculations where we can assume that the potential becomes weaker and weaker at longer distances from a charge, decaying to zero at infinity.
With all that being said, how do we actually go about calculating the electric potential? There are two primary methods. The first, the sum/integral method, is very similar to that for the electric field. In the discrete case (i.e. individual charges at different locations), it reads:
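$$
V(\vec r) = \sum_{i=1}^N \frac{k Q_i}{|\vec r - \vec r_i|}
$$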
While in the continuous case, it reads:
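$$
V(\vec r) = k \int \frac{\rho(\vec r')}{|\vec r - \vec r'|}\, dV'
$$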
Again, \(\vec r - \vec r_i\) represents the vector pointing from the location \(\vec r_i\) of a given charge to \(\vec r\), and \(\vec r - \vec r'\) represents the vector pointing from a given point \(\vec r'\) in the charge distribution to (the position vector) \(\vec r\).
The second method of computing the electric potential, however, is far more elegant: Poisson’s equation. It reads:
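$$
\nabla^2 V = -\frac{\rho}{\varepsilon_0}
$$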
Poisson’s equation is a partial differential equation (PDE) that can be solved for the electric potential using any number of techniques for solving differential equations. Once solved, the electric field is easily found through \(\vec E = -\nabla V\), and from there, the equations of motion (differential equations describing the trajectories) of any charge in the electric field can be found.
The magnetic field#
Up to this point, we have considered electrostatics, where charges are slow-moving or don’t move at all. But what happens if charges do move? We observe that a new force shows up, one that is distinct from yet strangely similar to the electric force. We call this force the magnetic force.
A magnetic force arises whenever there is a current. A current is a flow of charge; more precisely, we define it as:
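$$
I = \frac{dQ}{dt}
$$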
Where \(I\) is the current, and \(Q\) is the amount of charge passing through a given cross-section of wire at a given time \(t\). Just as we defined \(\rho\) as the charge density, we may also define a current density, denoted \(\vec J\), where \(\vec J\) is given by:
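$$
\vec J = \frac{I}{A}\, \hat I
$$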
Here, \(A\) is the cross-sectional area through which the current flows, and \(\hat I\) is the direction of the motion of positive charges. Why a current density would ever prove useful will be revealed in a few sections.
The crucial other component that makes magnetic forces possible is the magnetic field. The magnetic field is like the electric field in some ways; it is a vector field, it acts on charged objects, and it spans across all space. But it is also different; it is velocity-dependent and vanishes as charges slow to a stop. In addition, there are no magnetic charges - in fact, magnetic charges, also called magnetic monopoles, are forbidden. The closest physically-possible analogue to a “magnetic charge” is a magnetic dipole, consisting of two opposite magnetic poles (like the north and south poles of a bar magnet).
We write the magnetic field as \(\vec B\), and the magnetic field strength is given in units of teslas, shorthand \(\text{T}\). The magnetic force can then be written in one of two ways:
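$$
\vec F_B = q \vec v \times \vec B, \qquad \vec F_B = \int I\, \vec{d\ell} \times \vec B
$$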
These two forms describe the magnetic force exerted on 1) a moving point charge \(q\) and 2) a current-carrying segment of wire with current \(I\). Note that both expressions use a cross product: this is because the magnetic force is always perpendicular to the direction of the moving charges.
Note
What about the magnetic fields and forces generated by permanent magnets like bar magnets or fridge magnets? The answer is that while they can be macroscopically-modelled with classical theory, the full explanation for why they remain magnetized even without moving charges requires special relativity and quantum mechanics.
How do we compute the magnetic field, though? As before, there are several different options. But first, let’s cover an option that you might think would work, but doesn’t. Perhaps you would think that since there is a Gauss’s law for the electric field, there would also be one for the magnetic field. Indeed, there is one, but it is rather disappointing:
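$$
\oint_S \vec B \cdot d\vec A = 0
$$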
Which leaves much to be desired. The formal reason why Gauss’s law for magnetic fields takes this form, however, is important conceptually. Remember how we said that there are no magnetic charges? But the right-hand side of Gauss’s law for electric fields is the total charge enclosed within a region of space. Since a magnetic charge is not defined, the right-hand side of Gauss’s law for magnetic fields has to be zero - after all, there are no charges!
In addition, the differential version, \(\nabla \cdot \vec B = 0\), also tells us that magnetic fields always come in loops - the vectors follow looping field lines that ultimately trace back to the point they started from - which means that the magnetic field has no net divergence.
So instead of Gauss’s law for magnetism, we use an analogue of Coulomb’s law for the electric field. This is the Biot-Savart law. In the discrete case, for charges located at distinct points in space, it takes the form:
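$$
\vec B(\vec r) = \frac{\mu_0}{4\pi} \sum_{i=1}^N \frac{q_i\, \vec v_i \times \hat r_i}{|\vec r - \vec r_i|^2}
$$

Where \(\vec v_i\) is the velocity of charge \(q_i\), and \(\hat r_i\) is the unit vector pointing from the charge’s position \(\vec r_i\) to \(\vec r\).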
In the continuous case, where we consider many, many moving charges that form a continuous current, Biot-Savart’s law becomes an integral:
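$$
\vec B(\vec r) = \frac{\mu_0 I}{4\pi} \int \frac{\vec{d\ell} \times (\vec r - \vec r')}{|\vec r - \vec r'|^3}
$$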
Where \(\vec{d\ell}\) is a current-carrying path segment (such as a segment of wire), \(\vec r'\) is a point along the current-carrying path, \(\vec r\) is the position vector, and \(\mu_0\) is the magnetic constant. This integral’s particular form means that Biot-Savart’s law is a (vector) line integral over the current-carrying path (which is usually, though not always, a wire), as we integrate over every portion of that path.
As with Coulomb’s law for the electric field, Biot-Savart’s law is very tedious to work with and can often be only solved by computer. In addition, just like Coulomb’s law for the electric field only works when the electrostatic approximation can be made, Biot-Savart’s law only works when the magnetostatic approximation can be made, meaning that charges move at near-constant velocities and currents change very slowly, if at all.
Note
The magnetostatic and electrostatic assumptions mean that the magnetic fields (in the former) and electric fields (in the latter) do not change - we say they are static. Thus, as long as the assumptions hold, neither field has a dependence on time.
The second method that can be used is very similar to Poisson’s equation for electric fields. If we define a magnetic potential \(\mathbf{A}\) such that \(\mathbf{B} = \nabla \times \mathbf{A}\), the magnetic potential can be solved for by Poisson’s equation for the magnetic field, which reads:
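$$
\nabla^2 \mathbf{A} = -\mu_0 \mathbf{J}
$$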
In Cartesian coordinates, this expands to:
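$$
\begin{align}
\nabla^2 A_x &= -\mu_0 J_x \\
\nabla^2 A_y &= -\mu_0 J_y \\
\nabla^2 A_z &= -\mu_0 J_z
\end{align}
$$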
This method is used more often in advanced physics such as relativity and quantum mechanics, but it generally follows the same approach as using Poisson’s equation for the electric field - solve the (partial) differential equation using any number of methods (and computer if need be), then find the magnetic field by taking its curl, that is, \(\mathbf{B} = \nabla \times \mathbf{A}\). Although elegant, Poisson’s equation for the magnetic field only works in cases when the magnetostatic approximation holds. In many cases, we cannot make this assumption.
The third method for computing the magnetic field, however, works universally even when currents are not constant and charges are changing velocity. It is what we will cover next - Maxwell’s equations.
Electrodynamics and Maxwell’s equations#
It is a well-known fact that light in all its forms (radio, gamma, x-ray, microwave, visible, UV, and more) is an electromagnetic phenomenon, so to be able to describe it, we need to go beyond statics.
Up to this point, we have discussed electrostatics and magnetostatics, but we will now cover electrodynamics - the more general study of electricity and magnetism in cases where electric and magnetic fields change through time. The laws of electrodynamics are given by the system of partial differential equations known as Maxwell’s equations. These equations govern the evolution of the electric and magnetic fields, and they read:
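$$
\begin{align}
\nabla \cdot \mathbf{E} &= \frac{\rho}{\varepsilon_0} \\
\nabla \cdot \mathbf{B} &= 0 \\
\nabla \times \mathbf{E} &= -\frac{\partial \mathbf{B}}{\partial t} \\
\nabla \times \mathbf{B} &= \mu_0 \mathbf{J} + \mu_0 \varepsilon_0 \frac{\partial \mathbf{E}}{\partial t}
\end{align}
$$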
These equations also often appear in integral form, in which they are given as:
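$$
\begin{align}
\oint_S \mathbf{E} \cdot d\mathbf{A} &= \frac{Q_\text{enc}}{\varepsilon_0} \\
\oint_S \mathbf{B} \cdot d\mathbf{A} &= 0 \\
\oint_C \mathbf{E} \cdot \vec{d\ell} &= -\frac{d}{dt} \iint_S \mathbf{B} \cdot d\mathbf{A} \\
\oint_C \mathbf{B} \cdot \vec{d\ell} &= \mu_0 I_\text{enc} + \mu_0 \varepsilon_0 \frac{d}{dt} \iint_S \mathbf{E} \cdot d\mathbf{A}
\end{align}
$$

Where \(Q_\text{enc}\) is the enclosed charge and \(I_\text{enc}\) is the enclosed current.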
The Maxwell equations show a surprising fact: time-varying electric fields can induce magnetic fields, and time-varying magnetic fields can induce electric fields. So rather than being two separate phenomena, electricity and magnetism are interrelated phenomena, caused by the interplay of electric and magnetic fields. Thus, we often group electricity and magnetism together as electromagnetism, and speak of an electromagnetic field as the combination of the electric and magnetic components of the field.
The equation that governs the motion of charged objects in an electromagnetic field is the Lorentz force law, which reads:
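$$
\mathbf{F} = q(\mathbf{E} + \mathbf{v} \times \mathbf{B})
$$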
Solving the Maxwell equations is a topic so extensive that it is its own field. We’ll sketch out one of the most common methods; however, to avoid making this chapter overwhelmingly long, we will have to skip a lot of the details.
The general process of this method is to first set boundary conditions such that the first two equations (Gauss’s laws for the electric and magnetic fields) hold true. Then, we are left with the two remaining equations:
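$$
\begin{align}
\nabla \times \mathbf{E} &= -\frac{\partial \mathbf{B}}{\partial t} \\
\nabla \times \mathbf{B} &= \mu_0 \mathbf{J} + \mu_0 \varepsilon_0 \frac{\partial \mathbf{E}}{\partial t}
\end{align}
$$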
The top equation is called Faraday’s law, and the bottom equation is called the Maxwell-Ampère law. Now, it turns out that by reworking the electric and magnetic potentials, we can substitute them in to find an electromagnetic potential, assuming that:
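$$
\mathbf{B} = \nabla \times \mathbf{A}, \qquad \mathbf{E} = -\nabla V - \frac{\partial \mathbf{A}}{\partial t}
$$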
Then, if we impose something called the Lorenz gauge condition (it’s something we will cover more in the Expert’s Guide), we can substitute the potentials into Maxwell’s equations to get:
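$$
\nabla^2 V - \frac{1}{c^2} \frac{\partial^2 V}{\partial t^2} = -\frac{\rho}{\varepsilon_0}, \qquad \nabla^2 \mathbf{A} - \frac{1}{c^2} \frac{\partial^2 \mathbf{A}}{\partial t^2} = -\mu_0 \mathbf{J}
$$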
These equations are much easier (though by no means easy) to solve and are often used in advanced electromagnetic theory to solve for complicated field configurations. However, we won’t go that far, at least for now. Rather, we’ll explore one simplified case of Maxwell’s equations, and its profound consequences on the physical nature of light.
Note
For more detailed information about Maxwell’s equations, we recommend reading *A Student’s Guide to Maxwell’s Equations*, which is slower-paced compared to the guide here in the Elara Handbook.
Electromagnetic waves#
When considering only electromagnetic waves radiating through space, and not the source currents or charges, Maxwell’s equations can be simplified by setting \(\rho = 0\) and \(\mathbf{J} = 0\), resulting in two wave equations:
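$$
\frac{\partial^2 \mathbf{E}}{\partial t^2} = c^2 \nabla^2 \mathbf{E}, \qquad \frac{\partial^2 \mathbf{B}}{\partial t^2} = c^2 \nabla^2 \mathbf{B}
$$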
Where \(c^2 = \frac{1}{\mu_0 \varepsilon_0}\) and \(c\) is the speed of light. These are called the electromagnetic wave equations because their solutions are very similar to classical wave solutions to the generalized wave equation, such as solutions describing sound waves or the waves formed by a vibrating string. But these waves are special - their speed of propagation is the speed of light. That is to say, the electromagnetic wave equations describe light, and their solutions describe all forms of light, from visible light in all its colors to X-rays to gamma rays to the microwaves and radio waves that carry global communications and internet.
If we take a close look at the electromagnetic wave equations, we see that electromagnetic waves are just a special case of electric and magnetic fields, with the special property that they oscillate throughout space without needing any wires or currents. This is also what makes them ideal for wireless applications, such as WiFi, communications, or in our case, wireless power transfer. They are generated when there are both time-varying electric and magnetic fields.
But what physical configurations can generate electromagnetic waves? Recall from Maxwell equation #3 (Faraday’s law) that a changing magnetic field \(\mathbf{B}(t, \mathbf{x})\) causes (induces) an electric field \(\mathbf{E}_\mathrm{ind}(t, \mathbf{x})\). The induced electric field \(\mathbf{E}_\mathrm{ind}(t, \mathbf{x})\) can then induce a changing magnetic field \(\mathbf{B}_\mathrm{ind}(t, \mathbf{x})\) by Maxwell equation #4 (Ampère-Maxwell law). However, for this cycle to continue, the original magnetic field must satisfy the condition \(\displaystyle \frac{\partial^2 \mathbf{B}}{\partial t^2} \neq 0\). This means it must not be a constant or linear function of time, because otherwise the induced electric field would be constant in time, and a constant electric field induces no further magnetic field by Maxwell equation #4. In mathematical terms, wave solutions require a current (density) that satisfies \(\displaystyle \frac{\partial^2 \mathbf{J}}{\partial t^2} \neq 0\). That is to say, the current (density) must be twice-differentiable in time and neither constant nor linear in time, so it cannot be a steady current. And so we have found our answer: when currents change non-uniformly with time, electromagnetic waves are produced.
The classical method of generating electromagnetic waves is to use some type of oscillating current, especially one that follows a sine or cosine curve (an AC current) - this is the basic operating principle of the antenna. Another method is to pass current through some form of spinning loop within a magnetic field (such as that generated by a magnet); the changing angle of the loop leads to induced electric fields that produce a changing current, generating (albeit weak) electromagnetic waves (typically radio waves, which can be amplified). A third, and the most common, method is to accelerate charges along a path; this principle is used in magnetrons in microwave ovens, where fast-moving (and accelerating) electrons move through a magnetic field, producing (unsurprisingly) microwaves.
But what about the Sun, you may ask, or a light bulb, or a general hot object like a fire or plasma? In these cases, the mechanisms that produce light are quantum in nature. We will discuss how that occurs in the next chapter, “The Specifics”.