## Boudewijn van Milligen Home Page

## Areas of research

## Probabilistic transport models

Transport in fusion plasmas is complex, as evidenced by a number of "strange transport phenomena", such as:

- profile resilience (or profile stiffness), i.e. the small modification of the observed density and temperature profiles when the particle and heat sources are varied by a large amount;
- power degradation, i.e. the reduction of the particle and heat confinement times when the sources are increased (for simple diffusion one would expect the confinement times to be independent of the sources);
- uphill transport, i.e. under specific circumstances transport can go in the direction 'up the gradient' (quite contrary to what one would expect for simple diffusion);
- fast transport phenomena, e.g. when a small perturbation is generated at the edge of the plasma, the core may react almost instantaneously.
## Analysis of turbulence

The analysis of turbulence in fusion plasmas is at least an order of magnitude more difficult than the analysis of turbulence in fluids, for various reasons: (a) the inaccessibility of the system: due to the hostile conditions in the hot plasma, it is difficult if not impossible to insert probes; (b) the complexity of the system: due to its ionized nature and the presence of a strong, topologically complex magnetic field that interacts with the plasma, the system possesses more degrees of freedom than a neutral fluid.

For these reasons, the available measurements are usually point measurements, made by inserting probes into the relatively cold plasma edge, or line-integral measurements, made by shining e.g. laser beams through the central plasma. Measurements in the edge plasma have the disadvantage of being strongly influenced by the vessel wall, which exerts a profound influence on the plasma and the turbulence (e.g. through neutrals). Only recently have techniques been developed to perform point measurements in the plasma core. Line-integral measurements, on the other hand, average out much of the interesting turbulent behaviour.

Our work on turbulence has focussed mainly on the analysis of edge probe data, although some analysis was done on reflectometry signals. A large effort was devoted to the development of new analysis techniques, since the available tools were not satisfactory. In particular, it is well known that turbulence may be intermittent and involve non-linear interactions. Fourier spectral analysis cannot handle signals whose spectral "fingerprint" changes with time, since it assumes a stationary situation, and therefore "blurs" the available information. Wavelet analysis, a technique for resolving spectral characteristics with time resolution, is much better adapted to this kind of problem. Non-linear interactions can be detected by means of higher-order spectra (e.g. quadratic interactions can be detected through the bi-spectrum). With Fourier analysis, however, very long time series are needed to achieve statistically significant values of the bi-spectrum, which has largely precluded its use in fields like plasma turbulence, where long steady-state data series are generally unavailable. In our work, for the first time, the bicoherence was calculated using wavelet transforms, making the detection of non-linear interactions with time resolution possible [J7, J8, J10, J17].

The shape of the autocorrelation function (ACF) of turbulent signals reveals some of the properties of the mechanisms underlying the generation of the turbulence. Unfortunately, the most revealing information is contained in the tail of the ACF (i.e. well beyond the correlation time), where statistics are generally poor. In particular, the ability to discern between an algebraic and an exponential decay of the ACF at large lags would indicate whether recently proposed Self-Organized Criticality (SOC) models could be appropriate descriptions of the turbulence. These models predict transport by avalanches, which would generate self-similar behaviour of the turbulent data in space and time, and thus lead to the mentioned algebraic decay. Such possible self-similarity can be quantified by the Rescaled-Range analysis technique and the Hurst exponent. We show that this type of analysis is far more robust against random noise perturbations than the direct determination of the ACF or the Probability of Return. The analysis of Langmuir probe data taken at the plasma edge in a wide variety of fusion devices reveals the existence of self-similar behaviour or long-range correlations in all devices studied. The observed values of the Hurst exponent in the plasma edge (H > 0.62) differ significantly and repeatedly from the value corresponding to random noise (H = 0.5), supporting the possible relevance of such models.

Main points:
- Wavelet analysis
- Bicoherence
- Chaos analysis techniques
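To make the wavelet bicoherence described above concrete, the following is a minimal numpy sketch, not the actual analysis code behind [J7, J8]: a simple FFT-based Morlet wavelet transform, the normalized squared bicoherence at a single frequency pair, and an invented test signal containing a quadratically coupled mode triplet. The Morlet parameters, frequencies, and signal are illustrative assumptions.

```python
import numpy as np

def morlet_cwt(x, freqs, fs, w0=6.0):
    """Continuous wavelet transform of x at the given frequencies,
    using an analytic Morlet mother wavelet (FFT-based convolution)."""
    n = len(x)
    X = np.fft.fft(x)
    omega = 2 * np.pi * np.fft.fftfreq(n, d=1.0 / fs)
    W = np.empty((len(freqs), n), dtype=complex)
    for i, f in enumerate(freqs):
        s = w0 / (2 * np.pi * f)            # scale giving centre frequency f
        psi_hat = np.pi ** -0.25 * np.exp(-0.5 * (s * omega - w0) ** 2)
        psi_hat[omega <= 0] = 0.0           # analytic: positive frequencies only
        W[i] = np.fft.ifft(X * psi_hat)
    return W

def wavelet_bicoherence(x, f1, f2, fs, **kw):
    """Squared wavelet bicoherence b^2(f1, f2) in [0, 1]: time-averaged
    quadratic phase coupling between f1, f2 and the sum frequency f1 + f2."""
    w1, w2, w3 = morlet_cwt(x, np.array([f1, f2, f1 + f2]), fs, **kw)
    num = np.abs(np.sum(w1 * w2 * np.conj(w3))) ** 2
    den = np.sum(np.abs(w1 * w2) ** 2) * np.sum(np.abs(w3) ** 2)
    return num / den

# Test signal: two modes plus their quadratic (product) interaction and noise.
fs = 1000.0
t = np.arange(4096) / fs
rng = np.random.default_rng(0)
s = (np.sin(2 * np.pi * 80 * t) + np.sin(2 * np.pi * 120 * t)
     + 0.5 * np.sin(2 * np.pi * 80 * t) * np.sin(2 * np.pi * 120 * t)
     + 0.5 * rng.standard_normal(t.size))
print(wavelet_bicoherence(s, 80.0, 120.0, fs))                            # high: coupled triplet
print(wavelet_bicoherence(rng.standard_normal(t.size), 80.0, 120.0, fs))  # low: no coupling
```

Because the statistic is averaged over the wavelet coefficients in time rather than over independent Fourier realizations, a single moderately long record suffices, which is the point made in the text.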
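The Rescaled-Range analysis mentioned above can be sketched in a few lines of numpy; the window lengths and test series here are illustrative, not the published analysis. The Hurst exponent H is estimated from the scaling R/S ~ lag^H, with H = 0.5 for uncorrelated noise and H > 0.5 indicating long-range correlations.

```python
import numpy as np

def rescaled_range(x, lags):
    """Average rescaled-range (R/S) statistic of x for each window length."""
    rs = []
    for n in lags:
        vals = []
        for start in range(0, len(x) - n + 1, n):
            w = x[start:start + n]
            z = np.cumsum(w - w.mean())      # cumulative deviations from the mean
            r = z.max() - z.min()            # range of the cumulative deviations
            sd = w.std()
            if sd > 0:
                vals.append(r / sd)
        rs.append(np.mean(vals))
    return np.array(rs)

def hurst_exponent(x, lags=None):
    """Estimate H as the slope of log(R/S) versus log(lag)."""
    x = np.asarray(x, float)
    if lags is None:
        lags = np.unique(np.logspace(1, np.log10(len(x) // 4), 12).astype(int))
    rs = rescaled_range(x, lags)
    H, _ = np.polyfit(np.log(lags), np.log(rs), 1)
    return H

rng = np.random.default_rng(1)
white = rng.standard_normal(20000)
print(hurst_exponent(white))   # near 0.5 (R/S has a known mild upward bias for finite series)
```

Applied directly to a strongly persistent series such as a random walk, the same estimator returns values near 1, illustrating why it separates self-similar signals from uncorrelated noise more robustly than the ACF tail.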
## Equilibrium calculations and magnetic field analysis

The calculation of ideal MHD equilibria is an important tool both for theoretical studies and for data analysis in thermonuclear fusion. While the calculation of tokamak (toroidally symmetric) equilibria is well developed, to the point that such calculations can be carried out with very modest CPU time requirements, the calculation of stellarator equilibria is intrinsically so complex that it still requires such large amounts of CPU time that on-line experimental analysis does not seem feasible.
Main points:
- Neural Network-based differential equation and MHD equilibrium solvers
- Toroidal Harmonics (multipolar moments)
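As a toy illustration of the neural-network differential equation solvers listed above, the following sketch solves y' = -y, y(0) = 1 with a trial solution y(x) = 1 + x N(x) that satisfies the boundary condition by construction, where N is a one-hidden-layer tanh network whose parameters minimize the squared ODE residual at collocation points. The equation, network size, and optimizer are assumptions for illustration only; the actual MHD equilibrium solvers are far more involved.

```python
import numpy as np
from scipy.optimize import minimize

H = 8                                  # hidden units (illustrative choice)
x = np.linspace(0.0, 1.0, 25)          # collocation points

def unpack(p):
    return p[:H], p[H:2 * H], p[2 * H:3 * H], p[3 * H]

def net(p, x):
    """N(x) and its derivative N'(x), both analytic for a tanh network."""
    w1, b1, w2, b2 = unpack(p)
    a = np.tanh(np.outer(x, w1) + b1)          # (npoints, H)
    n = a @ w2 + b2
    dn = ((1.0 - a ** 2) * w1) @ w2            # chain rule through tanh
    return n, dn

def residual(p):
    """Squared ODE residual of the trial solution y = 1 + x N(x),
    i.e. (y' + y)^2 summed over the collocation points."""
    n, dn = net(p, x)
    y = 1.0 + x * n
    dy = n + x * dn
    return np.sum((dy + y) ** 2)

rng = np.random.default_rng(0)
p0 = 0.1 * rng.standard_normal(3 * H + 1)
p = minimize(residual, p0, method="BFGS").x

n1, _ = net(p, np.array([1.0]))
print(1.0 + n1[0], np.exp(-1.0))   # trained solution at x = 1 vs exact e^-1
```

The payoff noted in the text carries over: the trained representation is compact, cheap to evaluate anywhere in the domain, and its derivatives are available analytically rather than by numerical differentiation.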
## Analysis of MHD signals (islands)
## Fast reconstruction techniques

For the analysis and control of discharges in nuclear fusion devices, techniques are required that provide fast (millisecond-timescale) information about the plasma conditions. The required speed does not allow for thorough on-line equilibrium calculations. One possible solution is the calculation of approximate quantities (e.g. plasma position and shape) using analytic approximations, but often more complex derived quantities are also required for which this is not a practical option, and more accuracy may be needed than an estimate "by hand" can provide. Therefore, the most practical solution is the following: compute the theoretical plasma equilibrium for a (large) number of cases, trying to cover the whole parameter space liable to be encountered in the experiments. All required quantities and parameters can be calculated from these theoretical equilibria with high accuracy, and the measurements that will be made in the experiment can be simulated. Then, by means of a statistical analysis, direct relationships can be retrieved between the simulated measurements and the plasma parameters. These relationships, which are easy and fast to evaluate, can then be applied to experimental data. Two techniques have been used with success for this purpose: function parametrization [J3, J2] recovers the required relationships by means of a polynomial fit to the simulations, while the neural network method simply takes the simulations as its "training" data set and the experimental data as its "testing" data set.

Neural networks have also been applied to the fast recovery of a single plasma equilibrium. In stellarators, the calculation of an equilibrium yields a description, e.g. in Fourier modes, of the magnetic flux or the field. The number of modes required for an accurate representation can often be quite large. The representation of the flux or field in terms of a neural network turns out to be more compact [J16]. Further, the inversion of the map R(flux), phi(flux), Z(flux) produced by the equilibrium calculations is quite cumbersome, since accuracy requires the definition of a local map on a rather fine grid. By simply training a neural network on the map flux(R, phi, Z), this problem is eliminated, and the inverse map, once obtained, is very fast to evaluate. As a bonus, the spatial derivatives are also available analytically. Thus, e.g. ray tracing codes can be sped up enormously [J12].

Main points:
- Function Parametrization
- Neural Networks
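The database-plus-statistical-fit scheme described above can be sketched with invented stand-ins: a scan of a single "plasma parameter", two simulated "measurements" depending nonlinearly on it, a quadratic polynomial fit from measurements to parameter (the function parametrization step), and its fast evaluation on a new case. None of the quantities correspond to real diagnostics; they are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

# 1) Database: scan the plasma parameter over its expected range and
#    simulate two measurements as nonlinear functions of it (plus noise).
param = rng.uniform(0.5, 2.0, 500)
m1 = np.tanh(param) + 0.01 * rng.standard_normal(500)
m2 = param ** 2 / (1 + param) + 0.01 * rng.standard_normal(500)

# 2) Function parametrization: least-squares fit of a quadratic
#    polynomial in the measurements to the parameter.
M = np.column_stack([np.ones_like(m1), m1, m2, m1 ** 2, m1 * m2, m2 ** 2])
coef, *_ = np.linalg.lstsq(M, param, rcond=None)

# 3) Fast evaluation on a "new shot": one dot product per timestep.
def recover(m1_new, m2_new):
    v = np.array([1.0, m1_new, m2_new,
                  m1_new ** 2, m1_new * m2_new, m2_new ** 2])
    return v @ coef

true = 1.3
est = recover(np.tanh(true), true ** 2 / (1 + true))
print(true, est)   # the fit recovers the parameter closely
```

The expensive equilibrium simulations happen once, off-line; the on-line cost is only the polynomial evaluation, which is what makes millisecond-timescale reconstruction feasible.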
## Publications