Analysing emergent time within an isolated Universe through the application of interactions in the conditional probability approach
- Authors: Bryan, Kate Louise Halse
- Date: 2020
- Subjects: Space and time , Quantum gravity , Quantum theory , Relativity (Physics)
- Language: English
- Type: Thesis , Doctoral , PhD
- Identifier: http://hdl.handle.net/10962/146676 , vital:38547
- Description: Time remains a frequently discussed issue in physics and philosophy. One interpretation of growing popularity is the ‘timeless’ view, which states that our experience of time is only an illusion. The isolated Universe model, provided by the Wheeler-DeWitt equation, supports this interpretation by describing time using clocks in the conditional probability interpretation (CPI). However, the CPI customarily dismisses interaction effects as negligible, creating a blind spot that overlooks their potential influence. Accounting for interactions opens up a new avenue of analysis and a potential challenge to the interpretation of time. To aid our assessment of the impact interaction effects have on the CPI, we present rudimentary definitions of time and its associated concepts. Defined in a minimalist manner, time is argued to require a postulate of causality as a means of accounting for temporal ordering in physical theories. Several such theories are discussed here in terms of their respective approaches to time and, despite their differences, there are indications that their accounts of time are unified in a more fundamental theory. An analytic treatment of the CPI incorporating two different clock choices, together with a qualitative argument, confirms that interactions have a necessary role within the CPI. The consequence of removing interactions is a maximised uncertainty in any measurement of the clock and a restriction to a two-state system, as indicated by the toy models and the qualitative argument respectively. The philosophical implication is that we are not restricted to the timeless view, since including interactions as agents of causal intervention between systems provides an account of time as a real phenomenon. This result highlights the reliance on a postulate of causality, which remains a pressing problem in explaining our experience of time.
- Full Text:
- Date Issued: 2020
A 150 MHz all sky survey with the Precision Array to Probe the Epoch of Reionization
- Authors: Chege, James Kariuki
- Date: 2020
- Subjects: Epoch of reionization -- Research , Astronomy -- Observations , Radio interferometers
- Language: English
- Type: text , Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/117733 , vital:34556
- Description: The Precision Array to Probe the Epoch of Reionization (PAPER) was built to measure the redshifted 21 cm line of hydrogen from cosmic reionization. Such low frequency observations promise to be the best means of understanding the cosmic dawn, when the first galaxies in the universe formed, and the Epoch of Reionization, when the intergalactic medium changed from neutral to ionized. The major challenge to these observations is the presence of astrophysical foregrounds that are much brighter than the cosmological signal. Here, I present an all-sky survey at 150 MHz obtained from the analysis of 300 hours of PAPER observations. Particular focus is given to the calibration and imaging techniques needed to deal with the wide field of view of a non-tracking instrument. The survey covers ~ 7000 square degrees of the southern sky. From a sky area of 4400 square degrees out of the total survey area, I extract a catalogue of sources brighter than 4 Jy, whose accuracy was tested against the published GLEAM catalogue, yielding a fractional difference rms better than 20%. The catalogue provides an accurate all-sky model of the extragalactic foreground to be used for the calibration of future Epoch of Reionization observations, and to be subtracted from the PAPER observations themselves in order to mitigate the foreground contamination.
- Full Text:
- Date Issued: 2020
A study of why some physics concepts in the South African Physical Science curriculum are poorly understood in order to develop a targeted action-research intervention for Newton’s second law
- Authors: Cobbing, Kathleen Margaret
- Date: 2020
- Subjects: Physics -- Study and teaching (Secondary) -- South Africa , Physics -- Examinations, questions, etc. -- South Africa , Motion -- Study and teaching (Secondary) -- South Africa
- Language: English
- Type: text , Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/146903 , vital:38575
- Description: Globally, many students show a poor understanding of concepts in high school physics and lack the necessary problem-solving skills that the course demands. The application of Newton’s second law was found to be particularly problematic through document analysis of South African examination feedback reports, as well as from an analysis of the physics examinations at a pair of well-resourced South African independent schools that follow the Independent Examinations Board curriculum. Through an action-research approach, a resource for use by students was designed and modified to improve students’ understanding of this concept, while modelling problem-solving methods. The resource consisted of brief revision notes, worked examples and scaffolded exercises. The design of the resource was influenced by the theory of cognitive apprenticeship, cognitive load theory and conceptual change theory. One of the aims of the resource was to encourage students to translate between the different representations of a problem situation: symbolic, abstract, model and concrete. The impact of this resource was evaluated at the two schools using a mixed-methods approach, incorporating pre- and post-tests for quantitative assessment, qualitative student evaluations and the analysis of examination scripts. There was an improvement from pre- to post-test for all four iterations of the intervention, and these improvements were shown to be significant. The use of the resource also led to an increase in the quality and quantity of diagrams drawn by students in subsequent assessments.
- Full Text:
- Date Issued: 2020
Thermoluminescence and phototransferred thermoluminescence of synthetic quartz
- Authors: Dawam, Robert Rangmou
- Date: 2020
- Subjects: Thermoluminescence , Quartz
- Language: English
- Type: text , Thesis , Doctoral , PhD
- Identifier: http://hdl.handle.net/10962/145849 , vital:38472
- Description: The main aim of this investigation is to study the thermoluminescence and phototransferred thermoluminescence of synthetic quartz. Thermoluminescence was one of the tools used to characterise the electron trap parameters. Samples of quartz annealed at various temperatures up to 900 °C, as well as unannealed samples, were used. The thermoluminescence glow curve measured at 1 °C s⁻¹ following beta irradiation to 40 Gy from the samples annealed at 500 °C and the unannealed samples consists of a main peak at 70 °C and secondary peaks at 110, 180 and 310 °C. In comparison, the thermoluminescence glow curve for the sample annealed at 900 °C has a main peak at 86 °C and secondary peaks at 170 and 310 °C. Kinetic analysis was carried out only on the main peak in each case. The activation energy was found to decrease with increasing annealing temperature. The samples annealed at 500 °C and the unannealed samples were found to be affected by thermal quenching, while the sample annealed at 900 °C shows an inverse quenching for an irradiation dose of 40 Gy. However, when the dose was reduced to 3 Gy the effects of thermal quenching were manifested. The activation energy of thermal quenching was also found to decrease with increasing annealing temperature. Thermally assisted optically stimulated luminescence measurements were carried out using continuous wave optically stimulated luminescence (CW-OSL). The samples studied were those annealed at 500 °C for 10 minutes, at 900 °C for 10, 30 and 60 minutes, and at 1000 °C for 10 minutes prior to use. The CW-OSL is stimulated using 470 nm blue LEDs at sample temperatures between 30 and 200 °C, and is measured after preheating to either 300 or 500 °C. When the integrated OSL intensity is plotted as a function of measurement temperature, the intensity goes through a peak. The increase in OSL intensity as a function of temperature is associated with thermal assistance and the decrease with thermal quenching. The kinetic parameters were evaluated by fitting the experimental data.
The values of the activation energy of thermal quenching are the same within experimental uncertainty for all the experimental conditions. This shows that annealing temperature, duration of annealing and irradiation dose have a negligible influence on the recombination site of luminescence probed by OSL. Phototransferred thermoluminescence (PTTL) induced from annealed samples using 470 nm blue light was also investigated. The quartz was annealed at 500 °C for 10 minutes, at 900 °C for 10, 30 and 60 minutes, and at 1000 °C for 10 minutes prior to use. The glow curves of conventional TL measured at 1 °C s⁻¹ following irradiation to 200 Gy show six peaks in each case, labelled I-VI for ease of reference, whereas peaks observed under PTTL are referred to as A1 onwards. Only the first three peaks were reproduced under phototransfer for the samples annealed at 900 °C for 60 minutes and at 1000 °C for 10 minutes. Interestingly, for the intermediate annealing duration of 30 minutes, the only peak that appears under phototransfer is A1. For quartz annealed at 900 °C for 10 minutes, the PTTL appears as long as the preheating temperature does not exceed 560 °C; for all other annealing temperatures, PTTL only appears for preheating to 450 °C and below. This shows that the occupancy of deep electron traps at temperatures beyond 450 °C or 560 °C is low. The activation energies for peaks A1, A2 and A3 were calculated. The PTTL peaks were studied for thermal quenching, and peaks A1 and A3 were found to be affected. The activation energies for thermal quenching were determined as 0.62 ± 0.04 eV and 0.65 ± 0.02 eV for peaks A1 and A3 respectively. The experimental dependence of the PTTL intensity on illumination time is modelled using sets of coupled linear differential equations based on systems of donors and acceptors whose number is determined by the preheating temperature.
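The donor-acceptor phototransfer described above can be illustrated with a minimal two-level sketch: deep donor traps release charge under illumination, and a fraction refills the shallow acceptor trap whose population the PTTL peak measures. The rates and the single donor-acceptor pair here are purely illustrative assumptions, not the thesis's fitted system of equations.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative two-level donor-acceptor model for PTTL (hypothetical
# rates; the thesis fits larger systems of coupled linear ODEs).
LAM_D = 0.05   # photoeviction rate from the deep donor trap (s^-1)
F_A = 0.4      # fraction of freed charge captured by the PTTL acceptor

def rhs(t, y):
    n_donor, n_acceptor = y
    freed = LAM_D * n_donor            # charge optically released per second
    return [-freed, F_A * freed]       # acceptor gains a fraction of it

sol = solve_ivp(rhs, (0.0, 200.0), [1.0, 0.0], dense_output=True)
t = np.linspace(0.0, 200.0, 201)
n_d, n_a = sol.sol(t)

# The PTTL intensity tracks the refilled acceptor population: it rises
# with illumination time and saturates once the donor reservoir empties.
print(round(n_a[-1], 3))
```

The saturating growth of the acceptor population with illumination time mirrors the experimental PTTL-versus-illumination curves the model is fitted to.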
- Full Text:
- Date Issued: 2020
Dynamics of stimulated luminescence in natural quartz: Thermoluminescence and phototransferred thermoluminescence
- Authors: Folley, Damilola Esther
- Date: 2020
- Subjects: Thermoluminescence , Quartz
- Language: English
- Type: text , Thesis , Doctoral , PhD
- Identifier: http://hdl.handle.net/10962/146255 , vital:38509
- Description: Natural quartz remains an important mineral of topical interest in luminescence and dosimetry-related research. We investigate the dynamics of stimulated luminescence in this material through thermoluminescence (TL) and phototransferred thermoluminescence (PTTL). Measurements were made on unannealed natural quartz as well as on quartz annealed at 800 and 1000 °C. The samples were annealed for 10 minutes and for 1 hour. The material, in both its unannealed and annealed states, has its main peak between 68 and 72 °C when measured at 1 °C s⁻¹ after a dose of 50 Gy. A study of dosimetric features and a kinetic analysis were carried out on two prominent peaks, peaks I and III, for all the samples. The peaks show a sublinear dose response for irradiation doses between 10 and 300 Gy. Kinetic analysis shows that peak I is a first-order peak and peak III a general-order peak. Interestingly, for the sample annealed at 800 °C for 1 hour, peak I shows an inverse thermal quenching behaviour. We demonstrate that a peak affected by an inverse thermal quenching-like behaviour can still show the effect of thermal quenching when the irradiation dose is significantly reduced. We ascribe this apparent dependence of thermal quenching on dose to competition between radiative and non-radiative transitions at the recombination centre. Peaks I, II and III were reproduced under phototransfer for all the samples when the peaks, initially removed by preheating to a certain temperature, are exposed to 470 and 525 nm light. The influence of the duration of illumination on the PTTL intensity of these peaks, for various preheating temperatures, is modelled using coupled first-order differential equations. The model is based on systems of acceptors and donors whose number and role depend on the preheating temperature.
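The first-order kinetics attributed to peak I can be sketched with the standard Randall-Wilkins glow-curve expression at a 1 °C s⁻¹ heating rate. The trap depth and frequency factor below are illustrative round numbers chosen to place the peak near the observed 68-72 °C range; they are not the fitted thesis values.

```python
import numpy as np

# Randall-Wilkins first-order glow peak, as used in kinetic analysis
# of peak I.  E and S are illustrative, not the fitted thesis values.
K_B = 8.617e-5          # Boltzmann constant (eV/K)
E, S = 0.9, 1e12        # trap depth (eV) and frequency factor (s^-1)
BETA = 1.0              # heating rate (K/s), i.e. 1 degree C per second

T = np.linspace(300.0, 450.0, 3001)          # temperature scan (K)
p = S * np.exp(-E / (K_B * T))               # thermal escape probability
integral = np.cumsum(p) * (T[1] - T[0]) / BETA
intensity = p * np.exp(-integral)            # first-order TL intensity

T_peak = T[np.argmax(intensity)]
print(f"glow peak at {T_peak - 273.15:.0f} C")
```

Varying E and S in this expression and comparing the computed peak position and shape against the measured glow curve is the essence of the kinetic analysis.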
- Full Text:
- Date Issued: 2020
Modelling and investigating primary beam effects of reflector antenna arrays
- Authors: Iheanetu, Kelachukwu
- Date: 2020
- Subjects: Antennas, Reflector , Radio telescopes , Astronomical instruments -- Calibration , Holography , Polynomials , Very large array telescopes -- South Africa , Astronomy -- Data processing , Primary beam effects , Jacobi-Bessel pattern , Cassbeam software , MeerKAT telescope
- Language: English
- Type: Thesis , Doctoral , PhD
- Identifier: http://hdl.handle.net/10962/147425 , vital:38635
- Description: Signals received by a radio telescope are always affected by propagation and instrumental effects. These effects need to be modelled and accounted for during the process of calibration. The primary beam (PB) of the antenna is one major instrumental effect that needs to be accounted for during calibration. Producing accurate models of the radio antenna PB is crucial, and many approaches (such as electromagnetic and optical simulations) have been used to model it. The cos³ function, the Jacobi-Bessel pattern, characteristic basis function patterns (CBFPs) and the Cassbeam software (which uses optical ray-tracing with antenna parameters) have also been used. These models capture the basic PB effects, but real-life PB patterns differ from them due to various subtle effects such as mechanical deformation and the effects introduced into the PB by standing waves that exist in reflector antennas. The actual patterns can be measured via a process called astro-holography (or holography), but this is subject to noise, radio frequency interference and other measurement errors. In our approach, we use principal component analysis and Zernike polynomials to model the PBs of the Very Large Array (VLA) and MeerKAT telescopes from their holography-measured data. The models have reconstruction errors of less than 5% at a compression factor of approximately 98% for both arrays. We also present steps that can be used to generate accurate beam models for any telescope (independent of its design) based on holography-measured data. Analysis of the measured VLA PBs revealed that the beam sizes (and centre offset positions) show a fast oscillating trend with frequency, superimposed on a slow trend. We term this spectral behaviour the ripple or characteristic effect. Most existing PB models used in calibrating VLA data do not incorporate these direction-dependent effects (DDEs).
We investigate, via simulations, the impact of using PB models that ignore these DDEs in continuum calibration and imaging. Our experiments show that, although these effects translate into less than 10% errors in source flux recovery, they do lead to a 30% reduction in the dynamic range. Preparing data for H I and radio halo (faint emission) science analysis requires foreground subtraction of bright (continuum) sources. We investigate the impact of using beam models that ignore the ripple effects during continuum subtraction. The results show that using PB models which completely ignore the ripple effects in continuum subtraction can translate into errors of more than 30% in the recovered H I spectral properties. This implies that science inferences drawn from such results for H I studies could have errors of the same magnitude.
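The PCA side of the beam-modelling approach can be sketched as follows: stack the per-frequency beam maps as rows of a matrix, keep a few principal components via an SVD, and compare reconstruction error against the compression achieved. The "beams" below are synthetic Gaussians with a slow frequency trend standing in for holography-measured data; the component count and grid size are illustrative.

```python
import numpy as np

# Synthetic stand-in for holography-measured beams: a Gaussian whose
# width shrinks with frequency, sampled on a 32x32 grid at 64 channels.
x = np.linspace(-1.0, 1.0, 32)
X, Y = np.meshgrid(x, x)
freqs = np.linspace(1.0, 2.0, 64)                      # arbitrary units
beams = np.stack([np.exp(-(X**2 + Y**2) / (2 * (0.4 / f) ** 2)).ravel()
                  for f in freqs])                     # shape (64, 1024)

# PCA via SVD of the mean-subtracted stack.
mean = beams.mean(axis=0)
U, s, Vt = np.linalg.svd(beams - mean, full_matrices=False)

k = 3                                                  # components kept
recon = mean + (U[:, :k] * s[:k]) @ Vt[:k]
err = np.linalg.norm(recon - beams) / np.linalg.norm(beams)
compression = 1 - k / len(freqs)
print(f"error {err:.3%} at {compression:.0%} compression")
```

Because the beam varies smoothly with frequency, a handful of components reconstructs the whole stack to well under the 5% error quoted for the real VLA and MeerKAT models.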
- Full Text:
- Date Issued: 2020
A Bayesian approach to tilted-ring modelling of galaxies
- Authors: Maina, Eric Kamau
- Date: 2020
- Subjects: Bayesian statistical decision theory , Galaxies , Radio astronomy , TiRiFiC (Tilted Ring Fitting Code) , Neutral hydrogen , Spectroscopic data cubes , Galaxy parametrisation
- Language: English
- Type: text , Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/145783 , vital:38466
- Description: The orbits of neutral hydrogen (H I) gas found in most disk galaxies are circular and also exhibit long-lived warps at large radii, where the restoring gravitational forces of the inner disk become weak (Spekkens and Giovanelli 2006). These warps make the tilted-ring model an ideal choice for galaxy parametrisation. Analysis software utilising the tilted-ring model can be grouped into two-dimensional and three-dimensional approaches. Józsa et al. (2007b) demonstrated that three-dimensional software is better suited for galaxy parametrisation, because beam smearing only increases the uncertainty of its parameters, without the notorious systematic effects observed for two-dimensional fitting techniques. TiRiFiC, the Tilted Ring Fitting Code (Józsa et al. 2007b), is a software package that constructs parameterised models of high-resolution data cubes of rotating galaxies. It uses the tilted-ring model, and with that a combination of parameters such as surface brightness, position angle, rotation velocity and inclination, to describe galaxies. TiRiFiC works by directly fitting tilted-ring models to spectroscopic data cubes and is hence not affected by beam smearing or line-of-sight effects, e.g. strong warps. Because of that, the method is indispensable as an analysis method for future H I surveys. The current implementation, though, has several drawbacks: the implemented optimisers search only for local solutions in parameter space, do not quantify correlations between parameters, and cannot find errors of single parameters. In theory, these drawbacks can be overcome by using Bayesian statistics, implemented in MultiNest (Feroz et al. 2008), as it allows sampling a posterior distribution irrespective of its multimodal nature, resulting in parameter samples that correspond to the maximum in the posterior distribution.
These parameter samples can also be used to quantify correlations and to find errors of single parameters. Since this method employs Bayesian statistics, it also allows the user to leverage any prior information they may have on parameter values.
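The payoff of posterior sampling (per-parameter errors and correlations, which local optimisers cannot provide) can be illustrated with a minimal Metropolis sampler in place of MultiNest. The rotation-curve model, data and priors below are entirely synthetic assumptions for the sketch.

```python
import numpy as np

# Fit a toy rotation curve v(r) = vflat * (1 - exp(-r/rs)) to mock data
# and draw posterior samples with a random-walk Metropolis sampler.
rng = np.random.default_rng(1)
r = np.linspace(0.5, 10.0, 30)
true_vflat, true_rs, sigma = 200.0, 2.0, 5.0
v_obs = true_vflat * (1 - np.exp(-r / true_rs)) + rng.normal(0, sigma, r.size)

def log_post(theta):
    vflat, rs = theta
    if not (0 < vflat < 500 and 0 < rs < 10):    # flat priors
        return -np.inf
    model = vflat * (1 - np.exp(-r / rs))
    return -0.5 * np.sum((v_obs - model) ** 2) / sigma**2

theta = np.array([150.0, 1.0])
lp = log_post(theta)
samples = []
for _ in range(20000):
    prop = theta + rng.normal(0, [1.0, 0.05])    # random-walk proposal
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:      # Metropolis acceptance
        theta, lp = prop, lp_prop
    samples.append(theta)
samples = np.array(samples[5000:])               # discard burn-in

err_vflat, err_rs = samples.std(axis=0)          # per-parameter errors
corr = np.corrcoef(samples.T)[0, 1]              # parameter correlation
print(f"vflat = {samples[:, 0].mean():.0f} +/- {err_vflat:.1f}")
```

The sample standard deviations and the correlation coefficient are exactly the quantities the thesis extracts from the MultiNest posterior; nested sampling additionally handles multimodal posteriors, which this simple walker would not.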
- Full Text:
- Date Issued: 2020
Observations of diffuse radio emission in the Perseus Galaxy Cluster
- Authors: Mungwariri, Clemence
- Date: 2020
- Subjects: Galaxies -- Clusters , Radio sources (Astronomy) , Radio interferometers , Perseus Galaxy Cluster , Diffuse radio emission
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/143325 , vital:38233
- Description: In this thesis we analysed Westerbork observations of the Perseus Galaxy Cluster at 1380 MHz. Observations consist of two different pointings, covering a total of ∼ 0.5 square degrees, one including the known mini halo and the source 3C 84, the other centred on the source 3C 83.1B. We obtained images with 83 μJy beam⁻¹ and 240 μJy beam⁻¹ noise rms for the two pointings respectively. We achieved a 60000:1 dynamic range in the image containing the bright 3C 84 source. We imaged the mini halo surrounding 3C 84 at high sensitivity, measuring its diameter to be ∼140 kpc and its power to be 4 × 10²⁴ W Hz⁻¹. Its morphology agrees quite well with that observed at 240 MHz (e.g. Gendron-Marsolais et al., 2017). We measured the flux density of 3C 84 to be 20.5 ± 0.4 Jy at the 2007 epoch, consistent with a factor of ∼2 increase since the 1960s.
- Full Text:
- Date Issued: 2020
Finite precision arithmetic in Polyphase Filterbank implementations
- Authors: Myburgh, Talon
- Date: 2020
- Subjects: Radio interferometers , Interferometry , Radio telescopes , Gate array circuits , Floating-point arithmetic , Python (Computer program language) , Polyphase Filterbank , Finite precision arithmetic , MeerKAT
- Language: English
- Type: text , Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/146187 , vital:38503
- Description: MeerKAT is the most sensitive radio telescope in its class, and it is important that systematic effects do not limit the dynamic range of the instrument, preventing this sensitivity from being harnessed for deep integrations. During commissioning, spurious artefacts were noted in the MeerKAT passband, and the root cause was attributed to systematic errors in the digital signal path. Finite precision arithmetic used by the Polyphase Filterbank (PFB) was one of the main factors contributing to the spurious responses, together with bugs in the firmware. This thesis describes a software PFB simulator that was built to mimic the MeerKAT PFB and allow investigation into the origin and mitigation of the effects seen on the telescope. The simulator was used to investigate the effects on signal integrity of various rounding techniques, overflow strategies and dual-polarisation processing in the PFB. By simulating a number of different signal levels, bit widths and algorithmic scenarios, it showed that the periodic dips occurring in the MeerKAT passband were the result of an implementation using an inappropriate rounding strategy, and it further indicated how to select the best strategy for preventing overflow while maintaining high quantisation efficiency in the FFT. This practice of simulating the design behaviour of the PFB independently of the tools used to design the DSP firmware is a step towards an end-to-end simulation of the MeerKAT system (or of any radio telescope using finite precision digital signal processing systems). Such a simulation would be useful for design, diagnostics, signal analysis and prototyping of the overall instrument.
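The core of the rounding-strategy comparison can be sketched numerically: when a signal is requantised to a fixed-point format, truncation introduces a systematic bias of roughly half a least-significant bit, while round-to-nearest-even leaves an error that averages to zero. The 8-bit format below is an illustrative assumption, not MeerKAT's actual PFB bit-width.

```python
import numpy as np

# Compare two requantisation strategies on a uniform test signal.
rng = np.random.default_rng(2)
x = rng.uniform(-1.0, 1.0, 100_000)
SCALE = 2 ** 7                                   # 8-bit signed fraction

def truncate(v):
    return np.floor(v * SCALE) / SCALE           # round toward -inf

def round_even(v):
    return np.rint(v * SCALE) / SCALE            # round half to even

bias_trunc = np.mean(truncate(x) - x)            # ~ -1 / (2 * SCALE)
bias_round = np.mean(round_even(x) - x)          # ~ 0
print(f"truncation bias {bias_trunc:.5f}, round-to-even bias {bias_round:.6f}")
```

Applied stage by stage inside an FFT, a biased rounding rule accumulates coherently across butterflies, which is how a seemingly tiny per-operation bias can surface as visible structure in a passband.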
- Full Text:
- Date Issued: 2020
Observations of diffuse radio emission in the Abell 773 galaxy cluster
- Authors: Sichone, Gift L
- Date: 2020
- Subjects: Galaxies -- Clusters -- Observations , Radio astronomy -- Observations , Astrophysics -- South Africa , Westerbork Radio Telescope , A773 galaxy cluster , Astronomy -- Observations , Radio sources (Astronomy)
- Language: English
- Type: text , Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/144945 , vital:38394
- Description: In this thesis, we present 18 and 21 cm observations of the A773 galaxy cluster observed with the Westerbork radio telescope. The final 18 and 21 cm images achieve noise levels of 0.018 mJy beam⁻¹ and 0.025 mJy beam⁻¹ respectively. After subtracting the compact sources, the low-resolution images show evidence of a radio halo at 18 cm, whereas its presence is more uncertain in the low-resolution 21 cm images due to the presence of residual sidelobes from bright sources. In the joint analysis of both frequencies, the radio halo has a 5.37 arcmin² area with a 6.76 mJy flux density. Further observations and analysis are, however, required to fully characterize its properties.
- Full Text:
- Date Issued: 2020
Addressing flux suppression, radio frequency interference, and selection of optimal solution intervals during radio interferometric calibration
- Authors: Sob, Ulrich Armel Mbou
- Date: 2020
- Subjects: CubiCal (Software) , Radio -- Interference , Imaging systems in astronomy , Algorithms , Astronomical instruments -- Calibration , Astronomy -- Data processing
- Language: English
- Type: Thesis , Doctoral , PhD
- Identifier: http://hdl.handle.net/10962/147714 , vital:38663
- Description: The forthcoming Square Kilometre Array is expected to provide answers to some of the most intriguing questions about our Universe. However, as is already noticeable from MeerKAT and other precursors, the amounts of data produced by these new instruments are significantly challenging to calibrate and image. Calibration of radio interferometric data is usually biased by incomplete sky models and radio frequency interference (RFI), resulting in calibration artefacts that limit the dynamic range and image fidelity of the resulting images. One of the most noticeable of these artefacts is the formation of spurious sources, which causes suppression of real emission. Fortunately, it has been shown that calibration algorithms employing heavy-tailed likelihood functions are less susceptible to this due to their robustness against outliers. Leveraging recent developments in the field of complex optimisation, we implement a robust calibration algorithm using a Student’s t likelihood function and Wirtinger derivatives. The new algorithm, dubbed the robust solver, is incorporated as a subroutine into the newly released calibration software package CubiCal. We perform a statistical analysis of the distribution of visibilities, provide insight into the functioning of the robust solver, and describe different scenarios in which it will improve calibration. We use simulations to show that the robust solver effectively reduces the amount of flux suppressed from unmodelled sources in both direction-independent and direction-dependent calibration. Furthermore, the robust solver is shown to successfully mitigate the effects of low-level RFI when applied to a simulated and a real VLA dataset. Finally, we demonstrate that there are close links between the amount of flux suppressed from sources, the effects of RFI, and the employed solution interval during radio interferometric calibration. 
Hence, we investigate the effects of solution intervals and the different factors to consider when selecting adequate solution intervals. Furthermore, we propose a practical brute-force method for selecting optimal solution intervals. The proposed method is successfully applied to a VLA dataset.
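The robustness argument can be illustrated with a toy scalar version of the problem: reweighting residuals with Student's t weights, as a heavy-tailed likelihood effectively does, down-weights RFI-like outliers that bias an ordinary least-squares gain estimate. The scalar "gain" model, the degrees-of-freedom value and the data below are illustrative stand-ins for the full complex Wirtinger solver in CubiCal.

```python
import numpy as np

# Mock "visibilities": data = gain * model + noise, with RFI-like outliers.
rng = np.random.default_rng(3)
model = rng.normal(1.0, 0.3, 200)                # model visibilities
true_gain = 1.7
data = true_gain * model + rng.normal(0, 0.05, 200)
data[:10] += 5.0                                 # 5% RFI-like outliers

# Plain least squares: dragged away from the true gain by the outliers.
g_ls = np.sum(data * model) / np.sum(model * model)

# Iteratively reweighted least squares with Student's t weights
# w = (v + 1) / (v + r^2 / s^2), mimicking a heavy-tailed likelihood.
v, g = 2.0, g_ls
for _ in range(30):
    r = data - g * model
    s2 = np.mean(r ** 2)                         # crude scale estimate
    w = (v + 1) / (v + r ** 2 / s2)              # outliers get tiny weight
    g = np.sum(w * data * model) / np.sum(w * model * model)

print(f"least squares {g_ls:.2f}, robust {g:.2f} (true {true_gain})")
```

The robust estimate lands much closer to the true gain than the least-squares one, which is the same mechanism by which the robust solver reduces flux suppression from unmodelled emission.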
- Full Text:
- Date Issued: 2020