
Technology

In order to achieve the Science Objectives, a space-based mission is required to avoid the perturbing effects (turbulence and thermal emission) of the Earth’s atmosphere. As detecting the thermal emission of exoplanets requires observations at mid-infrared wavelengths (the thermal emission of Earth peaks around 10 micrometer), the current project baseline foresees a nulling interferometer consisting of several formation-flying collector telescopes with a beam combiner spacecraft at their center. The measurement principle is both simple and elegant: the instrument acts as a spectrograph from which the stellar flux is removed by destructive interference, while the signal from any orbiting planet is transmitted to the output of the instrument. This allows the nulling interferometer to observe non-transiting exoplanets and build up signal-to-noise quickly without having to launch a very large, single-aperture telescope. The excellent inner working angle, i.e., the high spatial resolution required to separate the exoplanets’ signal from that of the host star on the detector, is provided by the separation between the collector telescopes, which, in principle, can be adjusted depending on the science requirements. In addition to the nulling mode, LIFE could also be operated in a constructive imaging mode for other science cases.
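
As a rough, back-of-the-envelope illustration (our own simplification, not an official design calculation), the sketch below estimates the baseline needed to place the first transmission maximum of a simple nuller on a planet at 1 au around a star at 10 pc, observed at 10 micron:

```python
# Rough estimate (our own simplification): baseline needed to resolve a
# habitable-zone planet with a nulling interferometer, using theta ~ lambda / (2 B).
import math

wavelength = 10e-6    # observing wavelength in meters (10 micron)
separation_au = 1.0   # planet-star separation in astronomical units
distance_pc = 10.0    # distance to the system in parsec

# By definition, 1 au seen from 1 pc subtends 1 arcsecond:
separation_arcsec = separation_au / distance_pc                  # 0.1 arcsec = 100 mas
separation_rad = separation_arcsec * math.pi / (180.0 * 3600.0)  # convert to radians

# Baseline that places the first transmission maximum of a simple nuller on the planet
baseline_m = wavelength / (2.0 * separation_rad)

print(f"Planet-star separation: {separation_arcsec * 1000:.0f} mas")
print(f"Required baseline at 10 micron: {baseline_m:.1f} m")
```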

 

LIFE is based on the heritage of older mission concepts such as ESA’s Darwin or NASA’s TPF-I. In contrast to the mid-2000s, when those concepts were studied, we today have a much more comprehensive understanding of the existing exoplanet population, allowing a much more robust quantification of science objectives, science requirements and the expected science yield.

In spring 2020 we kicked off NICE – the Nulling Interferometric Cryogenic Experiment for LIFE. The goal of NICE is to build a nulling testbed to promote LIFE and to raise the technology readiness level of broadband nulling interferometry for LIFE and for ground-based nullers. The long-term goal of NICE is to demonstrate the required nulling performance in terms of starlight suppression and stability under cryogenic conditions and with flux levels similar to those from real astrophysical sources. Details about NICE can be found here.

 

In addition, most technologies required to fly a space-based nulling interferometer have now reached a Technology Readiness Level (TRL) of 5, which means that the components have been tested and validated in their relevant environment. In particular, key technologies that were considered immature in 2007, when most ESA/Darwin and NASA/TPF-I activities came to a halt, have since been demonstrated on test benches (e.g., deep nulling) or in space (e.g., formation flying). More information is provided here or here.

 

A comprehensive assessment of all major technologies is currently underway, which will serve as the basis for a technology development roadmap.

FAQ

No, previous space missions covered a similar wavelength regime to the one we are planning for LIFE. This includes, for instance, the Spitzer Space Telescope, but also even older missions such as ISO, IRAS and AKARI (albeit not all covering exactly the same wavelength range and spectroscopic capabilities). Also, the soon-to-be-launched James Webb Space Telescope (JWST) will feature a powerful MIR instrument (MIRI) that covers the LIFE wavelength range. However, all of these missions were designed with different primary scientific objectives in mind. Most importantly, they all lack the spatial resolution to directly detect large numbers of planets in the HZs around nearby stars. JWST, with its 6.5-m primary mirror, might be able to search for rocky planets around a couple of nearby stars, but it will not provide a large sample of directly detected planets. The Origins Space Telescope (OST), which is currently being assessed in the context of the NASA decadal survey, would also be a MIR mission with a primary mirror even larger than JWST’s. However, OST would primarily focus on transiting exoplanets around (nearby) M stars and would not be an exoplanet imaging mission, again limiting the accessible parameter space compared to LIFE.

We proposed the ‘science of LIFE’ to ESA in the context of the currently ongoing Voyage 2050 process (see https://www.cosmos.esa.int/web/voyage-2050). This process is meant to guide ESA’s long-term planning of the Science Program for the years ~2035-2050. Dozens of white papers were submitted, covering a wide range of planetary science, astrophysics, solar physics and fundamental physics (the list of white papers is published on the web page). The outcome of the process will be – primarily – a selection of science themes / topics that will be considered for future L-class (L = large) missions. L-class missions are the biggest missions ESA carries out (typical budget up to ~1.2 BEUR) and, because of their complexity, they typically require the longest lead times. If ESA selects “Exoplanets” as a science theme for a future L-mission, it is not clear at the moment how the process would continue and how potential mission concepts would be identified (e.g., how to decide between joining one of the NASA concepts or coming up with something on their own). As written on the web page above: “Our report to the Director of Science will be delivered in mid-May and a public report will be issued later this year”.

 

Update:

In the meantime the Voyage 2050 ESA Senior Committee Report was published and contained the following language:

“Therefore, launching a Large mission enabling the characterisation of the atmosphere of temperate exoplanets in the mid-infrared should be a top priority for ESA within the Voyage 2050 timeframe.”

“This would give ESA and the European community the opportunity to solidify its leadership in the field of exoplanets, […]”

“Being the first to measure a spectrum of the direct thermal emission of a temperate exoplanet in the mid infrared would be an outstanding breakthrough that could lead to yet again another paradigm-shifting discovery.”

In the early 2000s, the idea of a space-based mid-infrared nulling interferometer for exoplanet science was already studied by both ESA (the Darwin mission) and NASA (the Terrestrial Planet Finder – Interferometer, TPF-I). LIFE builds on the heritage of these efforts: we take into account their results and lessons learned and essentially continue from where they stopped. Yet, given that our understanding of exoplanets has dramatically increased since the 2000s and that progress has also been made on the technology side, we have to carry out trade-off analyses to develop a new mission baseline. This is one of the first steps – defining a new baseline for the instrument and the mission.

From our understanding, one of the main concerns at that time was the unknown number of planets, and hence potential targets, that Darwin could investigate. At that time (before the Kepler and TESS missions!) planetary occurrence rates were basically unknown, and the lowest-mass object discovered was a super-Earth of a few Earth masses on a short-period orbit. Without knowing whether there even was (a chance for) an Earth-similar planet in our neighborhood, the mission was considered too high a risk. Certainly there were also technical challenges (e.g., related to formation flying) that came on top. For LIFE, things look much better. Thanks to NASA’s Kepler mission, we can make quantitative statistical predictions for the expected exoplanet detection yield. And thanks to ongoing RV surveys from the ground, we even have nearby targets one could immediately look at today. Also on the technology side, significant progress has been made over the last ~15 years, and some key results are summarized here or here.

The NASA HabEx and LUVOIR mission concepts have the same primary science objectives as LIFE: the direct detection of (small) exoplanets and the investigation of their atmospheres, including the search for biosignatures. The main difference is that HabEx/LUVOIR would work at UV/optical/near-infrared wavelengths (possibly up to ~2.5 micron), which allows them to detect the planets in reflected light, while LIFE will work at mid-infrared wavelengths, probing the thermal emission of exoplanets. Both approaches have pros and cons. For instance, the vegetation red edge or ocean glint is only detectable in reflected light, while observations in thermal emission provide better information about the atmospheric structure, the radius of the planet and potentially its surface temperature. In this sense, it is not a question of better or worse; these approaches are complementary and synergistic and together allow for a much better understanding of the (atmospheric) properties of terrestrial exoplanets.

The number of collector spacecraft is not defined yet, but with more than two collectors the nulling technique can suppress the stellar light more effectively, and it is also easier to discriminate axisymmetric emission (e.g., exozodiacal dust) from asymmetric emission (e.g., an exoplanet). In addition, the total collecting area of LIFE is important, as it directly affects the sensitivity of the mission. Trade-off studies in the coming months will help us define the number of collector spacecraft and a baseline for the mission concept.

(Taken from here, including slight modifications) Nulling interferometry is the name given to an instrumental technique based on interference between several telescopes / apertures that aims at directly detecting exoplanets. The principle is to create a virtual “blind spot” at the exact location of a bright source, a star, in order to reveal a much fainter source, the planet orbiting it. Designing a nulling interferometer requires that the beams from two (or more) large telescopes, separated by baselines of several meters to hundreds of meters, are combined coherently and that a special device, called an achromatic phase shifter, is introduced in some of the beams before recombination. This phase shifter creates a phase shift of π, so that when two beams are combined, light from any on-axis source (i.e., a source that has an optical path difference (OPD) of zero) interferes destructively. This on-axis source is the star. Because a nearby exoplanet will not have an OPD of zero, its light is – at least partially – transmitted. In addition, since the expected brightness of the exoplanet is extremely low, the usually strong thermal background emission that characterizes observations in the mid-infrared has to be reduced: this means that the telescopes must be at a low temperature, typically below 100 K. If we add that the Earth’s atmosphere absorbs a large part of the mid-infrared wavelength range and that it also contributes to the thermal background, one understands that the space environment is mandatory. Finally, very precise control of wavefront errors and a precise matching of the interferometer arms are required to effectively suppress the starlight and achieve the required “null depth”.
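
To make this more concrete, here is a minimal numerical sketch (a toy model of a simple two-aperture Bracewell nuller, not mission code): with a π phase shift between the beams, the on-axis star sits in the central dark fringe while an off-axis planet falls onto a region of high transmission.

```python
import numpy as np

# Toy model of a two-aperture Bracewell nuller (a simplification for illustration).
# With a pi phase shift between the beams, the transmission across the sky is
# T(theta) = sin^2(pi * B * theta / lambda), so the on-axis star is nulled.

wavelength = 10e-6                          # 10 micron, in meters
baseline = 10.0                             # collector separation, in meters
mas = np.pi / (180.0 * 3600.0 * 1000.0)     # one milliarcsecond in radians

def transmission(theta_rad):
    """Normalized transmission of the nulled output for an off-axis angle theta."""
    return np.sin(np.pi * baseline * theta_rad / wavelength) ** 2

star_offset = 0.0 * mas        # the star is on axis
planet_offset = 100.0 * mas    # e.g., 1 au projected separation at 10 pc

print(f"Transmission at the star:   {transmission(star_offset):.3f}")
print(f"Transmission at the planet: {transmission(planet_offset):.3f}")
```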

Phase chopping is a calibration technique. In nulling interferometry there are multiple ways in which the beams from the various apertures can be combined (even for the same number of apertures (e.g., four), different beam combination schemes are possible). Phase chopping is a special technique that can only be applied in specific beam combination schemes such as a double-Bracewell interferometer. Phase chopping basically flips the phase shift that is introduced between the two arms of the double-Bracewell interferometer back and forth on time scales of seconds to minutes. This is much shorter than the typical integration times needed to detect small exoplanets. In doing so, it helps calibrate out long-term drifts (i.e., low-frequency noise) within the instrument.
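
The toy example below (our own schematic illustration with made-up numbers) shows the basic idea: the planet signal changes sign between the two chop states, while a slow instrumental drift does not, so differencing the two states removes the drift but preserves the planet signal.

```python
import numpy as np

# Schematic toy example (made-up numbers): in the two chop states the planet
# signal enters with opposite sign, while a slow instrumental drift and the
# symmetric background do not change sign. Differencing the two states therefore
# removes the drift but keeps the planet signal.

rng = np.random.default_rng(1)
n_cycles = 2000                                   # number of chop cycles
planet = 1.0                                      # planet signal amplitude (arbitrary units)
drift = np.linspace(0.0, 50.0, n_cycles)          # slow low-frequency instrumental drift
noise = rng.normal(0.0, 5.0, size=(2, n_cycles))  # white noise in each chop state

state_a = +planet + drift + noise[0]   # chop state A
state_b = -planet + drift + noise[1]   # chop state B (transmission map flipped)

chopped = 0.5 * (state_a - state_b)    # demodulated signal: the drift cancels out

print(f"Mean of raw state A (biased by the drift): {state_a.mean():+.2f}")
print(f"Mean of the chopped signal (planet only):  {chopped.mean():+.2f}")
```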

No, strictly speaking, rotating the array may not be absolutely necessary. However, it was shown in the past that such a rotation introduces a temporal modulation of the planet signal that appears to be very robust against instrumental noise effects. This means that the planetary signal can be extracted from the data more easily and is easier to discriminate from background sources (e.g., asymmetric exozodi disks). One of the first steps in the coming weeks and months is to re-assess whether there are other ways to generate robust modulated signals (e.g., stretching / shrinking of the array) and what the impact of the rotation on fuel and stability requirements is.
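
The following toy calculation (again a simplified sketch with assumed numbers, reusing the two-aperture transmission pattern from above) illustrates how rotating the array modulates the planet signal as the planet moves across the transmission pattern:

```python
import numpy as np

# Toy illustration (same simplified two-aperture transmission map as above):
# as the array rotates, the planet crosses bright and dark fringes and its
# signal is modulated, whereas a source that is point-symmetric around the
# star produces a constant response.

wavelength = 10e-6                         # 10 micron
baseline = 10.0                            # meters
mas = np.pi / (180.0 * 3600.0 * 1000.0)    # milliarcsecond in radians

planet_sep = 100.0 * mas    # angular separation of the planet from the star
planet_pa = 0.3             # position angle of the planet in radians

rotation = np.linspace(0.0, 2.0 * np.pi, 8, endpoint=False)   # array rotation angles

# Off-axis angle projected onto the (rotating) baseline direction
theta_proj = planet_sep * np.cos(planet_pa - rotation)
planet_signal = np.sin(np.pi * baseline * theta_proj / wavelength) ** 2

for angle, signal in zip(rotation, planet_signal):
    print(f"rotation = {np.degrees(angle):5.1f} deg -> planet transmission = {signal:.2f}")
```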

(see also above “How does nulling interferometry work?”) The term “nulling” refers to the fact that the light of the central star is effectively suppressed (i.e., “nulled”) so that the significantly fainter signals from nearby planets become detectable. Temperate, terrestrial exoplanets are typically 10⁷–10⁸ times fainter at MIR wavelengths than Solar-type stars. Effective nulling is difficult because it requires accurate control of the amplitude (likely to better than 1%) and phase (likely to better than a few nm, i.e., a small fraction of the wavelength) of the beams that are combined.
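
A simple blackbody estimate (ignoring real spectral features, so only indicative of the order of magnitude) reproduces this contrast for a Sun/Earth analogue at 10 micron:

```python
import numpy as np

# Back-of-the-envelope blackbody estimate of the star/planet flux ratio at
# 10 micron for a Sun/Earth analogue (real spectra deviate from blackbodies,
# so this is only an order-of-magnitude check).

h = 6.626e-34    # Planck constant [J s]
c = 2.998e8      # speed of light [m/s]
k_B = 1.381e-23  # Boltzmann constant [J/K]

def planck(wavelength, temperature):
    """Blackbody spectral radiance B_lambda [W m^-3 sr^-1]."""
    x = h * c / (wavelength * k_B * temperature)
    return 2.0 * h * c**2 / wavelength**5 / np.expm1(x)

wavelength = 10e-6                     # 10 micron
t_star, r_star = 5772.0, 6.957e8       # Sun-like star: temperature [K], radius [m]
t_planet, r_planet = 255.0, 6.371e6    # Earth-like planet: effective temperature [K], radius [m]

flux_ratio = (planck(wavelength, t_star) * r_star**2) / (planck(wavelength, t_planet) * r_planet**2)
print(f"Star/planet flux ratio at 10 micron: {flux_ratio:.1e}")   # of order 10^7
```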

While interferometric nulling at mid-infrared wavelengths was successfully applied at the Multiple Mirror Telescope (MMT) with the BLINC instrument, at the Keck Interferometer Nuller (KIN) and at the Large Binocular Telescope Interferometer (LBTI) to search for exozodiacal dust disks, no exoplanets have been detected using the nulling technique thus far. However, the Hi5 project, currently under development for the Very Large Telescope Interferometer (VLTI), aims to achieve this for a small set of favorable gas giants. Some long-period gas giant planets have already been detected and characterized at the VLTI with the GRAVITY instrument, but even though this is also an interferometric instrument, it does not rely on nulling interferometry.

By going to the mid-infrared regime one probes the thermal emission of the exoplanets (in contrast to investigating exoplanets in reflected light, which is done at optical/near-infrared wavelengths). For reference: the thermal emission of Earth peaks at a wavelength of roughly 11 micron. Measuring the thermal emission of an exoplanet provides constraints on its effective temperature and radius, which is not easily done in reflected light. Also, the thermal structure of the atmosphere can be probed and – depending on the atmospheric composition – one may be able to directly constrain the surface temperature; something out of reach for reflected-light missions. Furthermore, the mid-infrared range features absorption bands of many molecules, including major constituents of terrestrial exoplanet atmospheres and trace gases, as well as bio- and technosignatures.
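
As a quick sanity check (a blackbody approximation using Earth's effective emission temperature of roughly 255 K), Wien's displacement law gives the quoted peak wavelength:

```python
# Quick sanity check with Wien's displacement law, using Earth's effective
# emission temperature of roughly 255 K (a blackbody approximation).

wien_constant = 2898.0    # Wien displacement constant in micron * K
t_eff_earth = 255.0       # approximate effective temperature of Earth in K

peak_wavelength = wien_constant / t_eff_earth
print(f"Peak of Earth's thermal emission: ~{peak_wavelength:.0f} micron")   # ~11 micron
```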

Extending the wavelength range to shorter wavelengths poses significant technical challenges for a nulling interferometer, because the contrast requirement becomes more demanding, which, in turn, leads to even more stringent requirements on the phase and amplitude stability of the beam combination. At the same time, because at shorter wavelengths large, single-aperture telescopes can provide sufficient spatial resolution to resolve the planet from its host star, there is no longer a need to employ an interferometer. That is why NASA’s HabEx and LUVOIR concept studies feature large, contrast-optimized single-aperture telescopes for the direct detection of exoplanets in reflected light.

For a mid-infrared mission that aims to detect the very faint signals of temperate exoplanets, going to space is essential. Ground-based observations at mid-infrared wavelengths suffer from significant background noise contributions from the telescope, the instrument optics and the Earth’s atmosphere: everything “shines” in the mid-infrared, significantly limiting the achievable sensitivity from the ground. In addition, the Earth’s atmosphere is not fully transparent across the mid-infrared regime. Water, carbon dioxide, ozone and other trace gases ‘block’ our view, and only from space do we have access to the full wavelength range.

The required number of apertures for LIFE and their diameter are still being investigated and will depend on the final scientific objectives. Our analyses have shown that in many cases LIFE will be sensitivity-limited by background emission from the local zodiacal dust cloud in the Solar System, which is more severe for smaller aperture sizes. In general, smaller apertures lead to lower sensitivity and hence to longer integration times, both for the detection of an exoplanet and for obtaining a high signal-to-noise spectrum. Or, to turn this around, larger apertures will not only lead to more detections, but will also allow us to obtain more high-SNR spectra in the follow-up. In LIFE Paper 1, and also here, detection yield calculations were carried out for a range of aperture sizes, providing a good overview of what can be achieved.
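
A much-simplified scaling argument (our own back-of-the-envelope, neglecting many instrumental effects) illustrates why the aperture diameter matters so much in this background-limited regime:

```python
# Much-simplified scaling (our own back-of-the-envelope): if the dominant noise
# is photon noise from the local zodiacal background, whose rate is roughly
# independent of the aperture diameter D (constant etendue), while the planet
# signal scales with the collecting area (D^2), then the integration time needed
# to reach a fixed signal-to-noise ratio scales roughly as 1 / D^4.

def relative_integration_time(aperture_m, reference_aperture_m=2.0):
    """Integration time relative to the reference aperture (background-limited case)."""
    return (reference_aperture_m / aperture_m) ** 4

for diameter in (1.0, 2.0, 3.5):
    print(f"D = {diameter:3.1f} m -> relative integration time: {relative_integration_time(diameter):6.2f}")
```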

LIFEsim is in continuous development and new features will be added regularly in the coming months. It will be publicly available at https://github.com/fdannert/LIFEsim. At the moment, LIFEsim features the most relevant astrophysical noise sources, such as geometric stellar leakage, thermal background noise from the local zodiacal dust cloud and from exozodi disks, and the shot noise of the planet spectrum. It also allows the user to specify the distance to the target, the spectral type of the host star and the diameter of the apertures. Instrumental noise terms (e.g., detector noise, thermal background from the optics, instability noise) are not yet included. Also, at the moment LIFEsim features only the current reference architecture (four collector telescopes flying in a rectangular configuration); in the context of ongoing trade-off studies, more architectures may be added in the near future.
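
Purely for illustration – and explicitly not LIFEsim's actual interface – the generic sketch below shows how independent photon-noise contributions such as stellar leakage, local zodi and exozodi emission can be combined into an overall signal-to-noise estimate:

```python
import numpy as np

# Illustration only -- this is NOT the LIFEsim interface, just a generic sketch
# of how independent, photon-noise-limited contributions can be combined into a
# signal-to-noise estimate per spectral bin and then into an overall SNR.

def combined_snr(planet_rate, noise_rates, integration_time):
    """SNR for photon-noise-limited observations.

    planet_rate:      array of planet photon rates per spectral bin [photons/s]
    noise_rates:      list of arrays with noise photon rates per bin [photons/s]
    integration_time: exposure time [s]
    """
    signal = planet_rate * integration_time
    variance = sum(rate * integration_time for rate in noise_rates) + signal
    snr_per_bin = signal / np.sqrt(variance)
    return np.sqrt(np.sum(snr_per_bin**2))   # combine bins in quadrature

# Made-up example rates for three spectral bins:
planet = np.array([0.02, 0.05, 0.03])
stellar_leakage = np.array([0.5, 0.4, 0.3])
local_zodi = np.array([5.0, 8.0, 12.0])
exozodi = np.array([1.0, 1.5, 2.0])

snr = combined_snr(planet, [stellar_leakage, local_zodi, exozodi], integration_time=10 * 3600)
print(f"Overall SNR after 10 hours: {snr:.1f}")
```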

In comparison to including, for instance, a more realistic treatment of instrumental noise, adding another observing mode to LIFEsim that creates mock imaging data has lower priority at the moment. We encourage colleagues interested in science cases requiring an imaging mode for LIFE to create their own back-of-the-envelope estimates for the required sensitivity, spectral resolution and uv-coverage. This info can then be used as input when resources for the development of a proper LIFEsim module are available.

LIFE will primarily be sensitive to the effective temperature of the planets; whether surface temperatures can be measured or estimated depends strongly on the properties of the atmospheres. How cold a planet can be while still being detectable with LIFE depends on the assumed sensitivity (which in turn depends primarily on the aperture size and integration time). In LIFE paper 1 we have shown that in the search phase one can expect a few terrestrial planets as cold as 200 K to be detected.

We have an “Other Science” working group currently in the process of exploring this. From previous studies it is known that there can be compelling science cases for circumstellar disks and star formation, active galactic nuclei (AGN), and evolved stars. It has to be kept in mind, though, that the exoplanet science case will drive all requirements.

For the moment we have been focusing on a database that contains main-sequence stars out to 20 pc. In the appendix of LIFE Paper 1 we describe how this database, which includes >1700 objects, was created. The cut at 20 pc is arbitrary, but our yield simulations in Paper 1 have shown that, for the science case presented there, it is not necessary to go beyond 20 pc. In principle, objects beyond 20 pc can also be observed, but the further away an object is, the more observing time is needed. For reference: in LIFE Paper 3, where we investigate how well an Earth-twin planet located at 10 pc can be characterized as a function of spectral resolution and signal-to-noise, we show that with four 2-m telescopes more than 40 days of integration time would be needed.

That depends on the final list of scientific objectives. For sure LIFE will look at nearby main sequence stars that are typically a few Gyr old in order to detect and characterize the planets orbiting these stars. One could imagine additional science cases related to younger stars and younger planets.

Yes, some studies on this were made for Darwin, and the same conclusions hold for LIFE. We also discuss this topic in LIFE Paper 1. The bottom line is that asymmetries in exozodi disks, such as geometric offsets from the central star or structural overdensities in the dust, do play a role, but they are not expected to be a show-stopper. In particular, we now have quantitative estimates for the brightness distribution of exozodi disks from the HOSTS survey, and the vast majority of disks are expected to be sufficiently faint so that the signals of possible asymmetries are also sufficiently weak.

Indeed, if a sufficiently large target sample is known that allows LIFE to fulfil (most of) its scientific objectives, the search phase could be shortened or completely skipped. However, at the moment, we need to be prepared to first invest some time in detecting the most promising targets.

A lot of molecules, including those that are relevant from an astrobiology perspective – so-called “atmospheric biomarkers” – have spectral features that fall into the spectral range of the LIFE mission (see below for some examples). However, whether they are indeed detectable with LIFE depends on their abundance in the atmosphere, on where in the atmosphere they are located, and on whether other molecules have absorption bands in the same wavelength range that might mask the molecule one is interested in. In addition, the detectability also depends on the spectral resolution of the instrument and on the signal-to-noise of the observation. So… it’s a bit complicated. However – and that is the key point – in principle LIFE can detect many molecules and will deliver unprecedented data on the atmospheric inventory of a large number of terrestrial exoplanets.

The purpose of the database is the collection of relevant data to optimize target prioritization and, ultimately, target selection for the LIFE observations (search and characterization phase). Prior to the mission, the extraction of target catalogs for different scenarios will allow us to make yield estimates that demonstrate the viability of the mission.

The database is built for the main science case of characterizing the diversity of terrestrial planets and searching for biosignatures. Secondary science cases (e.g., young planetary systems) will also be investigated with LIFE, but they will have their own database or target list.

The database will be a relational one, following IVOA guidelines for common standards in astronomy. Relational means that it consists of multiple linked tables rather than a single one like a catalog. Once it is fully implemented, it is planned to be updated on a regular basis rather than having a version that stays constant for a long time. At first it will only be accessible via Virtual Observatory tools (pyvo, TOPCAT, …); only in the end phase will we look into making it accessible in a user-friendly web application or GUI.
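
As an illustration of what Virtual Observatory access could look like (the service URL, table name and column names below are placeholders, not the actual LIFE database schema), a TAP query with pyvo might be written as follows:

```python
import pyvo

# Sketch of a Virtual Observatory query with pyvo. The service URL, table name
# and column names are placeholders -- the actual LIFE database endpoint and
# schema may differ once the database is released.

service = pyvo.dal.TAPService("https://example.org/tap")   # placeholder endpoint

query = """
    SELECT star_name, distance_pc, spectral_type
    FROM life_db.star_basic
    WHERE distance_pc < 20
"""

result = service.search(query)   # synchronous ADQL query
stars = result.to_table()        # convert the result to an astropy Table
print(stars[:5])
```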

So far we plan to include as objects the stars within 20 pc from SIMBAD, the planets around those stars from ExoMercat, and the disks around the stars from the database of Grant Kennedy. A large part of our data will come from Gaia. The sources for the individual parameters are not all decided yet. We plan to feed the LIFE target database from other databases or archives (if possible, Virtual-Observatory-compatible ones) and otherwise from the literature. Only if important data are still missing will we calculate or observe them ourselves.

The current parameters considered can be accessed here: https://drive.google.com/file/d/1PbRomWQtpTUEItJUNk5hBpjQKr2q05n1/view?usp=sharing

There are three steps involved in the development of a database: design (parameters, sources, data structure), implementation (choose implementation software, acquire data, test the database) and evaluation (document, formulate needed improvements, share). After passing through those three steps, the process starts anew in an iterative manner to improve on the previous version. Now, in 2022, we are in the implementation phase for the first time.

We currently don’t have instrument or technology limitations to guide a cut, and including all possible stars would be too much. Instead, we focus on the main science case and on what is most promising for it. Adding data to the database later, e.g., increasing the distance or brightness limit, is not difficult, so we could easily change the cutoff in a few years. For more information you may want to look at the answer to the LIFE Science FAQ “How many pc distance is the maximum distance for LIFE? How many stars are in that region?”.

A brightness-limited database would cut out faint stars at larger distances and instead include bright ones beyond 20 pc. This would, however, not lead to more Earth-like planets being detected, as those more distant stars still require more observing time. We also have an implicit lower brightness cut, as not all faint stars have been detected, and the further away we look, the more of them are missing. We are still considering whether, in addition to the 20 pc distance cut, we should also cut out stars below a specific brightness.

The goal of the database is to contain as much information as possible, while the catalog keeps just what is useful for the LIFEsim tool to create yield simulations. This means the catalog is a specific selection of data extracted from the database, containing a list of stars with: disk measurements where available (as measured values are preferable to the simulated values from the HOSTS model, since exozodi emission is what limits our sensitivity); a multiplicity flag (as there are few single G and K stars within 20 pc, so we will include some wide binaries); and the rotation rate / line profile (as a proxy for the spin axis, to determine whether a planet’s orbit would be seen face-on or edge-on). As long as the database is not yet functional, we use a temporary catalog as described in the paper “Large Interferometer For Exoplanets (LIFE): I. Improved exoplanet detection yield estimates for a large mid-infrared space-interferometer mission”; the list is available for viewing here:

https://drive.google.com/file/d/1JKV8dLfX2MU6XQ575D4PYVhCpUd74jHx/view?usp=sharing 

Find out more:
Science Mission