

In order to achieve the Science Objectives, a space-based mission is required, avoiding the perturbing effects (turbulence and thermal emission) of the Earth’s atmosphere. As detecting the thermal emission of exoplanets requires observations at micrometer wavelengths (the thermal emission of Earth peaks around 10 micrometer), the current project baseline foresees a nulling interferometer concept consisting of several formation-flying collector telescopes with a beam combiner spacecraft at their center. The measurement principle is both simple and elegant: the instrument acts as a spectrograph from which the stellar flux is removed by destructive interference, while the signal from any orbiting planet is transmitted to the output of the instrument. This allows the nulling interferometer to observe non-transiting exoplanets and build up signal-to-noise quickly without having to launch a very large, single-aperture telescope. The extremely good inner working angle, i.e., the high spatial resolution required to separate the exoplanets’ signal from that of the host star on the detector, is provided by the separation between the collector telescopes, which, in principle, can be adjusted depending on the science requirements. In addition to the nulling mode, LIFE could also be operated in a constructive imaging mode for other science cases.


LIFE is based on the heritage of older mission concepts such as ESA’s Darwin or NASA’s TPF-I. In contrast to the mid-2000s, when those concepts were studied, we today have a much more comprehensive understanding of the existing exoplanet population, allowing a much more robust quantification of science objectives, science requirements and expected science yield.

In spring 2020 we kicked off NICE – the Nulling Interferometric Cryogenic Experiment for LIFE. The goal of NICE is to build a nulling testbed to promote LIFE and to enhance the technology readiness level of broadband nulling interferometry for LIFE and ground-based nullers. The long-term goal of NICE is to demonstrate the required nulling performance in terms of starlight suppression and stability under cryogenic conditions and with flux levels similar to those from real astrophysical sources. Details about NICE can be found here.


In addition, most technologies required to fly a space-based nulling interferometer have now reached a Technology Readiness Level (TRL) of 5, which means that the components have been tested and validated in their relevant environment. In particular, key technologies that were considered immature in 2007, when most ESA/Darwin and NASA/TPF-I activities came to a halt, have now been demonstrated on test benches (e.g., deep nulling) or in space (e.g., formation flying). More information is provided here or here.


A comprehensive assessment of all major technologies is currently underway, which will serve as the basis for the development of a technology development roadmap.


No; previous space missions covered a similar wavelength regime to the one we are planning for LIFE. This includes, for instance, the Spitzer Space Telescope, but also even older missions such as ISO, IRAS and AKARI (albeit not all covering exactly the same wavelength range and spectroscopic capabilities). Also, the soon-to-be-launched James Webb Space Telescope (JWST) will feature a powerful MIR instrument (MIRI) that will cover the LIFE wavelength range. However, all of these missions were designed with different primary scientific objectives in mind. Most importantly, they all lack the spatial resolution to directly detect large numbers of planets in the HZs around nearby stars. JWST, with its 6.5-m primary mirror, might be able to search for rocky planets around a couple of nearby stars, but will not provide a large sample of directly detected planets. The Origins Space Telescope (OST), which is currently being assessed in the context of the NASA decadal survey, would also be a MIR mission, with a primary mirror even larger than JWST’s. However, OST would primarily focus on transiting exoplanets around (nearby) M-stars and would not be an exoplanet imaging mission, again limiting the accessible parameter space compared to LIFE.

We proposed the ‘science of LIFE’ to ESA in the context of the currently ongoing Voyage2050 process (see https://www.cosmos.esa.int/web/voyage-2050). This process is supposed to guide ESA’s long-term planning of the Science Program for the years ~2035-2050. Dozens of white papers were submitted, covering a wide range of planetary science, astrophysics, solar physics and fundamental physics (the list of white papers is published on the web page). The outcome of the process will be – primarily – a selection of science themes / topics that will be considered for future L-class (L = large) missions. L-class missions are the biggest missions ESA carries out (typical budget up to ~1.2 BEUR) and, because of their complexity, they typically require the longest lead times. In case ESA selects “Exoplanets” as a science theme for a future L-mission, it is not clear at the moment how they would continue the process and start to think about potential mission concepts (e.g., how to decide between joining one of the NASA concepts or coming up with something on their own). As written on the web page above: “Our report to the Director of Science will be delivered in mid-May and a public report will be issued later this year”.

In the early 2000s, the idea of a space-based mid-infrared nulling interferometer for exoplanet science was already studied by both ESA (the Darwin mission) and NASA (the Terrestrial Planet Finder – Interferometer, TPF-I). LIFE builds on the heritage of these efforts; we take into account all their results and lessons learned and basically continue from where they stopped. Yet, given that our understanding of exoplanets has dramatically increased since the 2000s, and progress has also been made on the technology side, we have to carry out some trade-off analyses to develop a new mission baseline. This is one of the first steps – defining a new baseline for the instrument and the mission.

From our understanding, one of the main concerns at that time was the unknown number of planets, and hence potential targets, that Darwin could investigate. At that time (before the Kepler and TESS missions!) planetary occurrence rates were basically unknown, and the lowest-mass object discovered was a Super-Earth of a few Earth masses on a short-period orbit. Without knowing whether there was even (a chance for) an Earth-similar planet in our neighborhood, the mission was considered too high a risk. Certainly there were also technical challenges (e.g., related to formation flying) that came on top. For LIFE, things look much better. We can make quantitative statistical predictions for the expected exoplanet detection yield thanks to NASA’s Kepler mission. And thanks to ongoing RV surveys from the ground, we even have nearby targets one could immediately look at today. Also on the technology side, significant progress was made over the last ~15 years; some key results are summarized here or here.

The NASA HabEx and LUVOIR mission concepts have the same primary science objectives as LIFE: the direct detection of (small) exoplanets and the investigation of their atmospheres, including the search for biosignatures. The main difference is that HabEx/LUVOIR will work at UV/optical/near-infrared wavelengths (possibly up to ~2.5 micron), which will allow them to detect the planets in reflected light, while LIFE will work at mid-infrared wavelengths, probing the thermal emission of exoplanets. Both approaches have pros and cons. For instance, the vegetation red edge or ocean glint is only detectable in reflected light, while observations in thermal emission provide better information about the atmospheric structure, the radius of the planet and potentially its surface temperature. In this sense, it is not a question of better or worse; these approaches are complementary and synergistic, and together they allow for a much better understanding of the (atmospheric) properties of terrestrial exoplanets.

The number of collector spacecraft is not defined yet, but if it is higher than two, the nulling technique can suppress the stellar light more effectively, and it is also easier to discriminate axisymmetric emission (e.g., exozodiacal dust) from asymmetric emission (e.g., an exoplanet). In addition, the total collecting area of LIFE is important as this directly affects the sensitivity of the mission. Trade-off studies in the coming months will help us define the number of collector spacecraft and a baseline for the mission concept.

(Taken from here, with slight modifications) Nulling interferometry is the name given to an instrumental technique based on interference between several telescopes / apertures and aiming at the direct detection of exoplanets. The principle is to create a virtual “blind spot” at the exact location of a bright source, a star, in order to reveal the much fainter source, which is a planet orbiting it. Designing a nulling interferometer supposes that the beams from two (or more) large telescopes, separated by baselines of several meters to hundreds of meters, are combined coherently and that a special device, called an achromatic phase shifter, is introduced in some of the beams before recombination. This phase shifter will create a phase shift of π, so that when two beams are combined, light from any on-axis source (i.e., a source that has an optical path difference (OPD) of zero) interferes destructively. This on-axis source is the star. Because a nearby exoplanet will not have an OPD of zero, its light is – at least partially – transmitted. In addition, since the expected brightness of the exoplanet is extremely low, the usually strong thermal background emission that characterizes observations in the mid-infrared has to be reduced: this means that the telescopes must be at a low temperature, typically below 100 K. If we add that the Earth’s atmosphere absorbs a large part of the mid-infrared wavelength range and that it also contributes to the thermal background, one understands that the space environment is mandatory. Finally, very precise control of wavefront defects and precise matching of the interferometer arms are required to effectively suppress the starlight and achieve the required “null depth”.
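As a small illustration of the principle, the transmission of an idealized two-aperture nuller along the baseline direction can be written as sin²(πBθ/λ). The baseline and planet offset below are illustrative values, not LIFE design parameters:

```python
import numpy as np

def bracewell_transmission(theta, baseline, lam):
    """Idealized two-aperture nuller transmission for a source at angular
    offset theta (rad) along a baseline of length baseline (m) at
    wavelength lam (m). The pi phase shift turns the usual cos^2 fringe
    into sin^2, placing a dark fringe ("null") on the optical axis."""
    return np.sin(np.pi * baseline * theta / lam) ** 2

lam = 10e-6        # 10 micron
baseline = 10.0    # illustrative 10 m baseline
star = 0.0         # on-axis star
planet = 4.85e-7   # ~100 mas offset, i.e., ~1 AU seen from 10 pc

print(bracewell_transmission(star, baseline, lam))    # 0.0 -> star is nulled
print(bracewell_transmission(planet, baseline, lam))  # close to 1 -> planet transmitted
```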

Phase chopping is a calibration technique. In nulling interferometry there are multiple ways in which the beams from the various apertures can be combined (even for the same number of apertures (e.g., four), different beam combination schemes are possible). Phase chopping is a special technique that can only be applied in specific beam combination schemes such as a Double-Bracewell interferometer. Phase chopping basically flips the phase shift that is introduced between the two arms of the Double-Bracewell interferometer back and forth on time scales of seconds to minutes. This is much shorter than the typical integration times needed in order to detect small exoplanets. In doing so, it can help calibrate out long-term drifts (i.e., low-frequency noise) within the instrument.
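A toy demonstration of the idea, under strongly simplified assumptions: a constant planet signal whose sign flips with the chop state, a slow linear instrumental drift, and white noise. Demodulating with the chop state recovers the planet signal while the drift largely averages out:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000
t = np.arange(n)

planet = 0.5                      # constant planet signal (arbitrary units)
drift = 1e-3 * t                  # slow instrumental drift (low-frequency noise)
noise = rng.normal(0, 0.2, n)     # white measurement noise

# Chop state flips every 50 samples (much faster than the drift).
chop = np.where((t // 50) % 2 == 0, 1, -1)

# In the chopped output the planet signal flips sign with the chop state,
# while the instrumental drift does not.
data = chop * planet + drift + noise

# Demodulation: multiply by the chop state and average. The planet signal
# adds up coherently; the drift nearly cancels between chop states.
demodulated = np.mean(chop * data)
print(demodulated)   # close to 0.5
```

Note that a naive average of the same data would be biased by the drift (which here grows to the same level as the signal itself), whereas the demodulated estimate is not.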

No, strictly speaking, rotating the array may not be absolutely necessary. However, it was shown in the past that such a rotation introduces a temporal modulation of the planet signal that appears to be very robust against instrumental noise effects. This means that the planetary signal can be more easily extracted from the data and it is easier to discriminate from background sources (e.g., asymmetric exozodi disks). One of the first steps in the coming weeks and months is to re-assess whether there are other ways to generate robust modulated signals (e.g., stretching / shrinking of the array) and what the impact of the rotation on fuel and stability requirements is.
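As a toy illustration of this modulation, consider an idealized two-aperture nuller with transmission sin²(πBθ/λ) (assumed values: 10 m baseline, planet at 100 mas). As the array rotates, the planet's projected offset along the baseline changes, modulating its transmitted signal, while the on-axis star stays nulled throughout:

```python
import numpy as np

lam, baseline = 10e-6, 10.0        # 10 micron, illustrative 10 m baseline
theta_p = 4.85e-7                  # planet offset ~100 mas
angles = np.linspace(0, 2 * np.pi, 360)

# Offset projected onto the baseline as the array rotates by `angles`.
projected = theta_p * np.cos(angles)
transmission = np.sin(np.pi * baseline * projected / lam) ** 2

# The planet signal sweeps between ~0 and ~1 over a rotation, producing
# a characteristic temporal modulation; the star (theta = 0) remains at 0.
print(transmission.min(), transmission.max())
```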

(see also above “How does nulling interferometry work?”) The term “nulling” refers to the fact that the light of the central star is effectively suppressed (i.e., “nulled”) so that the significantly fainter signals from nearby planets become detectable. Temperate, terrestrial exoplanets are typically 10⁷ to 10⁸ times fainter at MIR wavelengths than Solar-type stars. Effective nulling is difficult because it requires accurate control of the amplitude (likely to better than 1%) and phase (likely to better than a few nm, i.e., a fraction of the wavelength) of the beams that are combined.
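The quoted contrast can be reproduced with a simple blackbody estimate (an idealized sketch; real planetary and stellar spectra deviate from pure blackbodies):

```python
import numpy as np

h, c, k = 6.626e-34, 2.998e8, 1.381e-23   # SI constants

def planck(lam, T):
    """Blackbody spectral radiance at wavelength lam (m), temperature T (K)."""
    return (2 * h * c**2 / lam**5) / (np.exp(h * c / (lam * k * T)) - 1)

lam = 10e-6                        # 10 micron
r_earth, r_sun = 6.371e6, 6.957e8  # radii in m
t_earth, t_sun = 288.0, 5772.0     # temperatures in K

# Planet-to-star flux ratio = radiance ratio times projected-area ratio.
ratio = (planck(lam, t_earth) / planck(lam, t_sun)) * (r_earth / r_sun) ** 2
print(f"{ratio:.1e}")   # prints 1.6e-07, i.e., a contrast of order 10^7
```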

While interferometric nulling at mid-infrared wavelengths was successfully applied at the Multiple Mirror Telescope (MMT) with the BLINC instrument, the Keck Interferometer Nuller (KIN) and the Large Binocular Telescope Interferometer (LBTI) to search for exozodiacal dust disks, thus far no exoplanets have been detected using the nulling technique. However, the Hi5 project, currently under development for the Very Large Telescope Interferometer (VLTI), aims to achieve this for a small set of favorable gas giants. Some long-period gas giant planets were already detected and characterized at the VLTI with the GRAVITY instrument, but even though this is also an interferometric instrument, it does not rely on nulling interferometry.

By going to the mid-infrared regime one probes the thermal emission of the exoplanets (in contrast to investigating exoplanets in reflected light, which is done at optical/near-infrared wavelengths). For reference: the thermal emission of Earth peaks at a wavelength of ~11 micron. Measuring the thermal emission of an exoplanet provides constraints on its effective temperature and radius, which is not so easily done in reflected light. Also, the thermal structure of the atmosphere can be probed and – depending on the atmospheric composition – one may be able to directly constrain the surface temperature; something out of reach for reflected-light missions. Furthermore, the mid-infrared range features absorption bands of many molecules, including major constituents of terrestrial exoplanet atmospheres, trace gases as well as bio- and technosignatures.
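The peak wavelength follows from Wien's displacement law, λ_peak = b/T with b ≈ 2898 micron·K; Earth's effective temperature (~255 K) gives a peak near 11 micron, while its surface temperature (~288 K) gives a peak near 10 micron:

```python
# Wien's displacement law: lambda_peak = b / T.
b = 2898.0  # displacement constant in micron * K

for label, T in [("Earth effective temperature (255 K)", 255.0),
                 ("Earth surface temperature (288 K)", 288.0)]:
    print(f"{label}: peak at {b / T:.1f} micron")
# prints 11.4 micron and 10.1 micron, respectively
```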

Extending the wavelength range to shorter wavelengths poses significant technical challenges for a nulling interferometer because the contrast requirement becomes more demanding, which, in turn, leads to even more stringent requirements on the phase and amplitude stability of the beam combination. At the same time, because at shorter wavelengths large, single-aperture telescopes can provide sufficient spatial resolution to resolve the planet from its host star, there is no longer a need to employ an interferometer. That’s why NASA’s HabEx and LUVOIR concept studies feature large, contrast-optimized single-aperture telescopes for the direct detection of exoplanets in reflected light.
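A rough numbers check makes this concrete (assumed example: a planet–star separation of 1 AU seen from 10 pc, i.e., 100 mas). The required aperture diameter, or interferometer baseline, scales as λ/θ, so a meter-class mirror suffices in the visible while the mid-infrared calls for tens of meters:

```python
import math

mas = math.pi / (180 * 3600e3)   # milliarcseconds to radians
theta = 100 * mas                # 1 AU separation seen from 10 pc

for lam in (0.5e-6, 10e-6):      # visible vs mid-infrared
    # lambda / theta: aperture (or baseline) needed to resolve the pair
    print(f"lambda = {lam * 1e6:.1f} micron -> ~{lam / theta:.1f} m")
# prints ~1.0 m for 0.5 micron and ~20.6 m for 10 micron
```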

For a mid-infrared mission that has the goal to detect the very faint signals of temperate exoplanets, going to space is essential. Ground-based observations at mid-infrared wavelengths suffer from significant background noise contributions from the telescope, the instrument optics and also the Earth’s atmosphere: everything “shines” in the mid-infrared, significantly limiting the achievable sensitivity from the ground. In addition, the Earth’s atmosphere is not fully transparent across the mid-infrared regime. Water, carbon dioxide, ozone and other trace gases ‘block’ our view, and only from space do we have access to the full wavelength range.

The required number of apertures for LIFE and their diameter are still being investigated and will depend on the final scientific objectives. Our analyses have shown that in many cases LIFE will be sensitivity limited by background emission from the local zodiacal dust cloud in the Solar System, which is more severe for smaller aperture sizes. In general, smaller apertures lead to lower sensitivity and hence to longer integration times, both for the detection of an exoplanet and for obtaining a high signal-to-noise spectrum. Or, to turn this around, larger apertures will not only lead to more detections, but will also allow us to obtain more high-SNR spectra in the follow-up. In LIFE Paper 1, and also here, detection yield calculations were carried out for a range of aperture sizes, providing a good overview of what can be achieved.
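As a back-of-the-envelope illustration (an assumed shot-noise model, not the actual LIFEsim noise budget): if both the planet signal rate and the dominant background rate scale with collecting area D², the SNR grows as D·√t, so the integration time needed to reach a fixed SNR scales as 1/D²:

```python
def relative_integration_time(d, d_ref=2.0):
    """Integration time for fixed SNR relative to a d_ref-meter aperture,
    assuming a background-limited shot-noise model where both signal and
    background rates scale with collecting area (SNR ~ D * sqrt(t))."""
    return (d_ref / d) ** 2

for d in (1.0, 2.0, 3.5):
    print(f"D = {d} m -> {relative_integration_time(d):.2f}x the time of a 2-m aperture")
# prints 4.00x, 1.00x and 0.33x
```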

LIFEsim is in continuous development and new features will be regularly added in the coming months. It will be publicly available at https://github.com/fdannert/LIFEsim. At the moment, LIFEsim features the most relevant astrophysical noise sources, such as geometric stellar leakage, thermal background noise from the local zodiacal dust cloud and from exozodi disks, and the shot noise of the planet spectrum. It also allows the user to specify the distance to the target, the spectral type of the host star and the diameter of the apertures. Instrumental noise terms (e.g., detector noise, thermal background from the optics, instability noise) are not yet included. Also, at the moment LIFEsim features only the current reference architecture (4 collector telescopes flying in a rectangular configuration); in the context of ongoing trade-off studies more architectures may be added in the near future.

In comparison to including, for instance, a more realistic treatment of instrumental noise, adding another observing mode to LIFEsim that creates mock imaging data has lower priority at the moment. We encourage colleagues interested in science cases requiring an imaging mode for LIFE to create their own back-of-the-envelope estimates for the required sensitivity, spectral resolution and uv-coverage. This info can then be used as input when resources for the development of a proper LIFEsim module are available.

LIFE will primarily be sensitive to the effective temperature of the planets; whether surface temperatures can be measured or estimated depends strongly on the properties of the atmospheres. How cold a planet can be while still being detectable with LIFE depends on the assumed sensitivity (which in turn depends primarily on the aperture size and integration time). In LIFE paper 1 we have shown that in the search phase one can expect a few terrestrial planets as cold as 200 K to be detected.

We have an “Other Science” working group currently in the process of exploring this. From previous studies it is known that there can be compelling science cases for circumstellar disks and star formation, active galactic nuclei (AGN), and evolved stars. It has to be kept in mind, though, that the exoplanet science case will drive all requirements.

For the moment we have been focusing on a database that contains main-sequence stars out to 20 pc. In the appendix of LIFE paper 1 we describe how this database, which includes >1700 objects, was created. The cut at 20 pc is arbitrary, but our yield simulations in paper 1 have shown that for the science case presented there, it is not necessary to go beyond 20 pc. In principle, objects beyond 20 pc can also be observed, but the further away an object is, the more observing time is needed. For reference: in LIFE paper 3, where we investigate how well an Earth-twin planet located at 10 pc can be characterized as a function of spectral resolution and signal-to-noise, we show that with four 2-m telescopes more than 40 days of integration time would be needed.

That depends on the final list of scientific objectives. For sure LIFE will look at nearby main sequence stars that are typically a few Gyr old in order to detect and characterize the planets orbiting these stars. One could imagine additional science cases related to younger stars and younger planets.

Yes, some studies on this were made for Darwin, and the same conclusions hold for LIFE. We discuss this topic also in LIFE paper 1. The bottom line is that asymmetries in exozodi disks, such as geometric offsets from the central star or structural overdensities in the dust, do play a role, but are not expected to be a show-stopper. In particular, we now have quantitative estimates for the brightness distribution of exozodi disks from the HOSTS survey, and the vast majority of disks are expected to be sufficiently faint that the signals of possible asymmetries are also sufficiently weak.

Indeed, if a sufficiently large target sample is known that allows LIFE to fulfil (most of) its scientific objectives, the search phase could be shortened or completely skipped. However, at the moment, we need to be prepared to first invest some time in detecting the most promising targets.

A lot of molecules, including those that are relevant from an astrobiology perspective – so-called “atmospheric biomarkers” – do have spectral features that fall into the spectral range of the LIFE mission (see below for some examples). However, whether they are indeed detectable with LIFE depends on their abundance in the atmosphere, on where in the atmosphere they are located, and on whether other molecules have absorption bands in the same wavelength range that might mask the molecule one is interested in. In addition, the detectability also depends on the spectral resolution of the instrument and on the signal-to-noise of the observation. So…it’s a bit complicated. However – and that is the key point – in principle LIFE can detect many molecules and will deliver unprecedented data on the atmospheric inventory of a large number of terrestrial exoplanets.

Collection of relevant data to optimize target prioritization and, ultimately, target selection for both LIFE observation phases (search and characterization). Prior to the observations, the extraction of target catalogs for different scenarios will allow us to make yield estimates that show the viability of the mission.

The database is for the main science case of observing the diversity and biosignatures of terrestrial planets. Secondary science cases (e.g., young planetary systems) are also being investigated, though they will have their own database or target list.

The included objects are:

  • stars within 20 pc from SIMBAD
  • planets around those stars from ExoMercat
  • and disks around the stars from …(TBD)

The included parameters are shown here.

The cutoff at 20 pc is somewhat arbitrary, and we could consider increasing it by 5 pc or so. However, when we look at the yield results in LIFE paper 1, the stars selected in the optimization procedures barely go out to 15 pc. This is because we are sensitivity limited by local zodi and exozodi emission: detecting and/or characterizing a planet around a more distant star takes much more observing time. As the goal is to detect and characterize as many planets as possible, it makes more sense to observe a few nearby stars than a single faraway one.
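A rough scaling argument for why distance is so costly (under an assumed background-limited regime, where the zodi background does not depend on the target's distance): the planet flux falls as 1/d², so the integration time needed for a fixed SNR grows as d⁴:

```python
def relative_time(d, d_ref=10.0):
    """Integration time for fixed SNR relative to a target at d_ref parsec,
    assuming background-limited observations (SNR ~ flux * sqrt(t), with
    flux falling as 1/d^2 and a distance-independent background)."""
    return (d / d_ref) ** 4

for d in (5.0, 10.0, 15.0, 20.0):
    print(f"{d:.0f} pc -> {relative_time(d):.1f}x the time of a 10 pc target")
# a target at 20 pc costs 16x the time of one at 10 pc
```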

This does not mean that we won’t look at any star outside of 20 pc. We will, however, require a good reason, e.g., observing young planets or an already detected Earth twin. Those will then not be targets of the main science case but of the other science cases.

Note also that adding data to the database afterwards, like increasing the distance limit, is not difficult. So we could easily change the cutoff in a few years.

We currently don’t have instrument / technology limitations; instead, it works the other way round: the science drives the instrument requirements. Only later will the instrument team be able to tell us what is feasible.

The catalog is where the decision tree comes in: it determines whether something beyond 20 pc is special enough to be worth looking at.

The target list currently used for yield simulations is described in the first of the upcoming series of LIFE papers, currently available on arXiv. The actual list can be viewed here.

The most recent presentation can be found here.

Some sources are already decided, e.g., where possible we will take stellar data from SIMBAD and exoplanet data from ExoMercat. For the rest, we are currently looking into useful catalogs, e.g.:

  • The TESS Input Catalog (TIC) should be complete for all sources within 20 pc, as long as the objects exist in Gaia DR2 and/or 2MASS. This should provide most of the needed stellar parameters, and this could be cross-matched with lists of known exoplanets for planet parameters.
  • The PLATO Input Catalog (PIC) is going to release its first all-sky version within a few weeks, and it is expected to have a cross-match to known exoplanets already done with regular updates.
  • There is a NASA Study Analysis Group (SAG) #22 that is developing a catalog of promising targets for exoplanet followup studies: https://sites.google.com/view/sag22/task-forces?authuser=0

The database contains as much as possible, and the catalog is where we select from it to keep just what is useful. This means the catalog contains a list of stars with disk measurements where available (as these are better than the simulated values from the HOSTS model, since disk emission is what limits our sensitivity), a multiplicity flag (as there are few single G and K stars within 20 pc, so we will include some wide binaries), and rotation rate / line profile (to get proxies for the spin axis, to determine whether a planet’s orbit would be seen face-on or edge-on).

Partake in the discussions in the working group meetings or the Slack channel. If you really want to invest time, you can volunteer for the work packages described in the slides from the current-status question.

Formulating the requirements, designing the database and building the application. We are currently at the end of the design phase.

From databases (if possible VO compatible), otherwise from papers, and as a last resort we will calculate the needed data ourselves.

It will be a relational database following IVOA guidelines. At first it will only be accessible via VO tools (pyvo, TOPCAT, …), and only in the end phase will we look into making it accessible in a nice, user-friendly web application or GUI.
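As a sketch of what VO access could look like once the database is public (the TAP endpoint URL and the table/column names below are hypothetical placeholders, not the actual schema):

```python
# Build an ADQL query selecting nearby stars. Table and column names
# (life_db.stars, star_name, distance_pc, spectral_type) are hypothetical
# placeholders for illustration only.
query = """
SELECT star_name, distance_pc, spectral_type
FROM life_db.stars
WHERE distance_pc < 20
ORDER BY distance_pc
""".strip()

# With pyvo, one would then run the query against the TAP service,
# e.g. (URL is a placeholder):
#   import pyvo
#   service = pyvo.dal.TAPService("https://example.org/tap")
#   results = service.search(query)

print(query.splitlines()[0])   # prints the SELECT clause
```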

Only if something important is missing. For example if some of our best candidate stars have no disk measurements then we might make an observation proposal for them.
