
power_spectrum

PowerSpectrum

Compute the power spectrum of a light curve using the FFT.

This class accepts either a STELA LightCurve object or a trained GaussianProcess model. If a GaussianProcess is passed, the most recently generated samples are used. If no samples exist, the toolkit will automatically generate 1000 posterior realizations on a 1000-point grid.

For single light curves, the FFT is applied directly to the time series. For GP models, the power spectrum is computed for each sampled realization, and the mean and standard deviation across all samples are returned.

Power spectra are computed in variance units by default (i.e., normalized to units of squared flux), allowing for direct interpretation in the context of variability amplitude and fractional RMS.
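Concretely, the normalization applied in compute_power_spectrum (see the source below) divides the raw periodogram by \( N \bar{x}^2 / (2 \Delta t) \):

$$
P(f) = \frac{2 \Delta t}{N \, \bar{x}^2} \, \lvert X(f) \rvert^2
$$

where \( N \) is the number of samples, \( \Delta t \) is the sampling interval, \( \bar{x} \) is the mean rate, and \( X(f) \) is the FFT of the light curve.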

Frequency binning is supported via linear, logarithmic, or user-defined bins.
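As a quick orientation, here is a minimal usage sketch with log binning. The light_curve module path and the toy data are assumptions for illustration; only the PowerSpectrum signature below is taken from this page.

```python
# Minimal usage sketch (illustrative; the light_curve import path is assumed).
import numpy as np
from stela_toolkit.light_curve import LightCurve
from stela_toolkit.power_spectrum import PowerSpectrum

# Evenly sampled toy light curve: a mean level plus a random walk.
rng = np.random.default_rng(0)
times = np.arange(0.0, 1000.0, 1.0)
rates = 10.0 + np.cumsum(rng.normal(0.0, 0.1, times.size))

lc = LightCurve(times=times, rates=rates)

# Log-binned, rms-normalized power spectrum over the full FFT frequency range.
ps = PowerSpectrum(lc, fmin='auto', fmax='auto', num_bins=20, bin_type='log')
print(ps.freqs, ps.powers, ps.power_errors)  # one value per frequency bin
```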

Parameters:

lc_or_model : LightCurve or GaussianProcess (required)
    Input light curve or trained GP model.

fmin : float or 'auto', optional
    Minimum frequency to include. If 'auto', uses the lowest nonzero FFT frequency. Default: 'auto'.

fmax : float or 'auto', optional
    Maximum frequency to include. If 'auto', uses the Nyquist frequency. Default: 'auto'.

num_bins : int, optional
    Number of frequency bins. Default: None.

bin_type : str, optional
    Binning type: 'log' or 'linear'. Default: 'log'.

bin_edges : array-like, optional
    Custom bin edges (overrides num_bins and bin_type). Default: [].

norm : bool, optional
    Whether to normalize the power spectrum to variance units (i.e., PSD units). Default: True.

Attributes:

freqs : array-like
    Center frequencies of each bin.

freq_widths : array-like
    Bin widths for each frequency bin.

powers : array-like
    Power spectrum values (or mean if using GP samples).

power_errors : array-like
    Uncertainties in the power spectrum (std across GP samples if applicable).

Source code in stela_toolkit/power_spectrum.py
class PowerSpectrum:
    """
    Compute the power spectrum of a light curve using the FFT.

    This class accepts either a STELA LightCurve object or a trained GaussianProcess model.
    If a GaussianProcess is passed, the most recently generated samples are used. 
    If no samples exist, the toolkit will automatically generate 1000 posterior realizations
    on a 1000-point grid.

    For single light curves, the FFT is applied directly to the time series.
    For GP models, the power spectrum is computed for each sampled realization,
    and the mean and standard deviation across all samples are returned.

    Power spectra are computed in variance units by default (i.e., normalized to units
    of squared flux), allowing for direct interpretation in the context of variability
    amplitude and fractional RMS.

    Frequency binning is supported via linear, logarithmic, or user-defined bins.

    Parameters
    ----------
    lc_or_model : LightCurve or GaussianProcess
        Input light curve or trained GP model.

    fmin : float or 'auto', optional
        Minimum frequency to include. If 'auto', uses the lowest nonzero FFT frequency.

    fmax : float or 'auto', optional
        Maximum frequency to include. If 'auto', uses the Nyquist frequency.

    num_bins : int, optional
        Number of frequency bins.

    bin_type : str, optional
        Binning type: 'log' or 'linear'.

    bin_edges : array-like, optional
        Custom bin edges (overrides `num_bins` and `bin_type`).

    norm : bool, optional
        Whether to normalize the power spectrum to variance units (i.e., PSD units).

    Attributes
    ----------
    freqs : array-like
        Center frequencies of each bin.

    freq_widths : array-like
        Bin widths for each frequency bin.

    powers : array-like
        Power spectrum values (or mean if using GP samples).

    power_errors : array-like
        Uncertainties in the power spectrum (std across GP samples if applicable).
    """

    def __init__(self,
                 lc_or_model,
                 fmin='auto',
                 fmax='auto',
                 num_bins=None,
                 bin_type="log",
                 bin_edges=[],
                 norm=True):

        # To do: ValueError for norm=True acting on mean=0 (standardized data)
        input_data = _CheckInputs._check_lightcurve_or_model(lc_or_model)
        if input_data['type'] == 'model':
            self.times, self.rates = input_data['data']
        else:
            self.times, self.rates, _ = input_data['data']
        _CheckInputs._check_input_bins(num_bins, bin_type, bin_edges)

        # Use absolute min and max frequencies if set to 'auto'
        self.dt = np.diff(self.times)[0]
        self.fmin = np.fft.rfftfreq(len(self.rates), d=self.dt)[1] if fmin == 'auto' else fmin
        self.fmax = np.fft.rfftfreq(len(self.rates), d=self.dt)[-1] if fmax == 'auto' else fmax  # nyquist frequency
        self.num_bins = num_bins
        self.bin_type = bin_type
        self.bin_edges = bin_edges
        self.norm = norm

        # if multiple light curves are provided, compute the stacked power spectrum
        if len(self.rates.shape) == 2:
            power_spectrum = self.compute_stacked_power_spectrum(norm=norm)
        else:
            power_spectrum = self.compute_power_spectrum(norm=norm)

        self.freqs, self.freq_widths, self.powers, self.power_errors = power_spectrum

    def compute_power_spectrum(self, times=None, rates=None, norm=True):
        """
        Compute the power spectrum for a single light curve.

        Applies the FFT to the light curve and optionally normalizes the result
        to variance (PSD) units. If binning is enabled, returns binned power.

        Parameters
        ----------
        times : array-like, optional
            Time array to use (defaults to internal value).

        rates : array-like, optional
            Rate array to use (defaults to internal value).

        norm : bool, optional
            Whether to normalize to variance units.

        Returns
        -------
        freqs : array-like
            Frequencies of the power spectrum.

        freq_widths : array-like or None
            Bin widths (if binned).

        powers : array-like
            Power spectrum values.

        power_errors : array-like or None
            Power spectrum uncertainties (if binned).
        """

        times = self.times if times is None else times
        rates = self.rates if rates is None else rates
        length = len(rates)

        freqs, fft = LightCurve(times=times, rates=rates).fft()
        powers = np.abs(fft) ** 2

        # Filter frequencies within [fmin, fmax]
        valid_mask = (freqs >= self.fmin) & (freqs <= self.fmax)
        freqs = freqs[valid_mask]
        powers = powers[valid_mask]

        if norm:
            powers /= length * np.mean(rates) ** 2 / (2 * self.dt)

        # Apply binning
        if self.num_bins or self.bin_edges:

            if self.bin_edges:
                bin_edges = FrequencyBinning.define_bins(self.fmin, self.fmax, num_bins=self.num_bins, 
                                                         bin_type=self.bin_type, bin_edges=self.bin_edges
                                                        )

            elif self.num_bins:
                bin_edges = FrequencyBinning.define_bins(self.fmin, self.fmax, num_bins=self.num_bins, bin_type=self.bin_type)

            else:
                raise ValueError("Either num_bins or bin_edges must be provided.\n"
                                 "In other words, you must specify the number of bins or the bin edges.")

            binned_power = FrequencyBinning.bin_data(freqs, powers, bin_edges)
            freqs, freq_widths, powers, power_errors = binned_power
        else:
            freq_widths, power_errors = None, None

        return freqs, freq_widths, powers, power_errors

    def compute_stacked_power_spectrum(self, norm=True):
        """
        Compute power spectrum for each GP sample and return the mean and std.
        This method is used automatically when a GP model with samples is passed.

        Parameters
        ----------
        norm : bool, optional
            Whether to normalize to variance units.

        Returns
        -------
        freqs : array-like
            Frequencies of the power spectrum.

        freq_widths : array-like
            Widths of frequency bins.

        power_mean : array-like
            Mean power spectrum values.

        power_std : array-like
            Standard deviation of power values across realizations.
        """

        powers = []
        for i in range(self.rates.shape[0]):
            power_spectrum = self.compute_power_spectrum(self.times, self.rates[i], norm=norm)
            freqs, freq_widths, power, _ = power_spectrum
            powers.append(power)

        # Stack the collected powers and errors
        powers = np.vstack(powers)
        power_mean = np.mean(powers, axis=0)
        power_std = np.std(powers, axis=0)

        return freqs, freq_widths, power_mean, power_std

    def fit(self, model_type='powerlaw', initial_params=None, lr=1e-3, max_iter=5000, tol=1e-6):
        r"""
        Fit the binned power spectrum using a maximum likelihood approach based on the Gamma distribution.

        This method assumes that each binned PSD value represents the average of M independent
        chi-squared-distributed powers (DOF=2), resulting in a Gamma distribution (DOF=2 chi-squared
        is an exponential, which is a Gamma, and the sum of exponentials is also a Gamma).
        The fit is performed by maximizing the corresponding Gamma likelihood.

        Supported models:

        - 'powerlaw':
          $$
          P(f) = N \cdot f^{-\alpha}
          $$

        - 'powerlaw_lorentzian':
          $$
          P(f) = N \cdot f^{-\alpha} + \frac{R^2 \cdot \Delta / \pi}{(f - f_0)^2 + \Delta^2}
          $$
          where \( R \) is the fractional rms amplitude of the QPO, \( f_0 \) is the central frequency,
          and \( \Delta \) is the half-width at half-maximum (HWHM) of the Lorentzian.

        - 'bending_powerlaw':
          $$
          P(f) = N \left[ 1 + \left( \frac{f}{f_{\text{bend}}} \right)^{\alpha_{\text{high}} - \alpha_{\text{low}}} \right]^{-1} f^{-\alpha_{\text{low}}}
          $$

        The best-fit model type and parameters are stored as class attributes:
        - self.model_type : str  
            Name of the fitted model.
        - self.model_params : array-like  
            Optimized model parameters.

        Parameters
        ----------
        model_type : str, optional  
            Type of model to fit: 'powerlaw', 'powerlaw_lorentzian', or 'bending_powerlaw' (default: 'powerlaw').

        initial_params : list of float, optional  
            Initial guess for the model parameters. If None, reasonable defaults are chosen.

        lr : float, optional  
            Learning rate for the PyTorch Adam optimizer (default: 1e-3).

        max_iter : int, optional  
            Maximum number of gradient descent steps to run (default: 5000).

        tol : float, optional  
            Convergence tolerance on the change in negative log-likelihood (default: 1e-6).

        Returns
        -------
        result : dict  
            Dictionary with the following keys:
            - 'params': array-like, best-fit model parameters  
            - 'log_likelihood': float, maximum log-likelihood value at the solution
        """
        if self.freq_widths is not None:
            dof = 2 * self.count_frequencies_in_bins()
        else:
            dof = 2 * np.ones_like(self.freqs)

        freqs = torch.tensor(self.freqs, dtype=torch.float64)
        powers = torch.tensor(self.powers, dtype=torch.float64)
        k = torch.tensor(dof / 2, dtype=torch.float64)

        if initial_params is None:
            alpha_init = 2
            N_init = self.powers[0] / self.freqs[0] ** (-alpha_init)

        if model_type == 'powerlaw':
            if initial_params is None:
                initial_params = [N_init, alpha_init]

            log_N = torch.tensor(np.log(initial_params[0]), dtype=torch.float64, requires_grad=True)
            alpha = torch.tensor(initial_params[1], dtype=torch.float64, requires_grad=True)
            params = [log_N, alpha]

        elif model_type == 'powerlaw_lorentzian':
            if initial_params is None:
                R_init = np.std(self.powers)
                f0_init = np.median(self.freqs)
                delta_init = 0.1 * f0_init
                initial_params = [N_init, alpha_init, R_init, f0_init, delta_init]

            log_N = torch.tensor(np.log(initial_params[0]), dtype=torch.float64, requires_grad=True)
            alpha = torch.tensor(initial_params[1], dtype=torch.float64, requires_grad=True)
            log_R = torch.tensor(np.log(initial_params[2]), dtype=torch.float64, requires_grad=True)
            f0 = torch.tensor(initial_params[3], dtype=torch.float64, requires_grad=True)
            log_delta = torch.tensor(np.log(initial_params[4]), dtype=torch.float64, requires_grad=True)
            params = [log_N, alpha, log_R, f0, log_delta]

        elif model_type == 'bending_powerlaw':
            if initial_params is None:
                alpha_low_init = 1.5
                alpha_high_init = 2.5
                fbend_init = np.median(self.freqs)
                initial_params = [N_init, alpha_low_init, alpha_high_init, fbend_init]

            log_N = torch.tensor(np.log(initial_params[0]), dtype=torch.float64, requires_grad=True)
            alpha_low = torch.tensor(initial_params[1], dtype=torch.float64, requires_grad=True)
            alpha_high = torch.tensor(initial_params[2], dtype=torch.float64, requires_grad=True)
            log_fbend = torch.tensor(np.log(initial_params[3]), dtype=torch.float64, requires_grad=True)
            params = [log_N, alpha_low, alpha_high, log_fbend]

        else:
            raise ValueError(f"Unsupported model type '{model_type}'.")

        optimizer = optim.Adam(params, lr=lr)
        prev_loss = None

        for _ in range(max_iter):
            optimizer.zero_grad()
            N = torch.exp(log_N)

            if model_type == 'powerlaw':
                mu = N * freqs ** (-alpha)

            elif model_type == 'powerlaw_lorentzian':
                R = torch.exp(log_R)
                delta = torch.exp(log_delta)
                lorentz = (R**2 * delta / np.pi) / ((freqs - f0)**2 + delta**2)
                mu = N * freqs ** (-alpha) + lorentz

            elif model_type == 'bending_powerlaw':
                fbend = torch.exp(log_fbend)
                bend_factor = (1 + (freqs / fbend) ** (alpha_high - alpha_low)) ** (-1)
                mu = N * bend_factor * freqs ** (-alpha_low)

            logL = (
                k * torch.log(k / mu)
                + (k - 1) * torch.log(powers)
                - (k * powers / mu)
                - torch.tensor(gammaln(dof / 2), dtype=torch.float64)
            )
            loss = -torch.sum(logL)
            loss.backward()
            optimizer.step()

            if prev_loss is not None and abs(prev_loss - loss.item()) < tol:
                break
            prev_loss = loss.item()

        self.model_type = model_type
        if model_type == 'powerlaw':
            self.model_params = [torch.exp(log_N).item(), alpha.item()]
        elif model_type == 'powerlaw_lorentzian':
            self.model_params = [
                torch.exp(log_N).item(),
                alpha.item(),
                torch.exp(log_R).item(),
                f0.item(),
                torch.exp(log_delta).item()
            ]
        elif model_type == 'bending_powerlaw':
            self.model_params = [
                torch.exp(log_N).item(),
                alpha_low.item(),
                alpha_high.item(),
                torch.exp(log_fbend).item()
            ]

        return {
            'params': self.model_params,
            'log_likelihood': -loss.item()
        }

    def plot(self, step=False):
        """
        Plot the PSD and (if available) the best-fit model.

        Parameters
        ----------
        step : bool, optional
            If True, plot the unbinned PSD as a step function instead of points (default: False).
        """

        fig, ax = plt.subplots(figsize=(8, 4.5))

        if step:
            # Construct step function manually
            f = self.freqs
            p = self.powers
            f_step = np.repeat(f, 2)[1:]
            p_step = np.repeat(p, 2)[:-1]
            ax.plot(f_step, p_step, drawstyle='steps-pre', color='black', lw=1.5)
        else:
            ax.errorbar(
                self.freqs, self.powers,
                xerr=self.freq_widths,
                yerr=self.power_errors,
                fmt='o', ms=3, lw=1.5,
                color='black',
            )

        # Overlay best-fit model if user previously ran .fit()
        if hasattr(self, 'model_type') and hasattr(self, 'model_params'):
            freqs = np.linspace(self.freqs[0], self.freqs[-1], 1000)
            params = self.model_params

            if self.model_type == 'powerlaw':
                N, alpha = params
                model_vals = N * freqs**(-alpha)

            elif self.model_type == 'powerlaw_lorentzian':
                N, alpha, R, f0, delta = params
                lorentz = (R**2 * delta / np.pi) / ((freqs - f0)**2 + delta**2)
                model_vals = N * freqs**(-alpha) + lorentz

            elif self.model_type == 'bending_powerlaw':
                N, alpha_low, alpha_high, fbend = params
                bend_factor = (1 + (freqs / fbend)**(alpha_high - alpha_low))**(-1)
                model_vals = N * bend_factor * freqs**(-alpha_low)

            ax.plot(freqs, model_vals, linestyle='--', color='orange')

        ax.set_xscale('log')
        ax.set_yscale('log')
        ax.set_xlabel('Frequency', fontsize=12)

        ylabel = r"Power [$(\mathrm{rms}/\mathrm{mean})^2\,\mathrm{freq}^{-1}$]" \
            if self.norm else r"Power [$\mathrm{flux}^2\,\mathrm{freq}^{-1}$]"
        ax.set_ylabel(ylabel, fontsize=12)
        ax.grid(True, linestyle='--', linewidth=0.5, alpha=0.7)
        ax.tick_params(
            which='both', direction='in', length=6,
            width=1, top=True, right=True, labelsize=12
        )
        plt.show()

    def count_frequencies_in_bins(self, fmin=None, fmax=None, num_bins=None, bin_type=None, bin_edges=[]):
        """
        Counts the number of frequencies in each frequency bin.
        Wrapper method to use FrequencyBinning.count_frequencies_in_bins with class attributes.
        """

        return FrequencyBinning.count_frequencies_in_bins(
            self, fmin=fmin, fmax=fmax, num_bins=num_bins, bin_type=bin_type, bin_edges=bin_edges
        )

compute_power_spectrum(times=None, rates=None, norm=True)

Compute the power spectrum for a single light curve.

Applies the FFT to the light curve and optionally normalizes the result to variance (PSD) units. If binning is enabled, returns binned power.

Parameters:

times : array-like, optional
    Time array to use (defaults to internal value). Default: None.

rates : array-like, optional
    Rate array to use (defaults to internal value). Default: None.

norm : bool, optional
    Whether to normalize to variance units. Default: True.

Returns:

freqs : array-like
    Frequencies of the power spectrum.

freq_widths : array-like or None
    Bin widths (if binned).

powers : array-like
    Power spectrum values.

power_errors : array-like or None
    Power spectrum uncertainties (if binned).
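The method can also be called directly, for example to recompute the spectrum of the stored arrays without normalization. A short sketch, assuming ps is a PowerSpectrum instance as constructed earlier:

```python
# Sketch: recompute the stored light curve's spectrum in raw (un-normalized)
# units; freq_widths and power_errors are None when no binning is configured.
freqs, freq_widths, powers, power_errors = ps.compute_power_spectrum(norm=False)
```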

Source code in stela_toolkit/power_spectrum.py
def compute_power_spectrum(self, times=None, rates=None, norm=True):
    """
    Compute the power spectrum for a single light curve.

    Applies the FFT to the light curve and optionally normalizes the result
    to variance (PSD) units. If binning is enabled, returns binned power.

    Parameters
    ----------
    times : array-like, optional
        Time array to use (defaults to internal value).

    rates : array-like, optional
        Rate array to use (defaults to internal value).

    norm : bool, optional
        Whether to normalize to variance units.

    Returns
    -------
    freqs : array-like
        Frequencies of the power spectrum.

    freq_widths : array-like or None
        Bin widths (if binned).

    powers : array-like
        Power spectrum values.

    power_errors : array-like or None
        Power spectrum uncertainties (if binned).
    """

    times = self.times if times is None else times
    rates = self.rates if rates is None else rates
    length = len(rates)

    freqs, fft = LightCurve(times=times, rates=rates).fft()
    powers = np.abs(fft) ** 2

    # Filter frequencies within [fmin, fmax]
    valid_mask = (freqs >= self.fmin) & (freqs <= self.fmax)
    freqs = freqs[valid_mask]
    powers = powers[valid_mask]

    if norm:
        powers /= length * np.mean(rates) ** 2 / (2 * self.dt)

    # Apply binning
    if self.num_bins or self.bin_edges:

        if self.bin_edges:
            bin_edges = FrequencyBinning.define_bins(self.fmin, self.fmax, num_bins=self.num_bins, 
                                                     bin_type=self.bin_type, bin_edges=self.bin_edges
                                                    )

        elif self.num_bins:
            bin_edges = FrequencyBinning.define_bins(self.fmin, self.fmax, num_bins=self.num_bins, bin_type=self.bin_type)

        else:
            raise ValueError("Either num_bins or bin_edges must be provided.\n"
                             "In other words, you must specify the number of bins or the bin edges.")

        binned_power = FrequencyBinning.bin_data(freqs, powers, bin_edges)
        freqs, freq_widths, powers, power_errors = binned_power
    else:
        freq_widths, power_errors = None, None

    return freqs, freq_widths, powers, power_errors

compute_stacked_power_spectrum(norm=True)

Compute power spectrum for each GP sample and return the mean and std. This method is used automatically when a GP model with samples is passed.

Parameters:

norm : bool, optional
    Whether to normalize to variance units. Default: True.

Returns:

freqs : array-like
    Frequencies of the power spectrum.

freq_widths : array-like
    Widths of frequency bins.

power_mean : array-like
    Mean power spectrum values.

power_std : array-like
    Standard deviation of power values across realizations.
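When a trained GaussianProcess model is passed to PowerSpectrum, this method runs automatically, computing one PSD per posterior realization. A hedged sketch; gp stands for a trained STELA GaussianProcess, and how it was trained and sampled is outside this page:

```python
# Sketch: gp is a trained stela_toolkit GaussianProcess. If no samples exist,
# 1000 posterior realizations on a 1000-point grid are generated automatically.
ps_gp = PowerSpectrum(gp, num_bins=15, bin_type='log')

# powers holds the mean PSD across realizations; power_errors the std.
mean_psd, psd_scatter = ps_gp.powers, ps_gp.power_errors
```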

Source code in stela_toolkit/power_spectrum.py
def compute_stacked_power_spectrum(self, norm=True):
    """
    Compute power spectrum for each GP sample and return the mean and std.
    This method is used automatically when a GP model with samples is passed.

    Parameters
    ----------
    norm : bool, optional
        Whether to normalize to variance units.

    Returns
    -------
    freqs : array-like
        Frequencies of the power spectrum.

    freq_widths : array-like
        Widths of frequency bins.

    power_mean : array-like
        Mean power spectrum values.

    power_std : array-like
        Standard deviation of power values across realizations.
    """

    powers = []
    for i in range(self.rates.shape[0]):
        power_spectrum = self.compute_power_spectrum(self.times, self.rates[i], norm=norm)
        freqs, freq_widths, power, _ = power_spectrum
        powers.append(power)

    # Stack the collected powers and errors
    powers = np.vstack(powers)
    power_mean = np.mean(powers, axis=0)
    power_std = np.std(powers, axis=0)

    return freqs, freq_widths, power_mean, power_std

count_frequencies_in_bins(fmin=None, fmax=None, num_bins=None, bin_type=None, bin_edges=[])

Counts the number of frequencies in each frequency bin. Wrapper method to use FrequencyBinning.count_frequencies_in_bins with class attributes.

Source code in stela_toolkit/power_spectrum.py
def count_frequencies_in_bins(self, fmin=None, fmax=None, num_bins=None, bin_type=None, bin_edges=[]):
    """
    Counts the number of frequencies in each frequency bin.
    Wrapper method to use FrequencyBinning.count_frequencies_in_bins with class attributes.
    """

    return FrequencyBinning.count_frequencies_in_bins(
        self, fmin=fmin, fmax=fmax, num_bins=num_bins, bin_type=bin_type, bin_edges=bin_edges
    )

fit(model_type='powerlaw', initial_params=None, lr=0.001, max_iter=5000, tol=1e-06)

Fit the binned power spectrum using a maximum likelihood approach based on the Gamma distribution.

This method assumes that each binned PSD value is the average of M independent chi-squared-distributed powers (DOF=2 each), so the binned value follows a Gamma distribution: a DOF=2 chi-squared variable is an exponential, which is a Gamma, and a sum of independent exponentials is again a Gamma. The fit is performed by maximizing the corresponding Gamma likelihood.
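In symbols, writing \( k_i = M_i \) for the number of averaged periodogram ordinates in bin \( i \) (so DOF \( = 2M_i \)), \( P_i \) for the binned power, and \( \mu_i \) for the model prediction, the log-likelihood maximized here (matching the source below) is

$$
\ln \mathcal{L} = \sum_i \left[ k_i \ln \frac{k_i}{\mu_i} + (k_i - 1) \ln P_i - \frac{k_i P_i}{\mu_i} - \ln \Gamma(k_i) \right].
$$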

Supported models:

- 'powerlaw':
  $$ P(f) = N \cdot f^{-\alpha} $$

- 'powerlaw_lorentzian':
  $$ P(f) = N \cdot f^{-\alpha} + \frac{R^2 \cdot \Delta / \pi}{(f - f_0)^2 + \Delta^2} $$
  where \( R \) is the fractional rms amplitude of the QPO, \( f_0 \) is the central frequency, and \( \Delta \) is the half-width at half-maximum (HWHM) of the Lorentzian.

- 'bending_powerlaw':
  $$ P(f) = N \left[ 1 + \left( \frac{f}{f_{\text{bend}}} \right)^{\alpha_{\text{high}} - \alpha_{\text{low}}} \right]^{-1} f^{-\alpha_{\text{low}}} $$

The best-fit model type and parameters are stored as class attributes:

- self.model_type : str
  Name of the fitted model.
- self.model_params : array-like
  Optimized model parameters.

Parameters:

model_type : str, optional
    Type of model to fit: 'powerlaw', 'powerlaw_lorentzian', or 'bending_powerlaw'. Default: 'powerlaw'.

initial_params : list of float, optional
    Initial guess for the model parameters. If None, reasonable defaults are chosen. Default: None.

lr : float, optional
    Learning rate for the PyTorch Adam optimizer. Default: 1e-3.

max_iter : int, optional
    Maximum number of gradient descent steps to run. Default: 5000.

tol : float, optional
    Convergence tolerance on the change in negative log-likelihood. Default: 1e-6.

Returns:

result : dict
    Dictionary with the following keys:
    - 'params': array-like, best-fit model parameters
    - 'log_likelihood': float, maximum log-likelihood value at the solution
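An end-to-end sketch tying fit and plot together, continuing the earlier ps example; the unpacked values are placeholders, not real output:

```python
# Fit a power-law model by Gamma maximum likelihood, then inspect and plot.
result = ps.fit(model_type='powerlaw')   # Adam optimizer, lr=1e-3 by default
N_best, alpha_best = result['params']
print(N_best, alpha_best, result['log_likelihood'])

ps.plot()  # after .fit(), the best-fit model is overlaid as a dashed curve
```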

Source code in stela_toolkit/power_spectrum.py
def fit(self, model_type='powerlaw', initial_params=None, lr=1e-3, max_iter=5000, tol=1e-6):
    r"""
    Fit the binned power spectrum using a maximum likelihood approach based on the Gamma distribution.

    This method assumes that each binned PSD value represents the average of M independent
    chi-squared-distributed powers (DOF=2), resulting in a Gamma distribution (DOF=2 chi-squared
    is an exponential, which is a Gamma, and the sum of exponentials is also a Gamma).
    The fit is performed by maximizing the corresponding Gamma likelihood.

    Supported models:

    - 'powerlaw':
      $$
      P(f) = N \cdot f^{-\alpha}
      $$

    - 'powerlaw_lorentzian':
      $$
      P(f) = N \cdot f^{-\alpha} + \frac{R^2 \cdot \Delta / \pi}{(f - f_0)^2 + \Delta^2}
      $$
      where \( R \) is the fractional rms amplitude of the QPO, \( f_0 \) is the central frequency,
      and \( \Delta \) is the half-width at half-maximum (HWHM) of the Lorentzian.

    - 'bending_powerlaw':
      $$
      P(f) = N \left[ 1 + \left( \frac{f}{f_{\text{bend}}} \right)^{\alpha_{\text{high}} - \alpha_{\text{low}}} \right]^{-1} f^{-\alpha_{\text{low}}}
      $$

    The best-fit model type and parameters are stored as class attributes:
    - self.model_type : str  
        Name of the fitted model.
    - self.model_params : array-like  
        Optimized model parameters.

    Parameters
    ----------
    model_type : str, optional  
        Type of model to fit: 'powerlaw', 'powerlaw_lorentzian', or 'bending_powerlaw' (default: 'powerlaw').

    initial_params : list of float, optional  
        Initial guess for the model parameters. If None, reasonable defaults are chosen.

    lr : float, optional  
        Learning rate for the PyTorch Adam optimizer (default: 1e-3).

    max_iter : int, optional  
        Maximum number of gradient descent steps to run (default: 5000).

    tol : float, optional  
        Convergence tolerance on the change in negative log-likelihood (default: 1e-6).

    Returns
    -------
    result : dict  
        Dictionary with the following keys:
        - 'params': array-like, best-fit model parameters  
        - 'log_likelihood': float, maximum log-likelihood value at the solution
    """
    if self.freq_widths is not None:
        dof = 2 * self.count_frequencies_in_bins()
    else:
        dof = 2 * np.ones_like(self.freqs)

    freqs = torch.tensor(self.freqs, dtype=torch.float64)
    powers = torch.tensor(self.powers, dtype=torch.float64)
    k = torch.tensor(dof / 2, dtype=torch.float64)

    if initial_params is None:
        alpha_init = 2
        N_init = self.powers[0] / self.freqs[0] ** (-alpha_init)

    if model_type == 'powerlaw':
        if initial_params is None:
            initial_params = [N_init, alpha_init]

        log_N = torch.tensor(np.log(initial_params[0]), dtype=torch.float64, requires_grad=True)
        alpha = torch.tensor(initial_params[1], dtype=torch.float64, requires_grad=True)
        params = [log_N, alpha]

    elif model_type == 'powerlaw_lorentzian':
        if initial_params is None:
            R_init = np.std(self.powers)
            f0_init = np.median(self.freqs)
            delta_init = 0.1 * f0_init
            initial_params = [N_init, alpha_init, R_init, f0_init, delta_init]

        log_N = torch.tensor(np.log(initial_params[0]), dtype=torch.float64, requires_grad=True)
        alpha = torch.tensor(initial_params[1], dtype=torch.float64, requires_grad=True)
        log_R = torch.tensor(np.log(initial_params[2]), dtype=torch.float64, requires_grad=True)
        f0 = torch.tensor(initial_params[3], dtype=torch.float64, requires_grad=True)
        log_delta = torch.tensor(np.log(initial_params[4]), dtype=torch.float64, requires_grad=True)
        params = [log_N, alpha, log_R, f0, log_delta]

    elif model_type == 'bending_powerlaw':
        if initial_params is None:
            alpha_low_init = 1.5
            alpha_high_init = 2.5
            fbend_init = np.median(self.freqs)
            initial_params = [N_init, alpha_low_init, alpha_high_init, fbend_init]

        log_N = torch.tensor(np.log(initial_params[0]), dtype=torch.float64, requires_grad=True)
        alpha_low = torch.tensor(initial_params[1], dtype=torch.float64, requires_grad=True)
        alpha_high = torch.tensor(initial_params[2], dtype=torch.float64, requires_grad=True)
        log_fbend = torch.tensor(np.log(initial_params[3]), dtype=torch.float64, requires_grad=True)
        params = [log_N, alpha_low, alpha_high, log_fbend]

    else:
        raise ValueError(f"Unsupported model type '{model_type}'.")

    optimizer = optim.Adam(params, lr=lr)
    prev_loss = None

    for _ in range(max_iter):
        optimizer.zero_grad()
        N = torch.exp(log_N)

        if model_type == 'powerlaw':
            mu = N * freqs ** (-alpha)

        elif model_type == 'powerlaw_lorentzian':
            R = torch.exp(log_R)
            delta = torch.exp(log_delta)
            lorentz = (R**2 * delta / np.pi) / ((freqs - f0)**2 + delta**2)
            mu = N * freqs ** (-alpha) + lorentz

        elif model_type == 'bending_powerlaw':
            fbend = torch.exp(log_fbend)
            bend_factor = (1 + (freqs / fbend) ** (alpha_high - alpha_low)) ** (-1)
            mu = N * bend_factor * freqs ** (-alpha_low)

        logL = (
            k * torch.log(k / mu)
            + (k - 1) * torch.log(powers)
            - (k * powers / mu)
            - torch.tensor(gammaln(dof / 2), dtype=torch.float64)
        )
        loss = -torch.sum(logL)
        loss.backward()
        optimizer.step()

        if prev_loss is not None and abs(prev_loss - loss.item()) < tol:
            break
        prev_loss = loss.item()

    self.model_type = model_type
    if model_type == 'powerlaw':
        self.model_params = [torch.exp(log_N).item(), alpha.item()]
    elif model_type == 'powerlaw_lorentzian':
        self.model_params = [
            torch.exp(log_N).item(),
            alpha.item(),
            torch.exp(log_R).item(),
            f0.item(),
            torch.exp(log_delta).item()
        ]
    elif model_type == 'bending_powerlaw':
        self.model_params = [
            torch.exp(log_N).item(),
            alpha_low.item(),
            alpha_high.item(),
            torch.exp(log_fbend).item()
        ]

    return {
        'params': self.model_params,
        'log_likelihood': -loss.item()
    }

plot(step=False)

Plot the PSD and (if available) the best-fit model.

Parameters:

step : bool, optional
    If True, plot the unbinned PSD as a step function instead of points. Default: False.
Source code in stela_toolkit/power_spectrum.py
def plot(self, step=False):
    """
    Plot the PSD and (if available) the best-fit model.

    Parameters
    ----------
    step : bool, optional
        If True, plot the unbinned PSD as a step function instead of points (default: False).
    """

    fig, ax = plt.subplots(figsize=(8, 4.5))

    if step:
        # Construct step function manually
        f = self.freqs
        p = self.powers
        f_step = np.repeat(f, 2)[1:]
        p_step = np.repeat(p, 2)[:-1]
        ax.plot(f_step, p_step, drawstyle='steps-pre', color='black', lw=1.5)
    else:
        ax.errorbar(
            self.freqs, self.powers,
            xerr=self.freq_widths,
            yerr=self.power_errors,
            fmt='o', ms=3, lw=1.5,
            color='black',
        )

    # Overlay best-fit model if user previously ran .fit()
    if hasattr(self, 'model_type') and hasattr(self, 'model_params'):
        freqs = np.linspace(self.freqs[0], self.freqs[-1], 1000)
        params = self.model_params

        if self.model_type == 'powerlaw':
            N, alpha = params
            model_vals = N * freqs**(-alpha)

        elif self.model_type == 'powerlaw_lorentzian':
            N, alpha, R, f0, delta = params
            lorentz = (R**2 * delta / np.pi) / ((freqs - f0)**2 + delta**2)
            model_vals = N * freqs**(-alpha) + lorentz

        elif self.model_type == 'bending_powerlaw':
            N, alpha_low, alpha_high, fbend = params
            bend_factor = (1 + (freqs / fbend)**(alpha_high - alpha_low))**(-1)
            model_vals = N * bend_factor * freqs**(-alpha_low)

        ax.plot(freqs, model_vals, linestyle='--', color='orange')

    ax.set_xscale('log')
    ax.set_yscale('log')
    ax.set_xlabel('Frequency', fontsize=12)

    ylabel = r"Power [$(\mathrm{rms}/\mathrm{mean})^2\,\mathrm{freq}^{-1}$]" \
        if self.norm else r"Power [$\mathrm{flux}^2\,\mathrm{freq}^{-1}$]"
    ax.set_ylabel(ylabel, fontsize=12)
    ax.grid(True, linestyle='--', linewidth=0.5, alpha=0.7)
    ax.tick_params(
        which='both', direction='in', length=6,
        width=1, top=True, right=True, labelsize=12
    )
    plt.show()