Hi,

We are trying to write an experimental sequence that uses an ARTIQ TTL input to time-tag the incoming pulses in each experiment cycle (like what a time tagger does). Meanwhile, in each cycle, we are also using a TTL output to send out some trigger pulses.

Here's our simplified code:

    with parallel:
        for bin_ in range(n_bins):
            self.core.break_realtime()
            t_count = self.ttl2.gate_rising(t_bins*ns)
            count = self.ttl2.count(t_count)
            self.time_tag(bin_, count)
        with sequential:
            self.core.break_realtime()
            #### experiment trigger pulses
            self.ttl8.on()
            delay(t_load*us)
            ......
    self.update_timetag_hist()

However, we have found that if we set the bin width (t_bins) to less than 1 µs, the actual run time is much longer than the expected experiment sequence duration for a single loop (with a 100 ns bin width, each run takes 14 s for a 3 s sequence). This also makes us worry about the time synchronization between the two parallel blocks.

I was wondering whether there is any advice on the best way to achieve this time-tagging task with ARTIQ?

Thank you for your help in advance.

Instead of using many very short input sensitivity windows (gate_rising) followed by count, just open one long window and call timestamp_mu in a loop to pull out the arrival times of the individual pulses (it will return -1 once there are no more left).

By the way, the two with parallel: branches in your example wouldn't actually run in parallel as you probably expect, due to the break_realtime() calls. Assuming that you want to time-tag incoming events during your pulse sequence, you'll want to emit all the output events first and only afterwards (in terms of Python program order) do the blocking timestamp readback.
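A rough sketch of that ordering (untested pseudocode in the spirit of the ARTIQ API; the channel names ttl0/ttl8 and the attributes t_cycle, t_load and timestamp are illustrative, not from the thread):

```python
@kernel
def run_cycle(self):
    self.core.break_realtime()
    with parallel:
        # input: one long sensitivity window covering the whole cycle
        gate_end_mu = self.ttl0.gate_rising(self.t_cycle * us)
        with sequential:
            # output: experiment trigger pulses, scheduled on the
            # timeline without any blocking reads in between
            self.ttl8.on()
            delay(self.t_load * us)
            self.ttl8.off()
    # only now, after all output events are submitted, block on readback
    i = 0
    while i < len(self.timestamp):
        t = self.ttl0.timestamp_mu(gate_end_mu)
        if t == -1:  # no more edges in the window
            break
        self.timestamp[i] = t
        i += 1
```

The key point is that the blocking timestamp_mu loop comes after the whole output sequence in program order, so the CPU never stalls waiting for input events while it still has output events to write.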


Do you mean to do something like this:

for i in range(100):
    timestamp[i] = self.ttl0.timestamp_mu(10 * ns)

So the experiment would run for 1000 ns (assuming this code wouldn't have underflow or overflow issues), where each timestamp measures if there was a pulse in a 10 ns window?

I'm still trying to figure out the best way to use ttl.timestamp_mu 🙂

Just read the documentation properly; it's more like:

for i in range(self.repetitions):
    self.timestamp[i] = self.ttl0.timestamp_mu(self.ttl0.gate_rising(10 * ns))

Yes – and what I mentioned before is that if what you actually intend to do is to just get all the arrival times over, say, 100 µs, you can just have one long gate window, and read the timestamps out afterwards. Something like (pseudocode):

gate_end_mu = self.ttl0.gate_rising(100 * us)  # one long input window
num_edges = 0
while num_edges < len(self.timestamp):
    t = self.ttl0.timestamp_mu(gate_end_mu)
    if t == -1:  # no more edges in the window
        break
    self.timestamp[num_edges] = t
    num_edges += 1
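Once the kernel has filled the timestamp array, the per-bin counts can be reconstructed on the host instead of gating bin by bin on hardware. A minimal sketch (assuming timestamps are machine-unit arrival times relative to some start; the function and parameter names here are illustrative, not part of the ARTIQ API):

```python
def bin_timestamps(timestamps_mu, t_start_mu, t_bin_mu, n_bins):
    """Histogram machine-unit timestamps into fixed-width bins."""
    hist = [0] * n_bins
    for t in timestamps_mu:
        if t < 0:  # -1 entries mean "no edge recorded"
            continue
        bin_ = (t - t_start_mu) // t_bin_mu
        if 0 <= bin_ < n_bins:
            hist[bin_] += 1
    return hist

# e.g. three edges at 105, 230 and 990 mu, gate opened at 100 mu,
# 100 mu bins -> counts land in bins 0, 1 and 8
hist = bin_timestamps([105, 230, 990, -1], 100, 100, 10)
```

This way the real-time part of the experiment only records raw edge times, and all the binning (arbitrary bin widths, re-binning, etc.) happens offline.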

@dpn That's great, thank you for clarifying!

This works well for time tagging 🙂