I am using two TTL channels as edge counters to measure the fluorescence of two ions simultaneously. I followed the documentation in the manual (https://m-labs.hk/artiq/manual/core_drivers_reference.html#module-artiq.coredevice.edge_counter) and arrived at the following minimum working example:

from artiq.experiment import *

class GenericTest(EnvExperiment):
    
    def build(self):
        self.setattr_device("core")
        self.setattr_device("ccb")
        self.setattr_device("scheduler")
        self.setattr_device("ttl0_counter")
        self.setattr_device("ttl1_counter")
        
        self.setattr_argument("t_accu", NumberValue(
            default=100*ms,
            unit="ms"))
    
    def prepare(self):
        self.ccb.issue("create_applet", "counts_0", 
            "${artiq_applet}plot_xy test.counts_0", ["Test"])
        self.set_dataset("test.counts_0", [0], broadcast=True)
        self.ccb.issue("create_applet", "counts_1", 
            "${artiq_applet}plot_xy test.counts_1", ["Test"])
        self.set_dataset("test.counts_1", [0], broadcast=True)
    
    def run(self):
        self.run_kernel()
    
    @kernel
    def run_kernel(self):
        while True:
            self.core.break_realtime()
            with parallel:
                self.ttl0_counter.gate_rising(self.t_accu)
                self.ttl1_counter.gate_rising(self.t_accu)
                
            counts_0 = self.ttl0_counter.fetch_count()
            counts_1 = self.ttl1_counter.fetch_count()
            if self.cb(counts_0, counts_1):
                break
    
    def check_stop(self) -> TBool:
        try:
            self.scheduler.pause()
        except TerminationRequested:
            return True
        return False
        
    def cb(self, counts_0, counts_1) -> TBool:
        self.append_to_dataset("test.counts_0", counts_0)
        self.append_to_dataset("test.counts_1", counts_1)
        
        try:
            self.scheduler.pause()
        except TerminationRequested:
            return True
        return False

When I start the experiment, I see the expected number of counts at the expected sampling rate (i.e. for a 100 ms integration time, the sample interval is roughly 100 ms rather than 200 ms). When I block the common repump path of the ions, I see the drop in fluorescence on both counters, but counter_0 is delayed by roughly 10 samples. This is independent of the actual sample rate, so the delay is the same for 100 ms and 10 ms integration times. See the following plots:

In addition, when I stop the experiment with e.g. 100 ms integration time and start it again with 10 ms integration time, I see around 10 samples on counter_0 that correspond to the first integration time (100 ms) before it drops to the correct value. This behaviour is independent of the order in which I put the gate_rising and fetch_count calls in the experiment; counter_0 is always delayed with respect to counter_1.

If I understand the documentation correctly, all remaining output events should be cleared when a new gate_rising is called, right (https://m-labs.hk/artiq/manual/core_drivers_reference.html#artiq.coredevice.edge_counter.EdgeCounter.gate_both)? On the other hand, it is mentioned that multiple subsequent gate events can be queued up and read out sequentially, so maybe there are some lingering gates (https://m-labs.hk/artiq/manual/core_drivers_reference.html#artiq.coredevice.edge_counter.EdgeCounter.gate_both)? Just running fetch_count for a couple of seconds did not change the behaviour. See these plots:

How do I synchronize the two edge counters, or at least get rid of the events from a previous experiment? When using the channels as normal TTLs, I did not see this behaviour.

ARTIQ version: ARTIQ v7.8116.eba143a

    steine01 get rid of the events from a previous experiment?

    core.reset() should clear the RTIO input FIFOs and a fortiori any counts left over from a previous experiment.

    Thank you, that seems to have solved the issue. But is that the recommended way to do it? For the normal TTL drivers, it was recommended to me not to use core.reset() to get rid of lingering events, but to catch the RTIOOverflow exception when opening another gate. Is something similar possible with the artiq.coredevice.edge_counter.CounterOverflow exception?


      RTIO input channels should arguably be able to heal from an RTIOOverflow, either by implementing a robust drain() (which you could use blindly at the start of an experiment to get rid of lingering events on that channel, perhaps even as part of that device's init()) or by implicitly draining on RTIOOverflow.


      steine01

      You are indeed correct; using Core.reset() would be a bit of a sledgehammer for this use case. Instead, you can call EdgeCounter.fetch_timestamped_count (e.g. with now_mu() as a timeout) in a loop until it returns -1 for the timestamp to signal that there are no more events waiting.
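      The drain loop described above can be shown without hardware. Here is a minimal sketch in which MockEdgeCounter is an invented stand-in for the real EdgeCounter driver, modelling only its input-event FIFO; on hardware you would call fetch_timestamped_count(now_mu()) on the actual device inside a kernel instead.

      ```python
      from collections import deque

      class MockEdgeCounter:
          """Hardware-free stand-in for the EdgeCounter driver (invented for
          illustration), modelling only the RTIO input-event FIFO."""
          def __init__(self, events):
              # each event is a (timestamp_mu, count) pair left by earlier gates
              self._fifo = deque(events)

          def fetch_timestamped_count(self, timeout_mu):
              # the real driver waits up to the timeout for an event and
              # returns a -1 timestamp on timeout; the mock just checks the FIFO
              if self._fifo:
                  return self._fifo.popleft()
              return (-1, 0)

      def drain(counter):
          """Discard all pending count events; return how many were dropped."""
          dropped = 0
          while True:
              timestamp, _count = counter.fetch_timestamped_count(0)
              if timestamp < 0:
                  break
              dropped += 1
          return dropped

      # two stale events left over from a previous "experiment"
      counter = MockEdgeCounter([(1000, 42), (2000, 37)])
      print(drain(counter))                      # drops both stale events
      print(counter.fetch_timestamped_count(0))  # FIFO is now empty
      ```

      Calling such a drain once at the start of a kernel, before the first gate_rising, would discard only the stale events on that channel rather than resetting the whole core device.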

      Of course, this still leaves the question of the root cause of the leftover events. For EdgeCounters, it isn't exactly "lingering gates" whose counts could overflow; exactly one RTIO input event carrying the count total is generated at the end of every gate window (more precisely, whenever send_count_event=True is set in the gateware config register). The issue would rather be that a prior experiment generates a count readback event (e.g. at the end of a gate) but never actually fetches the corresponding count.
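      This failure mode can be illustrated with a toy model (MockCounterChannel and its methods are invented for illustration; the real gateware and driver differ in detail): each closed gate pushes exactly one count event into the channel's FIFO, so an unfetched event from a previous experiment shifts every later readout by one sample.

      ```python
      from collections import deque

      class MockCounterChannel:
          """Toy model (invented names) of the edge-counter gateware: closing a
          gate pushes exactly one (timestamp, count) event into the FIFO."""
          def __init__(self):
              self._fifo = deque()

          def close_gate(self, end_timestamp_mu, counts_in_window):
              # end of gate window: one readback event with the count total
              self._fifo.append((end_timestamp_mu, counts_in_window))

          def fetch_count(self):
              # the real driver blocks until an event arrives; the mock
              # assumes one is already queued
              _ts, count = self._fifo.popleft()
              return count

      ch = MockCounterChannel()
      ch.close_gate(1_000, 5)   # experiment A closes a gate, queueing an event...
      # ...but experiment A terminates without ever calling fetch_count()

      ch.close_gate(2_000, 9)   # experiment B opens and closes its own gate
      print(ch.fetch_count())   # 5: experiment B reads A's stale count first
      ```

      This is consistent with the delayed counter_0 trace above: the channel keeps returning counts from gates that closed several windows earlier.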

      I've considered for a while whether an RTIO command that simply drops all pending input events, for use at the beginning of a kernel (or even a config option to do so by default when new kernels are loaded), would be good defensive programming in the sense of helping the majority of users, who aren't interested in seamless handover, avoid confusing issues.