Hi, this is very embarrassing. I understand that there are many posts out there (like this one) with similar issues, but I find it hard to wrap my head around what needs to be done in my case. This is a long post explaining my situation, so I shall bold my main questions to highlight them.
I have a digital delay generator (DDG) that provides five TTL output channels with well-defined phases between them. These pulses fire at 10 Hz. We use four of the channels to trigger the flashlamps and Q-switches of our pulsed YAG lasers, and to keep the temporal separation of all the triggered laser pulses well defined. The fifth TTL channel signals t=0 for each new 10 Hz cycle. I wish to synchronize my Kasli (cu3) with the DDG, so that all the TTL pulses my Kasli sends out are synchronized to the laser pulses triggered by the DDG.
I could, in principle, use the Kasli to trigger the DDG, but each time the DDG resets its phase, the lasers take a few seconds to warm up to the new timings, so it is preferable to use the DDG to trigger the Kasli and keep the DDG running continuously at 10 Hz. The lasers will keep firing at 10 Hz and we will not use most of these pulses, but this is how pulsed YAG lasers like to work.
I feel that I must be missing something very trivial, but **is there a function that blocks in `run()` and releases the block immediately after it receives a rising edge on a TTL input?**
I talked to another ARTIQ user and I obtained (and modified minimally) this code from them:
```python
from artiq.experiment import *
from artiq.coredevice.rtio import rtio_input_timestamp


class MissingTrigger(Exception):
    pass


class ExternalTrigger(HasEnvironment):
    def build(self, trigger=None, t_timeout=100*ms):
        self.setattr_device("core")
        self.trigger = trigger
        self.t_timeout = t_timeout

    def prepare(self):
        self.t_timeout_mu = self.core.seconds_to_mu(self.t_timeout)
        self.t_buffer_mu = self.core.seconds_to_mu(20*us)

    @kernel
    def wait_for_trigger(self):
        t_gate_open = now_mu()
        self.trigger._set_sensitivity(1)
        # Loop until all old events (from before the current gate opened)
        # are consumed, or there is a timeout
        t_trig_mu = 0
        while True:
            # Wait for a trigger event for up to t_timeout_mu before returning
            t_trig_mu = rtio_input_timestamp(now_mu() + self.t_timeout_mu,
                                             self.trigger.channel)
            # Break on a timeout, or on an event within the current gate period
            if t_trig_mu < 0 or t_trig_mu >= t_gate_open:
                break
        t_wall = self.core.get_rtio_counter_mu()
        at_mu(t_wall + self.t_buffer_mu)
        self.trigger._set_sensitivity(0)
        if t_trig_mu < 0:
            raise MissingTrigger()
        return t_trig_mu


class TTLin_block(EnvExperiment):
    def build(self):
        self.setattr_device("core")
        # Get all TTL outputs
        self.ttls = [self.get_device("ttl" + str(i)) for i in range(4, 64)]
        self.start = self.get_device("ttl0")

    def prepare(self):
        self.lt = ExternalTrigger(self, self.start)
        self.lt.prepare()

    @kernel
    def run(self):  # works, but 300 ns jitter
        self.core.reset()
        self.lt.wait_for_trigger()
        self.ttls[0].pulse(5*ms)
```
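For context on the conversions in `prepare()`: on a standard Kasli system the RTIO machine-unit granularity is 1 ns, so the timeout and buffer above correspond to the counts below. This is just a minimal stand-in for `core.seconds_to_mu` to show the arithmetic, assuming the default 1 ns reference period (the real conversion is done by the core device driver):

```python
def seconds_to_mu(t, ref_period=1e-9):
    """Convert seconds to RTIO machine units, assuming the default
    1 ns reference period of a standard Kasli system."""
    return round(t / ref_period)

print(seconds_to_mu(100e-3))  # t_timeout of 100 ms -> 100_000_000 mu
print(seconds_to_mu(20e-6))   # t_buffer of 20 us   ->      20_000 mu
```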
I use my oscilloscope to look at the t=0 signal and the TTL pulse output from my ARTIQ crate, and I see jitter in the temporal separation between these two pulses with a standard deviation of about 300 ns. The above code implements a blocking function, `wait_for_trigger()`, which is what I am looking for, but I am hoping to push the jitter down to 50 ns or below. There is a >20 µs delay between the two pulses because of `self.t_buffer_mu`, but I can easily handle this. **Is there a more "RTIO-efficient" way to do this such that the timing jitter is reduced?**
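For what it's worth, here is a toy model (plain Python, not kernel code, and entirely my own guess at the mechanism) of why I suspect the jitter appears: the output pulse ends up placed relative to a `get_rtio_counter_mu()` read, which happens after a variable amount of software latency, whereas placing it relative to the returned `t_trig_mu` timestamp would be a fixed offset. The latency numbers below are made up purely for illustration:

```python
import random
import statistics

def simulate(n=1000, buffer_mu=20_000):
    """Toy model comparing two ways of placing an output pulse after a
    trigger: relative to a wall-clock counter read vs. relative to the
    trigger's own RTIO input timestamp. Latency numbers are invented."""
    random.seed(0)
    delays_wall = []  # trigger -> pulse delay, scheduled off the counter read
    delays_ts = []    # trigger -> pulse delay, scheduled off t_trig_mu
    for _ in range(n):
        t_trig = random.randint(0, 100_000_000)  # trigger arrival (mu)
        latency = random.randint(2_000, 2_300)   # variable CPU latency (mu)
        t_wall = t_trig + latency                # get_rtio_counter_mu() read
        delays_wall.append(t_wall + buffer_mu - t_trig)
        delays_ts.append(buffer_mu)              # fixed offset from timestamp
    return statistics.pstdev(delays_wall), statistics.pstdev(delays_ts)

sd_wall, sd_ts = simulate()
print(sd_wall, sd_ts)  # the counter-based schedule jitters; the timestamp-based one does not
```

If that guess is right, the jitter would be set by the spread in software latency between trigger and counter read, not by anything fundamental to the RTIO hardware.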