I am trying to use an experiment to submit other experiments, and have read the other posts related to this. I am using the script below to run two files placed in the repository, which should result in DDS outputs of 50 MHz and 100 MHz respectively. I believe I have called the scheduler correctly, set the expids for each experiment, and submitted them properly; however, I am not seeing any changes in the DDS output. Could someone please advise me?

```python
from artiq.experiment import *
from artiq.coredevice import ad9910

import numpy as np
from artiq.master.worker_db import DeviceManager
from artiq.master.databases import DeviceDB
from artiq.master.scheduler import Scheduler

# ddb = DeviceDB("device_db.py")
# devmgr = DeviceManager(ddb)

# create a scheduler

class Scheduling_experiments(EnvExperiment):

    def build(self):
        self.setattr_device("core")
        self.setattr_device("scheduler")

    def run(self):
        expid_1 = {
            "file": "repository/urukul_single_tone_1.py",
            "class_name": "Urukul_Frequency_Pulse",
            # "arguments": {"freq": 100*MHz},
            # "log_level": self.scheduler.expid["log_level"],
            # "repo_rev": self.scheduler.expid["repo_rev"],
        }

        expid_2 = {
            "file": "repository/urukul_single_tone_2.py",
            "class_name": "Urukul_Frequency_Pulse_2",
            # "arguments": {"freq": 500*MHz},
            # "log_level": self.scheduler.expid["log_level"],
            # "repo_rev": self.scheduler.expid["repo_rev"],
        }

        self.scheduler.submit("main", expid_1)
        # delay(10*s)
        # self.scheduler.submit("main", expid_2)
```

Why are you looking at the DDS? Just check the scheduler view in artiq_dashboard or artiq_client, and also check the master log for errors.

    sb10q Hi Seb, thanks for the reply. Yes, you are right, I should have checked this first. When running this script there are no entries in the "artiq_client show schedule" output. Likewise, the only message in the master log is "new connection from" followed by my IP address.

    Submitting from the artiq_dashboard does of course show submissions in the "artiq_client show schedule" output, and likewise in the master log I get the following messages:

        setting RTIO
        no connection, starting idle kernel
        no idle kernel found
        new connection from 10.34.101:53798

    Could you or anyone else please advise me further?

    At a guess, your expid looks malformed. Here's an example that I've used before (not using a git-versioned repository).
    Approximate file structure, run with artiq_master -r repository:

    repository/
        my_experiment.py

    Outside of an ARTIQ experiment, this is how you would submit an experiment:

    import sipyco.pc_rpc as rpc

    expid = {
        "class_name": "MyClass",
        "file": "my_experiment.py",
        "arguments": {"arg_name": 5},
        "log_level": 10,
        "repo_rev": "N/A",
    }
    # master_ip_address / master_port: address and control port of the running artiq_master
    scheduler = rpc.Client(master_ip_address, master_port, "master_schedule")
    scheduler.submit(pipeline_name="main", expid=expid, priority=0, due_date=None, flush=False)

    A similar call, replacing rpc.Client with self.get_device("scheduler") should work in an ARTIQ experiment.
    Here's the signature of scheduler.submit(): https://github.com/m-labs/artiq/blob/9dee8bb9c90eb7a2dcd5a1549dd7d90807314630/artiq/master/scheduler.py#L411-L416
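    Here's a rough, untested sketch of the same submission done from inside an experiment, assuming a hypothetical repository/my_experiment.py containing MyClass and mirroring the expid fields above:

    from artiq.experiment import *

    class SubmitFromExperiment(EnvExperiment):
        def build(self):
            # "scheduler" is a virtual device exposed to experiments by the master
            self.setattr_device("scheduler")

        def run(self):
            # Hypothetical target experiment; adjust file/class_name/arguments to your repository
            expid = {
                "class_name": "MyClass",
                "file": "my_experiment.py",
                "arguments": {"arg_name": 5},
                "log_level": 10,
                "repo_rev": "N/A",
            }
            self.scheduler.submit(pipeline_name="main", expid=expid,
                                  priority=0, due_date=None, flush=False)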

    a year later

    @han94ros Did you ever find a solution to this problem? I'm currently trying to submit a scheduled experiment that uses the DDSs, and I run into underflows in the DDS coredevice methods. If instead I run the scheduled experiment independently from the GUI, there are no underflows. I've tried quite a few permutations of the expid and none have resolved the underflows.

      11 days later

      jfniedermeyer I did solve it in the end by recording all the output signals and replaying them in real time using direct memory access (DMA). I can answer more questions and show you my code over Teams/Zoom in the week commencing 5th September if you still need help.
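
      As a rough sketch of that DMA approach (the device name, frequencies and delays here are placeholders, assuming an AD9910 channel called urukul0_ch0):

      from artiq.experiment import *

      class DDSReplay(EnvExperiment):
          def build(self):
              self.setattr_device("core")
              self.setattr_device("core_dma")
              self.setattr_device("urukul0_ch0")  # placeholder Urukul/AD9910 channel

          @kernel
          def record(self):
              # Record the DDS programming once; playback then reproduces it with fixed timing
              with self.core_dma.record("dds_pulses"):
                  self.urukul0_ch0.set(50*MHz)
                  delay(10*us)
                  self.urukul0_ch0.set(100*MHz)

          @kernel
          def run(self):
              self.core.reset()
              self.urukul0_ch0.cpld.init()
              self.urukul0_ch0.init()
              self.record()
              handle = self.core_dma.get_handle("dds_pulses")
              self.core.break_realtime()
              self.core_dma.playback_handle(handle)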

        a month later

        han1994ros

        Sorry I didn't see this sooner! We ended up solving it by first pre-computing everything in machine units on the host and then eventually upgrading from a KC705 to a ZC706 FPGA, which allows for fast floating-point operations. Thanks for the offer!
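
        For reference, a rough sketch of that host-side pre-computation in machine units (urukul0_ch0 and the frequency list are placeholders, assuming an AD9910 channel):

        from artiq.experiment import *

        class PrecomputedTones(EnvExperiment):
            def build(self):
                self.setattr_device("core")
                self.setattr_device("urukul0_ch0")  # placeholder AD9910 channel

            def prepare(self):
                # Do the floating-point maths on the host: convert frequencies/amplitude to machine units
                freqs = [50*MHz, 100*MHz]  # hypothetical frequency list
                self.ftws = [self.urukul0_ch0.frequency_to_ftw(f) for f in freqs]
                self.asf = self.urukul0_ch0.amplitude_to_asf(1.0)

            @kernel
            def run(self):
                self.core.reset()
                self.urukul0_ch0.cpld.init()
                self.urukul0_ch0.init()
                self.core.break_realtime()
                # The kernel only handles pre-computed integers, avoiding slow floating point on the core device
                for ftw in self.ftws:
                    self.urukul0_ch0.set_mu(ftw, asf=self.asf)
                    delay(1*ms)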