4.4 Application Program Interfaces


4.4.1 Web Service

Condor's Web Service (WS) API provides a way for application developers to interact with Condor, without needing to utilize Condor's command-line tools. In keeping with the Condor philosophy of reliability and fault-tolerance, this API is designed to provide a simple and powerful way to interact with Condor. Condor daemons understand and implement the SOAP (Simple Object Access Protocol) XML API to provide a web service interface for Condor job submission and management.

To address reliability and fault-tolerance, a two-phase commit mechanism provides a transaction-based protocol. The following API description illustrates the interaction between a client using the API and the condor_schedd daemon, focusing on transactions as used in job submission and queue management functions.


4.4.1.1 Transactions

All applications using the API to interact with the condor_schedd will need to use transactions. A transaction is an ACID unit of work (atomic, consistent, isolated, and durable). The API limits the lifetime of a transaction, and both the client (application) and the server (the condor_schedd daemon) may place a limit on the lifetime. The server reserves the right to specify a maximum duration for a transaction.

The client initiates a transaction using the beginTransaction() method. It ends the transaction with either a commit (using commitTransaction()) or an abort (using abortTransaction()).
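
As an illustration, here is a minimal C sketch of this life-cycle. It assumes C stubs generated from the Condor WSDL (for example, by gSOAP); the header name, stub names, structure fields, and the SUCCESS code below are simplified, hypothetical placeholders for whatever a toolkit actually generates.

/* A sketch of the transaction life-cycle; all names are hypothetical. */
#include "condor_soap_stubs.h"   /* hypothetical generated header */

int run_in_transaction(void)
{
    /* Request a transaction lasting up to 60 seconds; the schedd may
       impose its own maximum duration. */
    StatusAndTransaction st = beginTransaction(60);
    if (st.status.code != SUCCESS)
        return -1;
    Transaction txn = st.transaction;

    /* ... job submission or queue management calls go here, each one
       passing txn ... */

    /* If more time is needed, request an extension (result checking
       omitted for brevity). */
    extendTransaction(txn, 60);

    /* End the transaction: commit on success, abort on failure. */
    if (commitTransaction(txn).code != SUCCESS) {
        abortTransaction(txn);
        return -1;
    }
    return 0;
}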

Not all operations in the API need to be performed within a transaction. Some accept a null transaction. A null transaction is a SOAP message with

<transaction xsi:type="ns1:Transaction" xsi:nil="true"/>
Often this is achieved by passing the programming language's equivalent of null in place of a transaction identifier. Some operations have access to more information when used inside a transaction. For instance, a getJobAds() query issued within a transaction can see jobs that are pending in that transaction, which are not yet committed and therefore not visible outside of it. Transactions are as ACID compliant as possible; therefore, do not base a decision made inside a transaction on the results of a query issued outside of it.


4.4.1.2 Job Submission

A ClassAd is required to describe a job. The job ClassAd is submitted to the condor_schedd within a transaction using the submit() method. Creation of the job ClassAd may be simplified by the createJobTemplate() method, which returns an instance of a ClassAd structure that may be further modified. A necessary part of the job ClassAd is the pair of attributes ClusterId and ProcId, which uniquely identify the cluster and the job within a cluster. A (monotonically increasing) ClusterId value is allocated and assigned by the newCluster() method. Jobs may be submitted within the assigned cluster only until newCluster() is invoked again. Each job is allocated and assigned a (monotonically increasing) ProcId within the current cluster by the newJob() method. Therefore, the sequence of method calls to submit a set of jobs begins with a call to newCluster(), followed by calls to newJob() and then submit() for each job within the cluster.

As an example, here are sample cluster and job numbers that result from the ordered calls to submission methods:

  1. A call to newCluster() assigns a ClusterId of 6.
  2. A call to newJob() assigns a ProcId of 0, as this is the first job within the cluster.
  3. A call to submit() results in a job submission numbered 6.0.
  4. A call to newJob() assigns a ProcId of 1.
  5. A call to submit() results in a job submission numbered 6.1.
  6. A call to newJob() assigns a ProcId of 2.
  7. A call to submit() results in a job submission numbered 6.2.
  8. A call to newCluster() assigns a ClusterId of 7.
  9. A call to newJob() assigns a ProcId of 0, as this is the first job within the cluster.
  10. A call to submit() results in a job submission numbered 7.0.
  11. A call to newJob() assigns a ProcId of 1.
  12. A call to submit() results in a job submission numbered 7.1.

There is the potential that a call to submit() will fail. Failure means that the job is in the queue, and it typically indicates that something needed by the job has not been sent. As a result, the job has no hope of running successfully. It is possible to recover from such a failure by resending the information that the job needs. It is also completely acceptable to abort and make another attempt. To simplify the client's effort in determining what the job requires, the discoverJobRequirements() method is provided; it accepts a job ClassAd and returns a list of items that should be sent along with the job.
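
To make the ordering concrete, here is a hedged C sketch of submitting a small cluster inside an open transaction, using the same hypothetical generated stubs as above; UNIVERSE_VANILLA and the structure field names are illustrative placeholders.

/* Submit njobs jobs in one new cluster; all names are hypothetical. */
int submit_cluster(Transaction txn, int njobs)
{
    StatusAndInteger c = newCluster(txn);
    if (c.status.code != SUCCESS)
        return -1;

    for (int i = 0; i < njobs; i++) {
        StatusAndInteger j = newJob(txn, c.integer);

        /* createJobTemplate() simplifies construction of the job
           ClassAd; the returned ad may be modified further. */
        StatusAndClassAd t = createJobTemplate(c.integer, j.integer,
                                               "owner", UNIVERSE_VANILLA,
                                               "/bin/date", "", "TRUE");

        if (submit(txn, c.integer, j.integer, t.classAd).code != SUCCESS) {
            /* The job is in the queue but cannot run; either send what
               discoverJobRequirements() reports as missing, or abort
               the transaction and try again. */
            abortTransaction(txn);
            return -1;
        }
    }
    return 0;
}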


4.4.1.3 File Transfer

A common job submission case requires the job's executable and input files to be transferred from the machine where the application is running to the machine where the condor_schedd daemon is running. This is analogous to running condor_submit with the -spool or -remote option. The executable and input files must be sent directly to the condor_schedd daemon, which places all files in a spool location.

The two methods declareFile() and sendFile() work in tandem to transfer files to the condor_schedd daemon. The declareFile() method causes the condor_schedd daemon to create the file in its spool location, or to indicate in its return value that the file already exists. This increases efficiency, as resending an existing file is a waste of resources. The sendFile() method sends base64-encoded data. sendFile() may be used to send an entire file, or chunks of a file, as desired.

The declareFile() method has both required and optional arguments. declareFile() requires the name of the file and its size in bytes. The optional arguments specify hash information. A hash type of NOHASH disables file verification; the condor_schedd daemon will not have a reliable way to determine the existence of the file being declared.
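
The following hedged C sketch spools a single input file in fixed-size chunks with declareFile() and sendFile(), again using hypothetical stub names; base64_encode() stands in for whatever encoding helper the chosen toolkit provides.

/* Spool one file to the condor_schedd in 64 Kbyte chunks. */
#include <stdio.h>

#define CHUNK 65536

int spool_file(Transaction txn, int cluster, int proc,
               const char *name, FILE *fp, int size)
{
    /* With NOHASH the condor_schedd cannot reliably detect an existing
       copy, so the file is declared and then sent in full. */
    if (declareFile(txn, cluster, proc, name, size, NOHASH, NULL).code
            != SUCCESS)
        return -1;

    unsigned char buf[CHUNK];
    int offset = 0, n;
    while ((n = fread(buf, 1, CHUNK, fp)) > 0) {
        Base64 data = base64_encode(buf, n);   /* hypothetical helper */
        if (sendFile(txn, cluster, proc, name, offset, data).code != SUCCESS)
            return -1;
        offset += n;
    }
    return 0;
}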

Methods for retrieving files are most useful when a job is completed. Consider the categorization of the typical life-cycle for a job:

Birth:
The birth of a job begins with submit().
Childhood:
The job executes.
Middle Age:
A completed job waits to be removed. As the job enters Middle Age, its JobStatus ClassAd attribute becomes Completed (the value 4).
Old Age:
The job's information goes into the history log.

Once the job enters Middle Age, the getFile() method retrieves a file. The listSpool() method assists by providing a list of all the job's files in the spool location.
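
As a hedged C sketch with the same hypothetical stubs and field names, retrieving the output of a Middle Aged job might look like the following; NULL supplies the null transaction, since none of these calls require one.

/* List a job's spool, fetch each file in chunks, then close the spool. */
int fetch_output(int cluster, int proc)
{
    FileInfoArrayAndStatus ls = listSpool(NULL, cluster, proc);
    if (ls.status.code != SUCCESS)
        return -1;

    for (int i = 0; i < ls.info.length; i++) {
        int offset = 0;
        while (offset < ls.info.item[i].size) {
            StatusAndBase64 chunk = getFile(NULL, cluster, proc,
                                            ls.info.item[i].name,
                                            offset, 65536);
            /* ... decode chunk.data and append it to a local file ... */
            offset += 65536;
        }
    }
    /* Enter Old Age: the job leaves the queue, and its spool files
       become unavailable. */
    return closeSpool(NULL, cluster, proc).code;
}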

The job enters Old Age by the application's use of the closeSpool() method. It causes the condor_schedd daemon to remove the job from the queue, and the job's spool files are no longer available. As there is no requirement for the application to invoke the closeSpool() method, jobs can potentially remain in the queue forever. The configuration variable SOAP_LEAVE_IN_QUEUE may mitigate this problem. When this boolean variable evaluates to False, a job enters Old Age. A reasonable example for this configuration variable is

SOAP_LEAVE_IN_QUEUE = ((JobStatus==4) && ((ServerTime - CompletionDate) < (60 * 60 * 24)))
This expression results in Old Age for a job (removal from the queue) once the job has been in Middle Age (completed) for 24 hours.


4.4.1.4 Implementation Details

Condor daemons understand and communicate using the SOAP XML protocol. An application seeking to use this protocol will require code that handles the communication. The XML WSDL (Web Services Description Language) that Condor implements is included with the Condor distribution. It is in $(RELEASE_DIR)/lib/webservice. The WSDL must be run through a toolkit to produce language-specific routines that handle the communication. The application is then compiled together with these routines.

Condor must be configured to enable responses to SOAP calls. Please see section 3.3.28 for definitions of the configuration variables related to the web services API.

The API's routines can be roughly categorized into ones that deal with transaction management, job submission, file transfer, and job management.

The routines for each of these categories are detailed below. Note that the signature provided accurately reflects a routine's name, but return values and parameter specifications will vary according to the target programming language.


4.4.1.5 Methods for Transaction Management

beginTransaction
A prototype is

StatusAndTransaction beginTransaction(int duration);

Begin a transaction.

commitTransaction
A prototype is

Status commitTransaction(Transaction transaction);

Commit a transaction.

abortTransaction
A prototype is

Status abortTransaction(Transaction transaction);

Abort a transaction.

extendTransaction
A prototype is

StatusAndTransaction extendTransaction(Transaction transaction, int duration);

Request an extension in duration for a specific transaction.


4.4.1.6 Methods for Job Submission

submit
A prototype is

Status submit(Transaction transaction, int clusterId, int jobId, ClassAd jobAd);

Submit a job.

createJobTemplate
A prototype is

StatusAndClassAd createJobTemplate(int clusterId, int jobId, String owner, UniverseType type, String command, String arguments, String requirements);

Request a job ClassAd, given some of the job requirements. This job ClassAd will be suitable for use when submitting the job.


4.4.1.7 Methods for File Transfer

declareFile
A prototype is

Status declareFile(Transaction transaction, int clusterId, int jobId, String name, int size, HashType hashType, String hash);

Declare a file that may be used by a job.

sendFile
A prototype is

Status sendFile(Transaction transaction, int clusterId, int jobId, String name, int offset, Base64 data);

Send a file that a job may use.

getFile
A prototype is

StatusAndBase64 getFile(Transaction transaction, int clusterId, int jobId, String name, int offset, int length);

Get a file from a job's spool. Does not need to occur in a transaction.

closeSpool
A prototype is

Status closeSpool(Transaction transaction, int clusterId, int jobId);

Close a job's spool. All the files in the job's spool can then be deleted. Does not need to occur in a transaction.

listSpool
A prototype is

FileInfoArrayAndStatus listSpool(Transaction transaction, int clusterId, int jobId);

List the files in a job's spool. Does not need to occur in a transaction.


4.4.1.8 Methods for Job Management

newCluster
A prototype is

StatusAndInteger newCluster(Transaction transaction);

Create a new job cluster.

removeCluster
A prototype is

Status removeCluster(Transaction transaction, int clusterId, String reason);

Remove a job cluster, and all the jobs within it. Does not need to occur in a transaction.

newJob
A prototype is

StatusAndInteger newJob(Transaction transaction, int clusterId);

Create a new job within the most recently created job cluster.

removeJob
A prototype is

Status removeJob(Transaction transaction, int clusterId, int jobId, String reason, boolean forceRemoval);

Remove a job, regardless of the job's state. Does not need to occur in a transaction.

holdJob
A prototype is

Status holdJob();

Put a job into the Hold state, regardless of the job's current state. Does not need to occur in a transaction.

releaseJob
A prototype is

Status releaseJob(Transaction transaction, int clusterId, int jobId, String reason, boolean emailUser, boolean emailAdmin);

Release a job that has been in the Hold state. Does not need to occur in a transaction.

getJobAds
A prototype is

StatusAndClassAdArray getJobAds(Transaction transaction, String constraint);

Find all the job ClassAds matching the given constraint. Does not need to occur in a transaction.

getJobAd
A prototype is

StatusAndClassAd getJobAd(Transaction transaction, int clusterId, int jobId);

Find a specific job ClassAd. This method returns much the same as the first element of the array returned by

getJobAds(transaction, "(ClusterId==clusterId && JobId==jobId)")

requestReschedule
A prototype is

Status requestReschedule();

Request a condor_reschedule from the condor_schedd daemon.


4.4.2 The DRMAA API

The following quote from the DRMAA Specification 1.0 abstract nicely describes the purpose of the API:

The Distributed Resource Management Application API (DRMAA), developed by a working group of the Global Grid Forum (GGF), provides a generalized API to distributed resource management systems (DRMSs) in order to facilitate integration of application programs. The scope of DRMAA is limited to job submission, job monitoring and control, and the retrieval of the finished job status. DRMAA provides application developers and distributed resource management builders with a programming model that enables the development of distributed applications tightly coupled to an underlying DRMS. For deployers of such distributed applications, DRMAA preserves flexibility and choice in system design.

The API allows users who write programs using DRMAA functions and link to a DRMAA library to submit, control, and retrieve information about jobs in a Grid system. The Condor implementation of a portion of the API allows programs (applications) to use the provided library functions to submit, monitor, and control Condor jobs.

See the DRMAA site (http://www.drmaa.org) for the DRMAA 1.0 API specification and further details on the API.


4.4.2.1 Implementation Details

The library was developed from the DRMAA API Specification 1.0 of January 2004 and the DRMAA C Bindings v0.9 of September 2003. It is a static C library that expects a POSIX thread model on Unix systems and a Windows thread model on Windows systems. Unix systems that do not support POSIX threads are not guaranteed thread safety when calling the library's functions.

The object library file is called libcondordrmaa.a, and it is located within the <release>/lib directory in the Condor download. Its header file is called lib_condor_drmaa.h, and it is located within the <release>/include directory in the Condor download. Also within <release>/include is the file lib_condor_drmaa.README, which gives further details on the implementation.

Use of the library requires that a local condor_schedd daemon be running and that the program linked to the library has sufficient spool space. This space should be in /tmp or specified by the environment variables TEMP, TMP, or SPOOL. The program linked to the library and the local condor_schedd daemon must have read, write, and traverse rights to the spool space.
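
As a starting point, here is a minimal sketch of submitting one job and waiting for its completion through the C binding. It follows the DRMAA 1.0 C interface; the buffer-size constants are assumed to come from the binding's header, and lib_condor_drmaa.README should be consulted for Condor-specific details.

/* A minimal DRMAA sketch: start a session, submit /bin/date, wait for
 * it to finish, and clean up.  Error handling is abbreviated. */
#include <stdio.h>
#include "lib_condor_drmaa.h"

int main(void)
{
    char err[DRMAA_ERROR_STRING_BUFFER];
    char job_id[DRMAA_JOBNAME_BUFFER];
    char job_id_out[DRMAA_JOBNAME_BUFFER];
    drmaa_job_template_t *jt = NULL;
    drmaa_attr_values_t *rusage = NULL;
    int stat;

    if (drmaa_init(NULL, err, sizeof(err) - 1) != DRMAA_ERRNO_SUCCESS) {
        fprintf(stderr, "drmaa_init: %s\n", err);
        return 1;
    }

    drmaa_allocate_job_template(&jt, err, sizeof(err) - 1);
    drmaa_set_attribute(jt, DRMAA_REMOTE_COMMAND, "/bin/date",
                        err, sizeof(err) - 1);

    if (drmaa_run_job(job_id, sizeof(job_id) - 1, jt,
                      err, sizeof(err) - 1) == DRMAA_ERRNO_SUCCESS) {
        printf("submitted job %s\n", job_id);
        /* Block until the job completes. */
        drmaa_wait(job_id, job_id_out, sizeof(job_id_out) - 1, &stat,
                   DRMAA_TIMEOUT_WAIT_FOREVER, &rusage,
                   err, sizeof(err) - 1);
        drmaa_release_attr_values(rusage);
    }

    drmaa_delete_job_template(jt, err, sizeof(err) - 1);
    drmaa_exit(err, sizeof(err) - 1);
    return 0;
}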

The library currently supports the following specification-defined job attributes:

DRMAA_REMOTE_COMMAND
DRMAA_JS_STATE
DRMAA_NATIVE_SPECIFICATION
DRMAA_BLOCK_EMAIL
DRMAA_INPUT_PATH
DRMAA_OUTPUT_PATH
DRMAA_ERROR_PATH
DRMAA_V_ARGV
DRMAA_V_ENV
DRMAA_V_EMAIL

The attribute DRMAA_NATIVE_SPECIFICATION can be used to specify any of the commands supported within submit description files. See the condor_submit manual page at section 9 for a complete list. Multiple commands can be specified if separated by newlines.

As in the normal submit file, arbitrary attributes can be added to the job's ClassAd by prefixing the attribute with +. In this case, you will need to put string values in quotation marks, the same as in a submit file.

Thus, to tell Condor that the job will likely use 64 megabytes of memory (65536 kilobytes), to rank machines with more memory more highly, and to add the arbitrary attribute department set to chemistry, set the DRMAA_NATIVE_SPECIFICATION attribute to the C string:

  drmaa_set_attribute(jobtemplate, DRMAA_NATIVE_SPECIFICATION,
      "image_size=65536\nrank=Memory\n+department=\"chemistry\"",
      err_buf, sizeof(err_buf)-1);


4.4.3 The Command Line Interface

This section has not yet been written.


4.4.4 The Condor GAHP

This section has not yet been written.


4.4.5 The Condor Perl Module

The Condor Perl module facilitates automatic submitting and monitoring of Condor jobs, along with automated administration of Condor. The most common use of this module is the monitoring of Condor jobs. The Condor Perl module can be used as a meta scheduler for the submission of Condor jobs.

The Condor Perl module provides several subroutines. Some of the subroutines are used as callbacks; an event triggers the execution of a specific subroutine. Other subroutines denote actions to be taken by the Perl script. Some of these subroutines take other subroutines as arguments.

4.4.5.1 Subroutines

Submit(submit_description_file)
This subroutine takes the action of submitting a job to Condor. The argument is the name of a submit description file. The condor_submit program should be in the path of the user. If the user wishes to monitor the job with Condor, a log file must be specified in the submit description file. The cluster number of the submitted job is returned. For more information, see the condor_submit man page.

Vacate(machine)
This subroutine takes the action of sending a condor_vacate command to the machine specified as an argument. The machine may be specified either by host name, or by sinful string. For more information see the condor_vacate man page.

Reschedule(machine)
This subroutine takes the action of sending a condor_reschedule command to the machine specified as an argument. The machine may be specified either by host name, or by sinful string. For more information see the condor_reschedule man page.

Monitor(cluster)
Takes the action of monitoring this cluster. It returns when all jobs in cluster terminate.

Wait()
Takes the action of waiting until all monitor subroutines finish, and then exits the Perl script.

DebugOn()
Takes the action of turning debug messages on. This may be useful when attempting to debug the Perl script.

DebugOff()
Takes the action of turning debug messages off.

RegisterEvicted(sub)
Register a subroutine (called sub) to be used as a callback when a job from a specified cluster is evicted. The subroutine will be called with two arguments: cluster and job. The cluster and job are the cluster number and process number of the job that was evicted.

RegisterEvictedWithCheckpoint(sub)
Same as RegisterEvicted except that the handler is called when the evicted job was checkpointed.

RegisterEvictedWithoutCheckpoint(sub)
Same as RegisterEvicted except that the handler is called when the evicted job was not checkpointed.

RegisterExit(sub)
Register a termination handler that is called when a job exits. The termination handler will be called with two arguments: cluster and job. The cluster and job are the cluster and process numbers of the exiting job.

RegisterExitSuccess(sub)
Register a termination handler that is called when a job exits without errors. The termination handler will be called with two arguments: cluster and job. The cluster and job are the cluster and process numbers of the exiting job.

RegisterExitFailure(sub)
Register a termination handler that is called when a job exits with errors. The termination handler will be called with three arguments: cluster, job, and retval. The cluster and job are the cluster and process numbers of the exiting job, and retval is the exit code of the job.

RegisterExitAbnormal(sub)
Register a termination handler that is called when a job exits abnormally (segmentation fault, bus error, ...). The termination handler will be called with four arguments: cluster, job, signal, and core. The cluster and job are the cluster and process numbers of the exiting job. The signal indicates the signal with which the job died, and core indicates whether a core file was created, and if so, the full path to the core file.

RegisterAbort(sub)
Register a handler that is called when a job is aborted by a user.

RegisterJobErr(sub)
Register a handler that is called when a job is not executable.

RegisterExecute(sub)
Register an execution handler that is called whenever a job starts running on a given host. The handler is called with four arguments: cluster, job, host, and sinful. Cluster and job are the cluster and process numbers for the job, host is the Internet address of the machine running the job, and sinful is the Internet address and command port of the condor_starter supervising the job.

RegisterSubmit(sub)
Register a submit handler that is called whenever a job is submitted with the given cluster. The handler is called with four arguments: cluster, job, host, and sinful. Cluster and job are the cluster and process numbers for the job, host is the Internet address of the machine running the job, and sinful is the Internet address and command port of the condor_schedd responsible for the job.


4.4.5.2 Examples

The following is an example that uses the Condor Perl module. The example uses the submit description file mycmdfile.cmd to specify the submission of a job. As the job is matched with a machine and begins to execute, a callback subroutine (called execute) sends a condor_vacate command to the machine running the job, and it increments a counter which keeps track of the number of times this callback executes. A second callback keeps a count of the number of times that the job was evicted before the job completes. After the job completes, the termination callback (called normal) prints out a summary of what happened.

#!/usr/bin/perl
use Condor;

$CMD_FILE = 'mycmdfile.cmd';
$evicts = 0;
$vacates = 0;

# A subroutine that will be used as the normal execution callback
$normal = sub
{
    %parameters = @_;
    $cluster = $parameters{'cluster'};
    $job = $parameters{'job'};

    print "Job $cluster.$job exited normally without errors.\n";
    print "Job was vacated $vacates times and evicted $evicts times\n";
    exit(0);
};	

$evicted = sub
{
    %parameters = @_;
    $cluster = $parameters{'cluster'};
    $job = $parameters{'job'};

    print "Job $cluster, $job was evicted.\n";
    $evicts++;
    &Condor::Reschedule();	
};

$execute = sub
{
    %parameters = @_;
    $cluster = $parameters{'cluster'};
    $job = $parameters{'job'};
    $host = $parameters{'host'};
    $sinful = $parameters{'sinful'};

    print "Job running on $sinful, vacating...\n";
    &Condor::Vacate($sinful);
    $vacates++;
};

$cluster = Condor::Submit($CMD_FILE);
if (($cluster) == 0)
{
    printf("Could not open $CMD_FILE.\n");
    exit(1);
}
&Condor::RegisterExitSuccess($normal);
&Condor::RegisterEvicted($evicted);
&Condor::RegisterExecute($execute);
&Condor::Monitor($cluster);
&Condor::Wait();

This example program will submit the command file 'mycmdfile.cmd' and attempt to vacate any machine that the job runs on. The termination handler then prints out a summary of what has happened.

A second example Perl script facilitates the metascheduling of two Condor jobs. It submits a second job if the first job completes successfully.

#!/s/std/bin/perl

# tell Perl where to find the Condor library
use lib '/unsup/condor/lib';
# tell Perl to use what it finds in the Condor library
use Condor;

$SUBMIT_FILE1 = 'Asubmit.cmd';
$SUBMIT_FILE2 = 'Bsubmit.cmd';

# Callback used when first job exits without errors.
$firstOK = sub
{
    %parameters = @_;
    $cluster = $parameters{'cluster'};
    $job = $parameters{'job'};

    $cluster = Condor::Submit($SUBMIT_FILE2);
    if (($cluster) == 0)
    {
        printf("Could not open $SUBMIT_FILE2.\n");
    }

    &Condor::RegisterExitSuccess($secondOK);
    &Condor::RegisterExitFailure($secondfails);
    &Condor::Monitor($cluster);
};	

$firstfails = sub
{
    %parameters = @_;
    $cluster = $parameters{'cluster'};
    $job = $parameters{'job'};

    print "The first job, $cluster.$job failed, exiting with an error. \n";
    exit(0);
};	

# Callback used when second job exits without errors.
$secondOK = sub
{
    %parameters = @_;
    $cluster = $parameters{'cluster'};
    $job = $parameters{'job'};

    print "The second job, $cluster.$job successfully completed. \n";
    exit(0);
};	

# Callback used when second job exits WITH an error.
$secondfails = sub
{
    %parameters = @_;
    $cluster = $parameters{'cluster'};
    $job = $parameters{'job'};

    print "The second job ($cluster.$job) failed. \n";
    exit(0);
};	


$cluster = Condor::Submit($SUBMIT_FILE1);
if (($cluster) == 0)
{
    printf("Could not open $SUBMIT_FILE1. \n");
}
&Condor::RegisterExitSuccess($firstOK);
&Condor::RegisterExitFailure($firstfails);


&Condor::Monitor($cluster);
&Condor::Wait();

Some notes are in order about this example. The same task could be accomplished using the Condor DAGMan metascheduler. The first job is the parent, and the second job is the child. The input file to DAGMan is significantly simpler than this Perl script.

A third example using the Condor Perl module expands upon the second example. Whereas the second example could have been more easily implemented using DAGMan, this third example shows the versatility of using Perl as a metascheduler.

In this example, the result generated from the successful completion of the first job is used to decide which subsequent job should be submitted. This is a very simple example of a branch and bound technique used to focus the search for a problem solution.

#!/s/std/bin/perl

# tell Perl where to find the Condor library
use lib '/unsup/condor/lib';
# tell Perl to use what it finds in the Condor library
use Condor;

$SUBMIT_FILE1 = 'Asubmit.cmd';
$SUBMIT_FILE2 = 'Bsubmit.cmd';
$SUBMIT_FILE3 = 'Csubmit.cmd';

# Callback used when first job exits without errors.
$firstOK = sub
{
    %parameters = @_;
    $cluster = $parameters{'cluster'};
    $job = $parameters{'job'};

    # open output file from first job, and read the result
    if ( -f "A.output" )
    {
        open(RESULTFILE, "A.output") or die "Could not open result file.";
        $result = <RESULTFILE>;
        close(RESULTFILE);
        # next job to submit is based on output from first job
        if ($result < 100)
        {
            $cluster = Condor::Submit($SUBMIT_FILE2);
            if (($cluster) == 0)
            {
                printf("Could not open $SUBMIT_FILE2.\n");
            }

            &Condor::RegisterExitSuccess($secondOK);
            &Condor::RegisterExitFailure($secondfails);
            &Condor::Monitor($cluster);
        }
        else
        {
            $cluster = Condor::Submit($SUBMIT_FILE3);
            if (($cluster) == 0)
            {
                printf("Could not open $SUBMIT_FILE3.\n");
            }

            &Condor::RegisterExitSuccess($thirdOK);
            &Condor::RegisterExitFailure($thirdfails);
            &Condor::Monitor($cluster);
        }
    }
    else
    {
        printf("Results file does not exist.\n");
    }
};	

$firstfails = sub
{
    %parameters = @_;
    $cluster = $parameters{'cluster'};
    $job = $parameters{'job'};

    print "The first job, $cluster.$job failed, exiting with an error. \n";
    exit(0);
};	


# Callback used when second job exits without errors.
$secondOK = sub
{
    %parameters = @_;
    $cluster = $parameters{'cluster'};
    $job = $parameters{'job'};

    print "The second job, $cluster.$job successfully completed. \n";
    exit(0);
};	


# Callback used when third job exits without errors.
$thirdOK = sub
{
    %parameters = @_;
    $cluster = $parameters{'cluster'};
    $job = $parameters{'job'};

    print "The third job, $cluster.$job successfully completed. \n";
    exit(0);
};	


# Callback used when second job exits WITH an error.
$secondfails = sub
{
    %parameters = @_;
    $cluster = $parameters{'cluster'};
    $job = $parameters{'job'};

    print "The second job ($cluster.$job) failed. \n";
    exit(0);
};	

# Callback used when third job exits WITH an error.
$thirdfails = sub
{
    %parameters = @_;
    $cluster = $parameters{'cluster'};
    $job = $parameters{'job'};

    print "The third job ($cluster.$job) failed. \n";
    exit(0);
};	


$cluster = Condor::Submit($SUBMIT_FILE1);
if (($cluster) == 0)
{
    printf("Could not open $SUBMIT_FILE1. \n");
}
&Condor::RegisterExitSuccess($firstOK);
&Condor::RegisterExitFailure($firstfails);


&Condor::Monitor($cluster);
&Condor::Wait();

