mlir::ExecutionEngine Class Reference

#include <ExecutionEngine.h>

Public Member Functions

 ExecutionEngine (bool enableObjectCache)
 
llvm::Expected< void(*)(void **)> lookup (StringRef name) const
 
template<typename... Args>
llvm::Error invoke (StringRef name, Args &... args)
 
llvm::Error invoke (StringRef name, MutableArrayRef< void *> args)
 
void dumpToObjectFile (StringRef filename)
 Dump object code to output file filename.
 

Static Public Member Functions

static llvm::Expected< std::unique_ptr< ExecutionEngine > > create (ModuleOp m, std::function< llvm::Error(llvm::Module *)> transformer={}, Optional< llvm::CodeGenOpt::Level > jitCodeGenOptLevel=llvm::None, ArrayRef< StringRef > sharedLibPaths={}, bool enableObjectCache=false)
 
static bool setupTargetTriple (llvm::Module *llvmModule)
 

Detailed Description

JIT-backed execution engine for MLIR modules. Assumes the module can be converted to LLVM IR. For each function in the module, the engine creates a wrapper function with the fixed interface

void _mlir_funcName(void **)

where the only argument is interpreted as a list of pointers to the actual arguments of the function, followed by a pointer to the result. This allows the engine to provide the caller with a generic function pointer that can be used to invoke the JIT-compiled function.

Constructor & Destructor Documentation

◆ ExecutionEngine()

ExecutionEngine::ExecutionEngine ( bool  enableObjectCache)

Member Function Documentation

◆ create()

Expected< std::unique_ptr< ExecutionEngine > > ExecutionEngine::create ( ModuleOp  m,
std::function< llvm::Error(llvm::Module *)>  transformer = {},
Optional< llvm::CodeGenOpt::Level >  jitCodeGenOptLevel = llvm::None,
ArrayRef< StringRef >  sharedLibPaths = {},
bool  enableObjectCache = false 
)
static

Creates an execution engine for the given module. If transformer is provided, it will be called on the LLVM module during JIT-compilation and can be used, e.g., for reporting or optimization. jitCodeGenOptLevel, when provided, is used as the optimization level for target code generation. If sharedLibPaths are provided, the underlying JIT-compilation will open and link the shared libraries for symbol resolution. If enableObjectCache is set, the JIT compiler will create and use an object cache to store the object generated for the given module.
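A typical call sequence might look like the following. This is only a sketch against the MLIR C++ API: it assumes an MLIR build, a `ModuleOp` already lowered to the LLVM dialect, and a JIT-compiled function named "foo"; it is not compilable standalone, and exact headers and helper names vary across MLIR versions.

```cpp
// Sketch only; error-handling paths abbreviated.
auto maybeEngine = mlir::ExecutionEngine::create(module);
if (!maybeEngine) {
  // Handle the llvm::Error from maybeEngine.takeError().
  return;
}
auto &engine = *maybeEngine; // std::unique_ptr<ExecutionEngine>

int32_t input = 5, result = 0;
// Arguments are passed as lvalues; see invoke() below.
llvm::Error err = engine->invoke("foo", input, result);
```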

◆ dumpToObjectFile()

void ExecutionEngine::dumpToObjectFile ( StringRef  filename)

Dump object code to output file filename.

◆ invoke() [1/2]

template<typename... Args>
llvm::Error mlir::ExecutionEngine::invoke ( StringRef  name,
Args &...  args 
)

Invokes the function with the given name passing it the list of arguments. The arguments are accepted by lvalue-reference since the packed function interface expects a list of non-null pointers.

◆ invoke() [2/2]

Error ExecutionEngine::invoke ( StringRef  name,
MutableArrayRef< void *>  args 
)

Invokes the function with the given name passing it the list of arguments as a list of opaque pointers. This is the arity-agnostic equivalent of the templated invoke.

◆ lookup()

Expected< void(*)(void **)> ExecutionEngine::lookup ( StringRef  name) const

Looks up a packed-argument function with the given name and returns a pointer to it. Propagates errors in case of failure.

◆ setupTargetTriple()

bool ExecutionEngine::setupTargetTriple ( llvm::Module *  llvmModule)
static

Sets the target triple on the module. This is done implicitly when creating the engine.
