What're all the LLVM layers for?
I'm playing with LLVM 3.7 and want to use the new ORC stuff. But I've been at it for a few hours now and still don't understand what each layer is for, when to use them, how to compose them, or at the very least what the minimal set is that I need.

I've been through the Kaleidoscope tutorial, but it doesn't explain what the constituent parts are, it just says to put this here and that there (and the parsing etc. distracts from the core LLVM bits). That's fine to get started with, but it leaves a lot of gaps. There is plenty of documentation on all sorts of things, e.g. http://llvm.org/releases/3.7.0/docs/ProgrammersManual.html, but frankly it's so much that it's almost overwhelming, and I can't find anything that explains how all the pieces fit together. Even more confusing, there seem to be multiple APIs for doing the same thing; I'm thinking of MCJIT and the newer ORC API. I saw Lang Hames' post and similar explanations, but a few things seem to have changed since the patch he posted in the link.

So, for a specific question: how do all these layers fit together?

When I used LLVM before, I could link to C functions quite easily. Using the "How to use JIT" example as a base, I tried linking to an external function extern "C" double doIt, but it ends with LLVM ERROR: Tried to execute an unknown external function: doIt.

Looking at this ORC example, it seems I need to configure where it searches for symbols. But TBH, I'm still flailing around and it's mostly guesswork. Here's what I've got:
#include "llvm/ADT/STLExtras.h"
#include "llvm/ExecutionEngine/GenericValue.h"
#include "llvm/ExecutionEngine/Interpreter.h"
#include "llvm/IR/Constants.h"
#include "llvm/IR/DerivedTypes.h"
#include "llvm/IR/IRBuilder.h"
#include "llvm/IR/Instructions.h"
#include "llvm/IR/LLVMContext.h"
#include "llvm/IR/Module.h"
#include "llvm/Support/ManagedStatic.h"
#include "llvm/Support/TargetSelect.h"
#include "llvm/Support/raw_ostream.h"
#include "std.hpp"
using namespace llvm;
int main() {
InitializeNativeTarget();
LLVMContext Context;
// Create some module to put our function into it.
std::unique_ptr<Module> Owner = make_unique<Module>("test", Context);
Module *M = Owner.get();
// Create the add1 function entry and insert this entry into module M. The
// function will have a return type of "int" and take an argument of "int".
// The '0' terminates the list of argument types.
Function *Add1F = cast<Function>(M->getOrInsertFunction("add1", Type::getInt32Ty(Context), Type::getInt32Ty(Context), (Type *) 0));
// Add a basic block to the function. As before, it automatically inserts
// because of the last argument.
BasicBlock *BB = BasicBlock::Create(Context, "EntryBlock", Add1F);
// Create a basic block builder with default parameters. The builder will
// automatically append instructions to the basic block `BB'.
IRBuilder<> builder(BB);
// Get pointers to the constant `1'.
Value *One = builder.getInt32(1);
// Get pointers to the integer argument of the add1 function...
assert(Add1F->arg_begin() != Add1F->arg_end()); // Make sure there's an arg
Argument *ArgX = Add1F->arg_begin(); // Get the arg
ArgX->setName("AnArg"); // Give it a nice symbolic name for fun.
// Create the add instruction, inserting it into the end of BB.
Value *Add = builder.CreateAdd(One, ArgX);
// Create the return instruction and add it to the basic block
builder.CreateRet(Add);
// Now, function add1 is ready.
// Now we're going to create function `foo', which returns an int and takes no
// arguments.
Function *FooF = cast<Function>(M->getOrInsertFunction("foo", Type::getInt32Ty(Context), (Type *) 0));
// Add a basic block to the FooF function.
BB = BasicBlock::Create(Context, "EntryBlock", FooF);
// Tell the basic block builder to attach itself to the new basic block
builder.SetInsertPoint(BB);
// Get pointer to the constant `10'.
Value *Ten = builder.getInt32(10);
// Pass Ten to the call to Add1F
CallInst *Add1CallRes = builder.CreateCall(Add1F, Ten);
Add1CallRes->setTailCall(true);
// Create the return instruction and add it to the basic block.
builder.CreateRet(Add1CallRes);
// Declare the external function `doIt' (from std.hpp) so we can refer to it
// from the JIT'd code.
std::vector<Type *> args;
args.push_back(Type::getDoubleTy(getGlobalContext()));
FunctionType *FT = FunctionType::get(Type::getDoubleTy(getGlobalContext()), args, false);
Function *F = Function::Create(FT, Function::ExternalLinkage, "doIt", Owner.get());
// Now we create the JIT.
ExecutionEngine *EE = EngineBuilder(std::move(Owner)).create();
outs() << "We just constructed this LLVM module:\n\n" << *M;
outs() << "\n\nRunning foo: ";
outs().flush();
// Call the `foo' function with no arguments:
std::vector<GenericValue> noargs;
GenericValue gv = EE->runFunction(FooF, noargs);
// Call the external `doIt' the same way; this is the call that fails with
// the "unknown external function" error quoted above.
auto ax = EE->runFunction(F, noargs);
// Import result of execution:
outs() << "Result: " << gv.IntVal << "\n";
outs() << "Result 2: " << ax.IntVal << "\n";
delete EE;
llvm_shutdown();
return 0;
}
doIt is declared in std.hpp.
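For reference, the declaration in std.hpp is essentially the following (the exact signature is reconstructed here from the double(double) FunctionType I build above):

// std.hpp: declared extern "C" so the symbol name is not mangled.
extern "C" double doIt(double x);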
Your question is pretty vague, but maybe I can help a bit. This code sample is a simple JIT built with Orc; it is well commented, so it should be easy to follow.

In short, Orc is built on top of the same building blocks that MCJIT uses (MC for compiling LLVM modules down to object files, RuntimeDyld for the runtime dynamic linking), but it provides the more flexible concept of layers. That is how it can support things like "lazy" JIT compilation, which MCJIT doesn't. This matters to the LLVM community because the "old JIT" that was removed a while ago did support those features. Orc JIT lets us regain these advanced JIT capabilities while still building on top of MC, and thus without duplicating the code-emission logic.
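To make the layering a bit more concrete, here is a rough sketch of how the two base layers are typically stacked in 3.7. It is written from memory of the Orc headers and the Kaleidoscope/Orc example, so treat the exact names and signatures (ObjectLinkingLayer<>, IRCompileLayer, createLambdaResolver, addModuleSet) as approximate rather than authoritative. It also shows the part relevant to your doIt error: the symbol resolver you hand in with each module set decides where unresolved symbols come from, and falling back to the host process is what lets JIT'd code call an extern "C" function linked into your executable.

#include "llvm/ADT/STLExtras.h"
#include "llvm/ExecutionEngine/ExecutionEngine.h"
#include "llvm/ExecutionEngine/RTDyldMemoryManager.h"
#include "llvm/ExecutionEngine/RuntimeDyld.h"
#include "llvm/ExecutionEngine/SectionMemoryManager.h"
#include "llvm/ExecutionEngine/Orc/CompileUtils.h"
#include "llvm/ExecutionEngine/Orc/IRCompileLayer.h"
#include "llvm/ExecutionEngine/Orc/LambdaResolver.h"
#include "llvm/ExecutionEngine/Orc/ObjectLinkingLayer.h"
#include "llvm/IR/Module.h"
#include "llvm/Support/DynamicLibrary.h"
#include "llvm/Target/TargetMachine.h"
using namespace llvm;
using namespace llvm::orc;

// A minimal, untested sketch of an Orc-based JIT against the 3.7 headers.
// Call InitializeNativeTarget() and InitializeNativeTargetAsmPrinter() first.
class SimpleOrcJIT {
public:
  typedef ObjectLinkingLayer<> ObjLayerT;          // RuntimeDyld: links/relocates objects in memory
  typedef IRCompileLayer<ObjLayerT> CompileLayerT; // MC: compiles IR modules down to object files
  typedef CompileLayerT::ModuleSetHandleT ModuleHandleT;

  SimpleOrcJIT()
      : TM(EngineBuilder().selectTarget()),
        CompileLayer(ObjectLayer, SimpleCompiler(*TM)) {
    // Export the host process' own symbols so the resolver below can find
    // things like an extern "C" doIt that is linked into this executable.
    sys::DynamicLibrary::LoadLibraryPermanently(nullptr);
  }

  ModuleHandleT addModule(std::unique_ptr<Module> M) {
    // The resolver is where you "configure where it searches for symbols":
    // first look in code this JIT has already emitted, then fall back to
    // the host process.
    auto Resolver = createLambdaResolver(
        [&](const std::string &Name) -> RuntimeDyld::SymbolInfo {
          if (auto Sym = CompileLayer.findSymbol(Name, false))
            return RuntimeDyld::SymbolInfo(Sym.getAddress(), Sym.getFlags());
          if (auto Addr = RTDyldMemoryManager::getSymbolAddressInProcess(Name))
            return RuntimeDyld::SymbolInfo(Addr, JITSymbolFlags::Exported);
          return RuntimeDyld::SymbolInfo(nullptr);
        },
        [](const std::string &) { return nullptr; });
    std::vector<std::unique_ptr<Module>> Ms;
    Ms.push_back(std::move(M));
    return CompileLayer.addModuleSet(std::move(Ms),
                                     make_unique<SectionMemoryManager>(),
                                     std::move(Resolver));
  }

  JITSymbol findSymbol(const std::string &Name) {
    // Note: on targets with a global symbol prefix (e.g. Darwin) you need to
    // mangle Name with the module's DataLayout first.
    return CompileLayer.findSymbol(Name, true);
  }

private:
  std::unique_ptr<TargetMachine> TM;
  ObjLayerT ObjectLayer;
  CompileLayerT CompileLayer;
};

Because each layer only takes a module (or object) set plus a resolver, you can slot additional layers, such as a lazy-compile layer, between the IR and object layers without touching the rest of the stack; that composability is the main point of the design.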
To get better answers, I'd suggest asking more specific questions.