From 2916489c54a303971bc546f287b71532d49ddd33 Mon Sep 17 00:00:00 2001 From: kristina Date: Sat, 16 Nov 2019 20:58:08 +0000 Subject: [PATCH] [Docs] Fix relative links in tutorial. Update relative links in Kaleidoscope tutorial. --- .../MyFirstLanguageFrontend/LangImpl03.rst | 18 +++++++++--------- .../MyFirstLanguageFrontend/LangImpl04.rst | 8 ++++---- .../MyFirstLanguageFrontend/LangImpl05.rst | 2 +- .../MyFirstLanguageFrontend/LangImpl07.rst | 8 ++++---- .../MyFirstLanguageFrontend/LangImpl10.rst | 8 ++++---- 5 files changed, 22 insertions(+), 22 deletions(-) diff --git a/llvm/docs/tutorial/MyFirstLanguageFrontend/LangImpl03.rst b/llvm/docs/tutorial/MyFirstLanguageFrontend/LangImpl03.rst index 7b4e24d35e4020..5364b172ad91bf 100644 --- a/llvm/docs/tutorial/MyFirstLanguageFrontend/LangImpl03.rst +++ b/llvm/docs/tutorial/MyFirstLanguageFrontend/LangImpl03.rst @@ -198,22 +198,22 @@ automatically provide each one with an increasing, unique numeric suffix. Local value names for instructions are purely optional, but it makes it much easier to read the IR dumps. -`LLVM instructions <../LangRef.html#instruction-reference>`_ are constrained by strict +`LLVM instructions <../../LangRef.html#instruction-reference>`_ are constrained by strict rules: for example, the Left and Right operators of an `add -instruction <../LangRef.html#add-instruction>`_ must have the same type, and the +instruction <../../LangRef.html#add-instruction>`_ must have the same type, and the result type of the add must match the operand types. Because all values in Kaleidoscope are doubles, this makes for very simple code for add, sub and mul. On the other hand, LLVM specifies that the `fcmp -instruction <../LangRef.html#fcmp-instruction>`_ always returns an 'i1' value (a +instruction <../../LangRef.html#fcmp-instruction>`_ always returns an 'i1' value (a one bit integer). The problem with this is that Kaleidoscope wants the value to be a 0.0 or 1.0 value. In order to get these semantics, we combine the fcmp instruction with a `uitofp -instruction <../LangRef.html#uitofp-to-instruction>`_. This instruction converts its +instruction <../../LangRef.html#uitofp-to-instruction>`_. This instruction converts its input integer into a floating point value by treating the input as an unsigned value. In contrast, if we used the `sitofp -instruction <../LangRef.html#sitofp-to-instruction>`_, the Kaleidoscope '<' operator +instruction <../../LangRef.html#sitofp-to-instruction>`_, the Kaleidoscope '<' operator would return 0.0 and -1.0, depending on the input value. .. code-block:: c++ @@ -246,14 +246,14 @@ can use the LLVM symbol table to resolve function names for us. Once we have the function to call, we recursively codegen each argument that is to be passed in, and create an LLVM `call -instruction <../LangRef.html#call-instruction>`_. Note that LLVM uses the native C +instruction <../../LangRef.html#call-instruction>`_. Note that LLVM uses the native C calling conventions by default, allowing these calls to also call into standard library functions like "sin" and "cos", with no additional effort. This wraps up our handling of the four basic expressions that we have so far in Kaleidoscope. Feel free to go in and add some more. For example, -by browsing the `LLVM language reference <../LangRef.html>`_ you'll find +by browsing the `LLVM language reference <../../LangRef.html>`_ you'll find several other interesting instructions that are really easy to plug into our basic framework. 
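For readers following along, the '<' comparison described in the hunk above maps onto two IRBuilder calls. This is a minimal sketch, assuming the tutorial's double-only type system; the builder and context are passed in explicitly here rather than taken from the tutorial's globals:

.. code-block:: c++

    #include "llvm/IR/IRBuilder.h"

    using namespace llvm;

    // Sketch: lower Kaleidoscope's '<' operator. fcmp ult yields an i1,
    // and uitofp widens that 0/1 result back to 0.0/1.0 so every
    // expression keeps the double type.
    static Value *codegenLessThan(IRBuilder<> &Builder, LLVMContext &Ctx,
                                  Value *L, Value *R) {
      Value *Cmp = Builder.CreateFCmpULT(L, R, "cmptmp");
      return Builder.CreateUIToFP(Cmp, Type::getDoubleTy(Ctx), "booltmp");
    }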
@@ -297,7 +297,7 @@ are, so you don't "new" a type, you "get" it. The final line above actually creates the IR Function corresponding to the Prototype. This indicates the type, linkage and name to use, as well as which module to insert into. "`external -linkage <../LangRef.html#linkage>`_" means that the function may be +linkage <../../LangRef.html#linkage>`_" means that the function may be defined outside the current module and/or that it is callable by functions outside the module. The Name passed in is the name the user specified: since "``TheModule``" is specified, this name is registered @@ -385,7 +385,7 @@ Once the insertion point has been set up and the NamedValues map populated, we call the ``codegen()`` method for the root expression of the function. If no error happens, this emits code to compute the expression into the entry block and returns the value that was computed. Assuming no error, we then create an -LLVM `ret instruction <../LangRef.html#ret-instruction>`_, which completes the function. +LLVM `ret instruction <../../LangRef.html#ret-instruction>`_, which completes the function. Once the function is built, we call ``verifyFunction``, which is provided by LLVM. This function does a variety of consistency checks on the generated code, to determine if our compiler is doing everything diff --git a/llvm/docs/tutorial/MyFirstLanguageFrontend/LangImpl04.rst b/llvm/docs/tutorial/MyFirstLanguageFrontend/LangImpl04.rst index bf4e2398d28a4c..b643ae583c3694 100644 --- a/llvm/docs/tutorial/MyFirstLanguageFrontend/LangImpl04.rst +++ b/llvm/docs/tutorial/MyFirstLanguageFrontend/LangImpl04.rst @@ -117,8 +117,8 @@ but if run at link time, this can be a substantial portion of the whole program). It also supports and includes "per-function" passes which just operate on a single function at a time, without looking at other functions. For more information on passes and how they are run, see the -`How to Write a Pass <../WritingAnLLVMPass.html>`_ document and the -`List of LLVM Passes <../Passes.html>`_. +`How to Write a Pass <../../WritingAnLLVMPass.html>`_ document and the +`List of LLVM Passes <../../Passes.html>`_. For Kaleidoscope, we are currently generating functions on the fly, one at a time, as the user types them in. We aren't shooting for the @@ -130,7 +130,7 @@ exactly the code we have now, except that we would defer running the optimizer until the entire file has been parsed. In order to get per-function optimizations going, we need to set up a -`FunctionPassManager <../WritingAnLLVMPass.html#what-passmanager-doesr>`_ to hold +`FunctionPassManager <../../WritingAnLLVMPass.html#what-passmanager-doesr>`_ to hold and organize the LLVM optimizations that we want to run. Once we have that, we can add a set of optimizations to run. We'll need a new FunctionPassManager for each module that we want to optimize, so we'll @@ -207,7 +207,7 @@ point add instruction from every execution of this function. LLVM provides a wide variety of optimizations that can be used in certain circumstances. Some `documentation about the various -passes <../Passes.html>`_ is available, but it isn't very complete. +passes <../../Passes.html>`_ is available, but it isn't very complete. Another good source of ideas can come from looking at the passes that ``Clang`` runs to get started. 
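As a point of reference for the pass-manager discussion above, the per-function pipeline the chapter builds looks roughly like this. It is a sketch only: the tutorial keeps the resulting ``FunctionPassManager`` in its ``TheFPM`` global, and pass headers can move between LLVM releases.

.. code-block:: c++

    #include "llvm/IR/LegacyPassManager.h"
    #include "llvm/Transforms/InstCombine/InstCombine.h"
    #include "llvm/Transforms/Scalar.h"
    #include "llvm/Transforms/Scalar/GVN.h"
    #include <memory>

    using namespace llvm;

    // Sketch: attach a FunctionPassManager to a module and register a few
    // standard cleanups; each function is run through it after codegen.
    static std::unique_ptr<legacy::FunctionPassManager>
    makeFunctionPasses(Module *M) {
      auto FPM = std::make_unique<legacy::FunctionPassManager>(M);
      FPM->add(createInstructionCombiningPass()); // peephole, bit-twiddling
      FPM->add(createReassociatePass());          // reassociate expressions
      FPM->add(createGVNPass());                  // eliminate common subexpressions
      FPM->add(createCFGSimplificationPass());    // drop unreachable blocks, etc.
      FPM->doInitialization();
      return FPM;
    }

Each freshly generated function is then handed to the manager, e.g. ``TheFPM->run(*TheFunction)`` in the tutorial's codegen path.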
The "``opt``" tool allows you to experiment with passes from the command line, so you can see if they do diff --git a/llvm/docs/tutorial/MyFirstLanguageFrontend/LangImpl05.rst b/llvm/docs/tutorial/MyFirstLanguageFrontend/LangImpl05.rst index 0e61c07659de9e..725423f2d3892a 100644 --- a/llvm/docs/tutorial/MyFirstLanguageFrontend/LangImpl05.rst +++ b/llvm/docs/tutorial/MyFirstLanguageFrontend/LangImpl05.rst @@ -217,7 +217,7 @@ Kaleidoscope looks like this: To visualize the control flow graph, you can use a nifty feature of the LLVM '`opt `_' tool. If you put this LLVM IR into "t.ll" and run "``llvm-as < t.ll | opt -analyze -view-cfg``", `a -window will pop up <../ProgrammersManual.html#viewing-graphs-while-debugging-code>`_ and you'll +window will pop up <../../ProgrammersManual.html#viewing-graphs-while-debugging-code>`_ and you'll see this graph: .. figure:: LangImpl05-cfg.png diff --git a/llvm/docs/tutorial/MyFirstLanguageFrontend/LangImpl07.rst b/llvm/docs/tutorial/MyFirstLanguageFrontend/LangImpl07.rst index 31e2ffb16907d9..14501fdf643e0b 100644 --- a/llvm/docs/tutorial/MyFirstLanguageFrontend/LangImpl07.rst +++ b/llvm/docs/tutorial/MyFirstLanguageFrontend/LangImpl07.rst @@ -105,7 +105,7 @@ direct accesses to G and H: they are not renamed or versioned. This differs from some other compiler systems, which do try to version memory objects. In LLVM, instead of encoding dataflow analysis of memory into the LLVM IR, it is handled with `Analysis -Passes <../WritingAnLLVMPass.html>`_ which are computed on demand. +Passes <../../WritingAnLLVMPass.html>`_ which are computed on demand. With this in mind, the high-level idea is that we want to make a stack variable (which lives in memory, because it is on the stack) for each @@ -120,7 +120,7 @@ that @G defines *space* for an i32 in the global data area, but its *name* actually refers to the address for that space. Stack variables work the same way, except that instead of being declared with global variable definitions, they are declared with the `LLVM alloca -instruction <../LangRef.html#alloca-instruction>`_: +instruction <../../LangRef.html#alloca-instruction>`_: .. code-block:: llvm @@ -223,7 +223,7 @@ variables in certain circumstances: funny pointer arithmetic is involved, the alloca will not be promoted. #. mem2reg only works on allocas of `first - class <../LangRef.html#first-class-types>`_ values (such as pointers, + class <../../LangRef.html#first-class-types>`_ values (such as pointers, scalars and vectors), and only if the array size of the allocation is 1 (or missing in the .ll file). mem2reg is not capable of promoting structs or arrays to registers. Note that the "sroa" pass is @@ -249,7 +249,7 @@ is: variables that only have one assignment point, good heuristics to avoid insertion of unneeded phi nodes, etc. - Needed for debug info generation: `Debug information in - LLVM <../SourceLevelDebugging.html>`_ relies on having the address of + LLVM <../../SourceLevelDebugging.html>`_ relies on having the address of the variable exposed so that debug info can be attached to it. This technique dovetails very naturally with this style of debug info. 
diff --git a/llvm/docs/tutorial/MyFirstLanguageFrontend/LangImpl10.rst b/llvm/docs/tutorial/MyFirstLanguageFrontend/LangImpl10.rst index 789042b53d3a6a..6d8a131509e01e 100644 --- a/llvm/docs/tutorial/MyFirstLanguageFrontend/LangImpl10.rst +++ b/llvm/docs/tutorial/MyFirstLanguageFrontend/LangImpl10.rst @@ -51,9 +51,9 @@ For example, try adding: extending the type system in all sorts of interesting ways. Simple arrays are very easy and are quite useful for many different applications. Adding them is mostly an exercise in learning how the - LLVM `getelementptr <../LangRef.html#getelementptr-instruction>`_ instruction + LLVM `getelementptr <../../LangRef.html#getelementptr-instruction>`_ instruction works: it is so nifty/unconventional, it `has its own - FAQ <../GetElementPtr.html>`_! + FAQ <../../GetElementPtr.html>`_! - **standard runtime** - Our current language allows the user to access arbitrary external functions, and we use it for things like "printd" and "putchard". As you extend the language to add higher-level @@ -66,10 +66,10 @@ For example, try adding: memory, either with calls to the standard libc malloc/free interface or with a garbage collector. If you would like to use garbage collection, note that LLVM fully supports `Accurate Garbage - Collection <../GarbageCollection.html>`_ including algorithms that + Collection <../../GarbageCollection.html>`_ including algorithms that move objects and need to scan/update the stack. - **exception handling support** - LLVM supports generation of `zero - cost exceptions <../ExceptionHandling.html>`_ which interoperate with + cost exceptions <../../ExceptionHandling.html>`_ which interoperate with code compiled in other languages. You could also generate code by implicitly making every function return an error value and checking it. You could also make explicit use of setjmp/longjmp. There are