========================
LLVM Programmer's Manual
========================

.. contents::
   :local:

.. warning::

   This is always a work in progress.

.. _introduction:

Introduction
============

This document is meant to highlight some of the important classes and interfaces
available in the LLVM source-base. This manual is not intended to explain what
LLVM is, how it works, and what LLVM code looks like. It assumes that you know
the basics of LLVM and are interested in writing transformations or otherwise
analyzing or manipulating the code.
This document should get you oriented so that you can find your way in the
continuously growing source code that makes up the LLVM infrastructure. Note
that this manual is not intended to serve as a replacement for reading the
source code, so if you think there should be a method in one of these classes to
do something, but it's not listed, check the source. Links to the `doxygen
`__ sources are provided to make this as easy as
possible.
The first section of this document describes general information that is useful
to know when working in the LLVM infrastructure, and the second describes the
Core LLVM classes. In the future this manual will be extended with information
describing how to use extension libraries, such as dominator information, CFG
traversal routines, and useful utilities like the ``InstVisitor`` (`doxygen
`__) template.

.. _general:

General Information
===================

This section contains general information that is useful if you are working in
the LLVM source-base, but that isn't specific to any particular API.

.. _stl:

The C++ Standard Template Library
---------------------------------

LLVM makes heavy use of the C++ Standard Template Library (STL), perhaps much
more than you are used to, or have seen before. Because of this, you might want
to do a little background reading in the techniques used and capabilities of the
library. There are many good pages that discuss the STL, and several books on
the subject that you can get, so it will not be discussed in this document.
Here are some useful links:

#. `cppreference.com `_ - an excellent reference for the STL and other
   parts of the standard C++ library.

#. `C++ In a Nutshell `_ - This is an O'Reilly book in the making.  It
   has a decent Standard Library Reference that rivals Dinkumware's, and is
   unfortunately no longer free since the book has been published.

#. `C++ Frequently Asked Questions `_.

#. `SGI's STL Programmer's Guide `_ - Contains a useful `Introduction to
   the STL `_.

#. `Bjarne Stroustrup's C++ Page `_.

#. `Bruce Eckel's Thinking in C++, 2nd ed. Volume 2 Revision 4.0
   (even better, get the book) `_.
You are also encouraged to take a look at the :doc:`LLVM Coding Standards
` guide which focuses on how to write maintainable code more
than where to put your curly braces.

.. _resources:

Other useful references
-----------------------

#. `Using static and shared libraries across platforms `_

.. _apis:

Important and useful LLVM APIs
==============================

Here we highlight some LLVM APIs that are generally useful and good to know
about when writing transformations.

.. _isa:

The ``isa<>``, ``cast<>`` and ``dyn_cast<>`` templates
------------------------------------------------------

The LLVM source-base makes extensive use of a custom form of RTTI. These
templates have many similarities to the C++ ``dynamic_cast<>`` operator, but
they don't have some drawbacks (primarily stemming from the fact that
``dynamic_cast<>`` only works on classes that have a v-table). Because they are
used so often, you must know what they do and how they work. All of these
templates are defined in the ``llvm/Support/Casting.h`` (`doxygen
`__) file (note that you very
rarely have to include this file directly).

``isa<>``:
  The ``isa<>`` operator works exactly like the Java "``instanceof``" operator.
  It returns true or false depending on whether a reference or pointer points to
  an instance of the specified class.  This can be very useful for constraint
  checking of various sorts (example below).

``cast<>``:
  The ``cast<>`` operator is a "checked cast" operation.  It converts a pointer
  or reference from a base class to a derived class, causing an assertion
  failure if it is not really an instance of the right type.  This should be
  used in cases where you have some information that makes you believe that
  something is of the right type.  An example of the ``isa<>`` and ``cast<>``
  template is:

  .. code-block:: c++

    static bool isLoopInvariant(const Value *V, const Loop *L) {
      if (isa<Constant>(V) || isa<Argument>(V) || isa<GlobalValue>(V))
        return true;

      // Otherwise, it must be an instruction...
      return !L->contains(cast<Instruction>(V)->getParent());
    }

  Note that you should **not** use an ``isa<>`` test followed by a ``cast<>``;
  for that, use the ``dyn_cast<>`` operator.

``dyn_cast<>``:
  The ``dyn_cast<>`` operator is a "checking cast" operation.  It checks to see
  if the operand is of the specified type, and if so, returns a pointer to it
  (this operator does not work with references).  If the operand is not of the
  correct type, a null pointer is returned.  Thus, this works very much like
  the ``dynamic_cast<>`` operator in C++, and should be used in the same
  circumstances.  Typically, the ``dyn_cast<>`` operator is used in an ``if``
  statement or some other flow control statement like this:

  .. code-block:: c++

    if (auto *AI = dyn_cast<AllocaInst>(Val)) {
      // ...
    }

  This form of the ``if`` statement effectively combines together a call to
  ``isa<>`` and a call to ``cast<>`` into one statement, which is very
  convenient.

  Note that the ``dyn_cast<>`` operator, like C++'s ``dynamic_cast<>`` or Java's
  ``instanceof`` operator, can be abused.  In particular, you should not use big
  chained ``if/then/else`` blocks to check for lots of different variants of
  classes.  If you find yourself wanting to do this, it is much cleaner and more
  efficient to use the ``InstVisitor`` class to dispatch over the instruction
  type directly.

``isa_and_nonnull<>``:
  The ``isa_and_nonnull<>`` operator works just like the ``isa<>`` operator,
  except that it allows for a null pointer as an argument (in which case it
  returns false).  This can sometimes be useful, allowing you to combine several
  null checks into one.

``cast_or_null<>``:
  The ``cast_or_null<>`` operator works just like the ``cast<>`` operator,
  except that it allows for a null pointer as an argument (which it then
  propagates).  This can sometimes be useful, allowing you to combine several
  null checks into one.

``dyn_cast_or_null<>``:
  The ``dyn_cast_or_null<>`` operator works just like the ``dyn_cast<>``
  operator, except that it allows for a null pointer as an argument (which it
  then propagates).  This can sometimes be useful, allowing you to combine
  several null checks into one.

These templates can be used with any classes, whether they have a v-table or
not.  If you want to add support for these templates, see the document
:doc:`How to set up LLVM-style RTTI for your class hierarchy
`

.. _string_apis:

Passing strings (the ``StringRef`` and ``Twine`` classes)
---------------------------------------------------------

Although LLVM generally does not do much string manipulation, we do have several
important APIs which take strings.  Two important examples are the ``Value``
class -- which has names for instructions, functions, etc. -- and the
``StringMap`` class which is used extensively in LLVM and Clang.
These are generic classes, and they need to be able to accept strings which may
have embedded null characters. Therefore, they cannot simply take a ``const
char *``, and taking a ``const std::string&`` requires clients to perform a heap
allocation which is usually unnecessary. Instead, many LLVM APIs use a
``StringRef`` or a ``const Twine&`` for passing strings efficiently.

.. _StringRef:

The ``StringRef`` class
^^^^^^^^^^^^^^^^^^^^^^^

The ``StringRef`` data type represents a reference to a constant string (a
character array and a length) and supports the common operations available on
``std::string``, but does not require heap allocation.
It can be implicitly constructed using a C style null-terminated string, an
``std::string``, or explicitly with a character pointer and length. For
example, the ``StringRef`` find function is declared as:

.. code-block:: c++

  iterator find(StringRef Key);

and clients can call it using any one of:

.. code-block:: c++

  Map.find("foo");                 // Lookup "foo"
  Map.find(std::string("bar"));    // Lookup "bar"
  Map.find(StringRef("\0baz", 4)); // Lookup "\0baz"

Similarly, APIs which need to return a string may return a ``StringRef``
instance, which can be used directly or converted to an ``std::string`` using
the ``str`` member function. See ``llvm/ADT/StringRef.h`` (`doxygen
`__) for more
information.
You should rarely use the ``StringRef`` class directly: because it contains
pointers to external memory, it is not generally safe to store an instance of
the class (unless you know that the external storage will not be freed).
``StringRef`` is small and pervasive enough in LLVM that it should always be
passed by value.

The ``Twine`` class
^^^^^^^^^^^^^^^^^^^

The ``Twine`` (`doxygen `__)
class is an efficient way for APIs to accept concatenated strings. For example,
a common LLVM paradigm is to name one instruction based on the name of another
instruction with a suffix, for example:

.. code-block:: c++

  New = CmpInst::Create(..., SO->getName() + ".cmp");

The ``Twine`` class is effectively a lightweight `rope
`_ which points to
temporary (stack allocated) objects.  Twines can be implicitly constructed as
the result of the plus operator applied to strings (i.e., a C string, an
``std::string``, or a ``StringRef``).  The twine delays the actual concatenation
of strings until it is actually required, at which point it can be efficiently
rendered directly into a character array. This avoids unnecessary heap
allocation involved in constructing the temporary results of string
concatenation. See ``llvm/ADT/Twine.h`` (`doxygen
`__) and :ref:`here `
for more information.
As with a ``StringRef``, ``Twine`` objects point to external memory and should
almost never be stored or mentioned directly. They are intended solely for use
when defining a function which should be able to efficiently accept concatenated
strings.

.. _formatting_strings:

Formatting strings (the ``formatv`` function)
---------------------------------------------

While LLVM doesn't necessarily do a lot of string manipulation and parsing, it
does do a lot of string formatting. From diagnostic messages, to llvm tool
outputs such as ``llvm-readobj`` to printing verbose disassembly listings and
LLDB runtime logging, the need for string formatting is pervasive.
The ``formatv`` function is similar in spirit to ``printf``, but uses a different syntax
which borrows heavily from Python and C#. Unlike ``printf`` it deduces the type
to be formatted at compile time, so it does not need a format specifier such as
``%d``. This reduces the mental overhead of trying to construct portable format
strings, especially for platform-specific types like ``size_t`` or pointer types.
Unlike both ``printf`` and Python, it additionally fails to compile if LLVM does
not know how to format the type. These two properties ensure that the function
is both safer and simpler to use than traditional formatting methods such as
the ``printf`` family of functions.

Simple formatting
^^^^^^^^^^^^^^^^^

A call to ``formatv`` involves a single **format string** consisting of 0 or more
**replacement sequences**, followed by a variable length list of **replacement values**.
A replacement sequence is a string of the form ``{N[[,align]:style]}``.
``N`` refers to the 0-based index of the argument from the list of replacement
values. Note that this means it is possible to reference the same parameter
multiple times, possibly with different style and/or alignment options, in any order.
``align`` is an optional string specifying the width of the field to format
the value into, and the alignment of the value within the field. It is specified as
an optional **alignment style** followed by a positive integral **field width**. The
alignment style can be one of the characters ``-`` (left align), ``=`` (center align),
or ``+`` (right align). The default is right aligned.
``style`` is an optional string consisting of a type-specific directive that
controls the formatting of the value.  For example, to format a floating point
value as a percentage, you can use the style option ``P``.

Custom formatting
^^^^^^^^^^^^^^^^^

There are two ways to customize the formatting behavior for a type.
1. Provide a template specialization of ``llvm::format_provider<T>`` for your
   type ``T`` with the appropriate static format method.

   .. code-block:: c++

     namespace llvm {
       template<>
       struct format_provider<MyFooBar> {
         static void format(const MyFooBar &V, raw_ostream &Stream, StringRef Style) {
           // Do whatever is necessary to format `V` into `Stream`
         }
       };

       void foo() {
         MyFooBar X;
         std::string S = formatv("{0}", X);
       }
     }
This is a useful extensibility mechanism for adding support for formatting your own
custom types with your own custom Style options. But it does not help when you want
to extend the mechanism for formatting a type that the library already knows how to
format. For that, we need something else.
2. Provide a **format adapter** inheriting from ``llvm::FormatAdapter<T>``.

   .. code-block:: c++

     namespace anything {
       struct format_int_custom : public llvm::FormatAdapter<int> {
         explicit format_int_custom(int N) : llvm::FormatAdapter<int>(N) {}
         void format(llvm::raw_ostream &Stream, StringRef Style) override {
           // Do whatever is necessary to format ``this->Item`` into ``Stream``
         }
       };
     }

     namespace llvm {
       void foo() {
         std::string S = formatv("{0}", anything::format_int_custom(42));
       }
     }
   If the type is detected to be derived from ``FormatAdapter<T>``, ``formatv``
   will call the ``format`` method on the argument, passing in the specified
   style.  This allows one to provide custom formatting of any type, including
   one which already has a builtin format provider.

``formatv`` Examples
^^^^^^^^^^^^^^^^^^^^

Below is an incomplete set of examples demonstrating the usage of ``formatv``.
More information can be found by reading the doxygen documentation or by
looking at the unit test suite.

.. code-block:: c++

  std::string S;
  // Simple formatting of basic types and implicit string conversion.
  S = formatv("{0} ({1:P})", 7, 0.35);  // S == "7 (35.00%)"

  // Out-of-order referencing and multi-referencing
  outs() << formatv("{0} {2} {1} {0}", 1, "test", 3);  // prints "1 3 test 1"

  // Left, right, and center alignment
  S = formatv("{0,7}",  'a');  // S == "      a";
  S = formatv("{0,-7}", 'a');  // S == "a      ";
  S = formatv("{0,=7}", 'a');  // S == "   a   ";
  S = formatv("{0,+7}", 'a');  // S == "      a";

  // Custom styles
  S = formatv("{0:N} - {0:x} - {1:E}", 12345, 123908342);  // S == "12,345 - 0x3039 - 1.24E8"

  // Adapters
  S = formatv("{0}", fmt_align(42, AlignStyle::Center, 7));  // S == "  42   "
  S = formatv("{0}", fmt_repeat("hi", 3));  // S == "hihihi"
  S = formatv("{0}", fmt_pad("hi", 2, 6));  // S == "  hi      "

  // Ranges
  std::vector<int> V = {8, 9, 10};
  S = formatv("{0}", make_range(V.begin(), V.end()));  // S == "8, 9, 10"
  S = formatv("{0:$[+]}", make_range(V.begin(), V.end()));  // S == "8+9+10"
  S = formatv("{0:$[ + ]@[x]}", make_range(V.begin(), V.end()));  // S == "0x8 + 0x9 + 0xA"

.. _error_apis:

Error handling
--------------

Proper error handling helps us identify bugs in our code, and helps end-users
understand errors in their tool usage. Errors fall into two broad categories:
*programmatic* and *recoverable*, with different strategies for handling and
reporting.

Programmatic Errors
^^^^^^^^^^^^^^^^^^^

Programmatic errors are violations of program invariants or API contracts, and
represent bugs within the program itself. Our aim is to document invariants, and
to abort quickly at the point of failure (providing some basic diagnostic) when
invariants are broken at runtime.
The fundamental tools for handling programmatic errors are assertions and the
``llvm_unreachable`` function.  Assertions are used to express invariant
conditions, and should include a message describing the invariant:

.. code-block:: c++

  assert(isPhysReg(R) && "All virt regs should have been allocated already.");

The ``llvm_unreachable`` function can be used to document areas of control flow
that should never be entered if the program invariants hold:

.. code-block:: c++

  enum { Foo, Bar, Baz } X = foo();

  switch (X) {
    case Foo: /* Handle Foo */; break;
    case Bar: /* Handle Bar */; break;
    default:
      llvm_unreachable("X should be Foo or Bar here");
  }

Recoverable Errors
^^^^^^^^^^^^^^^^^^

Recoverable errors represent an error in the program's environment, for example
a resource failure (a missing file, a dropped network connection, etc.), or
malformed input. These errors should be detected and communicated to a level of
the program where they can be handled appropriately. Handling the error may be
as simple as reporting the issue to the user, or it may involve attempts at
recovery.

.. note::

   While it would be ideal to use this error handling scheme throughout
   LLVM, there are places where this hasn't been practical to apply.  In
   situations where you absolutely must emit a non-programmatic error and
   the ``Error`` model isn't workable you can call ``report_fatal_error``,
   which will call installed error handlers, print a message, and abort the
   program.  The use of ``report_fatal_error`` in this case is discouraged.

Recoverable errors are modeled using LLVM's ``Error`` scheme. This scheme
represents errors using function return values, similar to classic C integer
error codes, or C++'s ``std::error_code``. However, the ``Error`` class is
actually a lightweight wrapper for user-defined error types, allowing arbitrary
information to be attached to describe the error. This is similar to the way C++
exceptions allow throwing of user-defined types.
Success values are created by calling ``Error::success()``, e.g.:

.. code-block:: c++

  Error foo() {
    // Do something.
    // Return success.
    return Error::success();
  }

Success values are very cheap to construct and return - they have minimal
impact on program performance.
Failure values are constructed using ``make_error<T>``, where ``T`` is any class
that inherits from the ``ErrorInfo`` utility, e.g.:

.. code-block:: c++

  class BadFileFormat : public ErrorInfo<BadFileFormat> {
  public:
    static char ID;
    std::string Path;

    BadFileFormat(StringRef Path) : Path(Path.str()) {}

    void log(raw_ostream &OS) const override {
      OS << Path << " is malformed";
    }

    std::error_code convertToErrorCode() const override {
      return make_error_code(object_error::parse_failed);
    }
  };

  char BadFileFormat::ID; // This should be declared in the C++ file.

  Error printFormattedFile(StringRef Path) {
    if (<check for valid format>)
      return make_error<BadFileFormat>(Path);
    // print file contents.
    return Error::success();
  }

``Error`` values can be implicitly converted to bool: true for error, false for
success, enabling the following idiom:

.. code-block:: c++

  Error mayFail();

  Error foo() {
    if (auto Err = mayFail())
      return Err;
    // Success! We can proceed.
    ...

For functions that can fail but need to return a value the ``Expected<T>``
utility can be used.  Values of this type can be constructed with either a
``T``, or an ``Error``.  Expected values are also implicitly convertible to
boolean, but with the opposite convention to ``Error``: true for success, false
for error.  If success, the ``T`` value can be accessed via the dereference
operator.  If failure, the ``Error`` value can be extracted using the
``takeError()`` method.  Idiomatic usage looks like:

.. code-block:: c++

  Expected<FormattedFile> openFormattedFile(StringRef Path) {
    // If badly formatted, return an error.
    if (auto Err = checkFormat(Path))
      return std::move(Err);
    // Otherwise return a FormattedFile instance.
    return FormattedFile(Path);
  }

  Error processFormattedFile(StringRef Path) {
    // Try to open a formatted file
    if (auto FileOrErr = openFormattedFile(Path)) {
      // On success, grab a reference to the file and continue.
      auto &File = *FileOrErr;
      ...
    } else
      // On error, extract the Error value and return it.
      return FileOrErr.takeError();
  }

If an ``Expected<T>`` value is in success mode then the ``takeError()`` method
will return a success value.  Using this fact, the above function can be
rewritten as:

.. code-block:: c++

  Error processFormattedFile(StringRef Path) {
    // Try to open a formatted file
    auto FileOrErr = openFormattedFile(Path);
    if (auto Err = FileOrErr.takeError())
      // On error, extract the Error value and return it.
      return Err;
    // On success, grab a reference to the file and continue.
    auto &File = *FileOrErr;
    ...
  }

This second form is often more readable for functions that involve multiple
``Expected<T>`` values as it limits the indentation required.
All ``Error`` instances, whether success or failure, must be either checked or
moved from (via ``std::move`` or a return) before they are destructed.
Accidentally discarding an unchecked error will cause a program abort at the
point where the unchecked value's destructor is run, making it easy to identify
and fix violations of this rule.
Success values are considered checked once they have been tested (by invoking
the boolean conversion operator):

.. code-block:: c++

  if (auto Err = mayFail(...))
    return Err; // Failure value - move error to caller.

  // Safe to continue: Err was checked.

In contrast, the following code will always cause an abort, even if ``mayFail``
returns a success value:

.. code-block:: c++

  mayFail();
  // Program will always abort here, even if mayFail() returns Success, since
  // the value is not checked.

Failure values are considered checked once a handler for the error type has
been activated:

.. code-block:: c++

  handleErrors(
    processFormattedFile(...),
    [](const BadFileFormat &BFF) {
      report("Unable to process " + BFF.Path + ": bad format");
    },
    [](const FileNotFound &FNF) {
      report("File not found " + FNF.Path);
    });

The ``handleErrors`` function takes an error as its first argument, followed by
a variadic list of "handlers", each of which must be a callable type (a
function, lambda, or class with a call operator) with one argument.  The
``handleErrors`` function will visit each handler in the sequence and check its
argument type against the dynamic type of the error, running the first handler
that matches.  This is the same decision process that is used to decide which
catch clause to run for a C++ exception.
Since the list of handlers passed to ``handleErrors`` may not cover every error
type that can occur, the ``handleErrors`` function also returns an ``Error``
value that must be checked or propagated.  If the error value that is passed to
``handleErrors`` does not match any of the handlers it will be returned from
``handleErrors``.  Idiomatic use of ``handleErrors`` thus looks like:

.. code-block:: c++

  if (auto Err =
        handleErrors(
          processFormattedFile(...),
          [](const BadFileFormat &BFF) {
            report("Unable to process " + BFF.Path + ": bad format");
          },
          [](const FileNotFound &FNF) {
            report("File not found " + FNF.Path);
          }))
    return Err;

In cases where you truly know that the handler list is exhaustive the
``handleAllErrors`` function can be used instead.  This is identical to
``handleErrors`` except that it will terminate the program if an unhandled
error is passed in, and can therefore return void.  The ``handleAllErrors``
function should generally be avoided: the introduction of a new error type
elsewhere in the program can easily turn a formerly exhaustive list of errors
into a non-exhaustive list, risking unexpected program termination.  Where
possible, use ``handleErrors`` and propagate unknown errors up the stack instead.
For tool code, where errors can be handled by printing an error message then
exiting with an error code, the :ref:`ExitOnError <err_exitonerr>` utility
may be a better choice than ``handleErrors``, as it simplifies control flow when
calling fallible functions.
In situations where it is known that a particular call to a fallible function
will always succeed (for example, a call to a function that can only fail on a
subset of inputs with an input that is known to be safe) the
:ref:`cantFail <err_cantfail>` functions can be used to remove the error type,
simplifying control flow.

StringError
"""""""""""

Many kinds of errors have no recovery strategy, the only action that can be
taken is to report them to the user so that the user can attempt to fix the
environment. In this case representing the error as a string makes perfect
sense. LLVM provides the ``StringError`` class for this purpose. It takes two
arguments: A string error message, and an equivalent ``std::error_code`` for
interoperability. It also provides a ``createStringError`` function to simplify
common usage of this class:

.. code-block:: c++

  // These two lines of code are equivalent:
  make_error<StringError>("Bad executable", errc::executable_format_error);
  createStringError(errc::executable_format_error, "Bad executable");

If you're certain that the error you're building will never need to be converted
to a ``std::error_code`` you can use the ``inconvertibleErrorCode()`` function:

.. code-block:: c++

  createStringError(inconvertibleErrorCode(), "Bad executable");

This should be done only after careful consideration. If any attempt is made to
convert this error to a ``std::error_code`` it will trigger immediate program
termination. Unless you are certain that your errors will not need
interoperability you should look for an existing ``std::error_code`` that you
can convert to, and even (as painful as it is) consider introducing a new one as
a stopgap measure.
``createStringError`` can take ``printf`` style format specifiers to provide a
formatted message:

.. code-block:: c++

  createStringError(errc::executable_format_error,
                    "Bad executable: %s", FileName);

Interoperability with std::error_code and ErrorOr
"""""""""""""""""""""""""""""""""""""""""""""""""
Many existing LLVM APIs use ``std::error_code`` and its partner ``ErrorOr<T>``
(which plays the same role as ``Expected<T>``, but wraps a ``std::error_code``
rather than an ``Error``).  The infectious nature of error types means that an
attempt to change one of these functions to return ``Error`` or ``Expected<T>``
instead often results in an avalanche of changes to callers, callers of callers,
and so on.  (The first such attempt, returning an ``Error`` from
MachOObjectFile's constructor, was abandoned after the diff reached 3000 lines,
impacted half a dozen libraries, and was still growing.)
To solve this problem, the ``Error``/``std::error_code`` interoperability
requirement was introduced.  Two pairs of functions allow any ``Error`` value to
be converted to a ``std::error_code``, any ``Expected<T>`` to be converted to an
``ErrorOr<T>``, and vice versa:

.. code-block:: c++

  std::error_code errorToErrorCode(Error Err);
  Error errorCodeToError(std::error_code EC);

  template <typename T> ErrorOr<T> expectedToErrorOr(Expected<T> TOrErr);
  template <typename T> Expected<T> errorOrToExpected(ErrorOr<T> TOrEC);

Using these APIs it is easy to make surgical patches that update individual
functions from ``std::error_code`` to ``Error``, and from ``ErrorOr<T>`` to
``Expected<T>``.

Returning Errors from error handlers
""""""""""""""""""""""""""""""""""""

Error recovery attempts may themselves fail.  For that reason, ``handleErrors``
actually recognises three different forms of handler signature:

.. code-block:: c++

  // Error must be handled, no new errors produced:
  void(UserDefinedError &E);

  // Error must be handled, new errors can be produced:
  Error(UserDefinedError &E);

  // Original error can be inspected, then re-wrapped and returned (or a new
  // error can be produced):
  Error(std::unique_ptr<UserDefinedError> E);

Any error returned from a handler will be returned from the ``handleErrors``
function so that it can be handled itself, or propagated up the stack.

.. _err_exitonerr:

Using ExitOnError to simplify tool code
"""""""""""""""""""""""""""""""""""""""

Library code should never call ``exit`` for a recoverable error, however in tool
code (especially command line tools) this can be a reasonable approach. Calling
``exit`` upon encountering an error dramatically simplifies control flow as the
error no longer needs to be propagated up the stack. This allows code to be
written in straight-line style, as long as each fallible call is wrapped in a
check and call to exit. The ``ExitOnError`` class supports this pattern by
providing call operators that inspect ``Error`` values, stripping the error away
in the success case and logging to ``stderr`` then exiting in the failure case.
To use this class, declare a global ``ExitOnError`` variable in your program:

.. code-block:: c++

  ExitOnError ExitOnErr;

Calls to fallible functions can then be wrapped with a call to ``ExitOnErr``,
turning them into non-failing calls:

.. code-block:: c++

  Error mayFail();
  Expected<int> mayFail2();

  void foo() {
    ExitOnErr(mayFail());
    int X = ExitOnErr(mayFail2());
  }

On failure, the error's log message will be written to ``stderr``, optionally
preceded by a string "banner" that can be set by calling the ``setBanner``
method.  A mapping can also be supplied from ``Error`` values to exit codes
using the ``setExitCodeMapper`` method:

.. code-block:: c++

  int main(int argc, char *argv[]) {
    ExitOnErr.setBanner(std::string(argv[0]) + " error:");
    ExitOnErr.setExitCodeMapper(
      [](const Error &Err) {
        if (Err.isA<BadFileFormat>())
          return 2;
        return 1;
      });

Use ``ExitOnError`` in your tool code where possible as it can greatly improve
readability.

.. _err_cantfail:

Using cantFail to simplify safe callsites
"""""""""""""""""""""""""""""""""""""""""

Some functions may only fail for a subset of their inputs, so calls using known
safe inputs can be assumed to succeed.
The ``cantFail`` functions encapsulate this by wrapping an assertion that their
argument is a success value and, in the case of ``Expected<T>``, unwrapping the
``T`` value:

.. code-block:: c++

  Error onlyFailsForSomeXValues(int X);
  Expected<int> onlyFailsForSomeXValues2(int X);

  void foo() {
    cantFail(onlyFailsForSomeXValues(KnownSafeValue));
    int Y = cantFail(onlyFailsForSomeXValues2(KnownSafeValue));
    ...
  }

Like the ``ExitOnError`` utility, ``cantFail`` simplifies control flow.  Their
treatment of error cases is very different however: Where ``ExitOnError`` is
guaranteed to terminate the program on an error input, ``cantFail`` simply
asserts that the result is success.  In debug builds this will result in an
assertion failure if an error is encountered.  In release builds the behavior of
``cantFail`` for failure values is undefined.  As such, care must be taken in
the use of ``cantFail``: clients must be certain that a ``cantFail`` wrapped
call really can not fail with the given arguments.
Use of the ``cantFail`` functions should be rare in library code, but they are
likely to be of more use in tool and unit-test code where inputs and/or
mocked-up classes or functions may be known to be safe.

Fallible constructors
"""""""""""""""""""""

Some classes require resource acquisition or other complex initialization that
can fail during construction. Unfortunately constructors can't return errors,
and having clients test objects after they're constructed to ensure that they're
valid is error prone as it's all too easy to forget the test. To work around
this, use the named constructor idiom and return an ``Expected<T>``:

.. code-block:: c++

  class Foo {
  public:
    static Expected<Foo> Create(Resource R1, Resource R2) {
      Error Err = Error::success();
      Foo F(R1, R2, Err);
      if (Err)
        return std::move(Err);
      return std::move(F);
    }

  private:
    Foo(Resource R1, Resource R2, Error &Err) {
      ErrorAsOutParameter EAO(&Err);
      if (auto Err2 = R1.acquire()) {
        Err = std::move(Err2);
        return;
      }
      Err = R2.acquire();
    }
  };

Here, the named constructor passes an ``Error`` by reference into the actual
constructor, which the constructor can then use to return errors. The
``ErrorAsOutParameter`` utility sets the ``Error`` value's checked flag on entry
to the constructor so that the error can be assigned to, then resets it on exit
to force the client (the named constructor) to check the error.
By using this idiom, clients attempting to construct a ``Foo`` receive either a
well-formed ``Foo`` or an ``Error``, never an object in an invalid state.

Propagating and consuming errors based on types
"""""""""""""""""""""""""""""""""""""""""""""""

In some contexts, certain types of error are known to be benign. For example,
when walking an archive, some clients may be happy to skip over badly formatted
object files rather than terminating the walk immediately. Skipping badly
formatted objects could be achieved using an elaborate handler method, but the
Error.h header provides two utilities that make this idiom much cleaner: the
type inspection method, ``isA``, and the ``consumeError`` function:
.. code-block:: c++

  Error walkArchive(Archive A) {
    for (unsigned I = 0; I != A.numMembers(); ++I) {
      auto ChildOrErr = A.getMember(I);
      if (auto Err = ChildOrErr.takeError()) {
        if (Err.isA<BadFileFormat>())
          consumeError(std::move(Err));
        else
          return Err;
        continue; // Skip use of the bad member.
      }
      auto &Child = *ChildOrErr;
      // Use Child
      ...
    }
    return Error::success();
  }
Concatenating Errors with joinErrors
""""""""""""""""""""""""""""""""""""
In the archive walking example above, ``BadFileFormat`` errors are simply
consumed and ignored. If the client wanted to report these errors after
completing the walk over the archive, they could use the ``joinErrors``
utility:
.. code-block:: c++

  Error walkArchive(Archive A) {
    Error DeferredErrs = Error::success();
    for (unsigned I = 0; I != A.numMembers(); ++I) {
      auto ChildOrErr = A.getMember(I);
      if (auto Err = ChildOrErr.takeError()) {
        if (Err.isA<BadFileFormat>())
          DeferredErrs = joinErrors(std::move(DeferredErrs), std::move(Err));
        else
          return Err;
        continue; // Skip use of the bad member.
      }
      auto &Child = *ChildOrErr;
      // Use Child
      ...
    }
    return DeferredErrs;
  }
The ``joinErrors`` routine builds a special error type called ``ErrorList``,
which holds a list of user defined errors. The ``handleErrors`` routine
recognizes this type and will attempt to handle each of the contained errors in
order. If all contained errors can be handled, ``handleErrors`` will return
``Error::success()``, otherwise ``handleErrors`` will concatenate the remaining
errors and return the resulting ``ErrorList``.
Building fallible iterators and iterator ranges
"""""""""""""""""""""""""""""""""""""""""""""""
The archive walking examples above retrieve archive members by index; however,
this requires considerable boilerplate for iteration and error checking. We can
clean this up by using the "fallible iterator" pattern, which supports the
following natural iteration idiom for fallible containers like Archive:
.. code-block:: c++

  Error Err = Error::success();
  for (auto &Child : Ar->children(Err)) {
    // Use Child - only enter the loop when it's valid

    // Allow early exit from the loop body, since we know that Err is
    // success when we're inside the loop.
    if (BailOutOn(Child))
      return;

    ...
  }
  // Check Err after the loop to ensure it didn't break due to an error.
  if (Err)
    return Err;
To enable this idiom, iterators over fallible containers are written in a
natural style, with their ``++`` and ``--`` operators replaced with fallible
``Error inc()`` and ``Error dec()`` functions. E.g.:
.. code-block:: c++

  class FallibleChildIterator {
  public:
    FallibleChildIterator(Archive &A, unsigned ChildIdx);
    Archive::Child &operator*();
    friend bool operator==(const ArchiveIterator &LHS,
                           const ArchiveIterator &RHS);

    // operator++/operator-- replaced with fallible increment / decrement:
    Error inc() {
      if (!A.childValid(ChildIdx + 1))
        return make_error<BadArchiveMember>(...);
      ++ChildIdx;
      return Error::success();
    }

    Error dec() { ... }
  };
Instances of this kind of fallible iterator interface are then wrapped with the
fallible_iterator utility which provides ``operator++`` and ``operator--``,
returning any errors via a reference passed in to the wrapper at construction
time. The fallible_iterator wrapper takes care of (a) jumping to the end of the
range on error, and (b) marking the error as checked whenever an iterator is
compared to ``end`` and found to be unequal (in particular: this marks the
error as checked throughout the body of a range-based for loop), enabling early
exit from the loop without redundant error checking.
Instances of the fallible iterator interface (e.g. FallibleChildIterator above)
are wrapped using the ``make_fallible_itr`` and ``make_fallible_end``
functions. E.g.:
.. code-block:: c++

  class Archive {
  public:
    using child_iterator = fallible_iterator<FallibleChildIterator>;

    child_iterator child_begin(Error &Err) {
      return make_fallible_itr(FallibleChildIterator(*this, 0), Err);
    }

    child_iterator child_end() {
      return make_fallible_end(FallibleChildIterator(*this, size()));
    }

    iterator_range<child_iterator> children(Error &Err) {
      return make_range(child_begin(Err), child_end());
    }
  };
Using the fallible_iterator utility allows for both natural construction of
fallible iterators (using failing ``inc`` and ``dec`` operations) and
relatively natural use of C++ iterator/loop idioms.
More information on Error and its related utilities can be found in the
Error.h header file.

.. _function_apis:
Passing functions and other callable objects
--------------------------------------------
Sometimes you may want a function to be passed a callback object. In order to
support lambda expressions and other function objects, you should not use the
traditional C approach of taking a function pointer and an opaque cookie:
.. code-block:: c++

  void takeCallback(bool (*Callback)(Function *, void *), void *Cookie);
Instead, use one of the following approaches:
Function template
^^^^^^^^^^^^^^^^^
If you don't mind putting the definition of your function into a header file,
make it a function template that is templated on the callable type.
.. code-block:: c++

  template <typename Callable>
  void takeCallback(Callable Callback) {
    Callback(1, 2, 3);
  }
The ``function_ref`` class template
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
The ``function_ref<Ret(Param1, Param2, ...)>`` (`doxygen
`__) class
template represents a reference to a callable object, templated over the type
of the callable. This is a good choice for passing a callback to a function,
if you don't need to hold onto the callback after the function returns. In this
way, ``function_ref`` is to ``std::function`` as ``StringRef`` is to
``std::string``.

``function_ref<Ret(Param1, Param2, ...)>`` can be implicitly constructed from
any callable object that can be called with arguments of type ``Param1``,
``Param2``, ..., and returns a value that can be converted to type ``Ret``.
For example:
.. code-block:: c++

  void visitBasicBlocks(Function *F,
                        function_ref<bool (BasicBlock*)> Callback) {
    for (BasicBlock &BB : *F)
      if (Callback(&BB))
        return;
  }
can be called using:
.. code-block:: c++

  visitBasicBlocks(F, [&](BasicBlock *BB) {
    if (process(BB))
      return isEmpty(BB);
    return false;
  });
Note that a ``function_ref`` object contains pointers to external memory, so it
is not generally safe to store an instance of the class (unless you know that
the external storage will not be freed). If you need this ability, consider
using ``std::function``. ``function_ref`` is small enough that it should always
be passed by value.
.. _DEBUG:
The ``LLVM_DEBUG()`` macro and ``-debug`` option
------------------------------------------------
Often when working on your pass you will put a bunch of debugging printouts and
other code into your pass. After you get it working, you want to remove it, but
you may need it again in the future (to work out new bugs that you run across).
Naturally, because of this, you don't want to delete the debug printouts, but
you don't want them to always be noisy. A standard compromise is to comment
them out, allowing you to enable them if you need them in the future.
The ``llvm/Support/Debug.h`` (`doxygen
`__) file provides a macro named
``LLVM_DEBUG()`` that is a much nicer solution to this problem. Basically, you can
put arbitrary code into the argument of the ``LLVM_DEBUG`` macro, and it is only
executed if '``opt``' (or any other tool) is run with the '``-debug``' command
line argument:
.. code-block:: c++

  LLVM_DEBUG(dbgs() << "I am here!\n");
Then you can run your pass like this:
.. code-block:: none

  $ opt < a.bc > /dev/null -mypass
  <no output>
  $ opt < a.bc > /dev/null -mypass -debug
  I am here!
Using the ``LLVM_DEBUG()`` macro instead of a home-brewed solution allows you to not
have to create "yet another" command line option for the debug output for your
pass. Note that ``LLVM_DEBUG()`` macros are disabled for non-asserts builds, so they
do not cause a performance impact at all (for the same reason, they should also
not contain side-effects!).
One additional nice thing about the ``LLVM_DEBUG()`` macro is that you can enable or
disable it directly in gdb. Just use "``set DebugFlag=0``" or "``set
DebugFlag=1``" from the gdb if the program is running. If the program hasn't
been started yet, you can always just run it with ``-debug``.
.. _DEBUG_TYPE:
Fine grained debug info with ``DEBUG_TYPE`` and the ``-debug-only`` option
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Sometimes you may find yourself in a situation where enabling ``-debug`` just
turns on **too much** information (such as when working on the code generator).
If you want to enable debug information with more fine-grained control, you
should define the ``DEBUG_TYPE`` macro and use the ``-debug-only`` option as
follows:
.. code-block:: c++

  #define DEBUG_TYPE "foo"
  LLVM_DEBUG(dbgs() << "'foo' debug type\n");
  #undef  DEBUG_TYPE
  #define DEBUG_TYPE "bar"
  LLVM_DEBUG(dbgs() << "'bar' debug type\n");
  #undef  DEBUG_TYPE
Then you can run your pass like this:
.. code-block:: none

  $ opt < a.bc > /dev/null -mypass
  <no output>
  $ opt < a.bc > /dev/null -mypass -debug
  'foo' debug type
  'bar' debug type
  $ opt < a.bc > /dev/null -mypass -debug-only=foo
  'foo' debug type
  $ opt < a.bc > /dev/null -mypass -debug-only=bar
  'bar' debug type
  $ opt < a.bc > /dev/null -mypass -debug-only=foo,bar
  'foo' debug type
  'bar' debug type
Of course, in practice, you should only set ``DEBUG_TYPE`` at the top of a file,
to specify the debug type for the entire module. Be careful that you only do
this after including Debug.h and not around any #include of headers. Also, you
should use names more meaningful than "foo" and "bar", because there is no
system in place to ensure that names do not conflict. If two different modules
use the same string, they will all be turned on when the name is specified.
This allows, for example, all debug information for instruction scheduling to be
enabled with ``-debug-only=InstrSched``, even if the source lives in multiple
files. The name must not include a comma (,) as that is used to separate the
arguments of the ``-debug-only`` option.
For performance reasons, ``-debug-only`` is not available in optimized builds
(``--enable-optimized``) of LLVM.
The ``DEBUG_WITH_TYPE`` macro is also available for situations where you would
like to set ``DEBUG_TYPE``, but only for one specific ``DEBUG`` statement. It
takes an additional first parameter, which is the type to use. For example, the
preceding example could be written as:
.. code-block:: c++

  DEBUG_WITH_TYPE("foo", dbgs() << "'foo' debug type\n");
  DEBUG_WITH_TYPE("bar", dbgs() << "'bar' debug type\n");
.. _Statistic:
The ``Statistic`` class & ``-stats`` option
-------------------------------------------
The ``llvm/ADT/Statistic.h`` (`doxygen
`__) file provides a class
named ``Statistic`` that is used as a unified way to keep track of what the LLVM
compiler is doing and how effective various optimizations are. It is useful to
see what optimizations are contributing to making a particular program run
faster.
Often you may run your pass on some big program, and you're interested to see
how many times it makes a certain transformation. Although you can do this with
hand inspection, or some ad-hoc method, this is a real pain and not very useful
for big programs. Using the ``Statistic`` class makes it very easy to keep
track of this information, and the calculated information is presented in a
uniform manner with the rest of the passes being executed.
There are many examples of ``Statistic`` uses, but the basics of using it are as
follows:
Define your statistic like this:
.. code-block:: c++

  #define DEBUG_TYPE "mypassname"   // This goes before any #includes.

  STATISTIC(NumXForms, "The # of times I did stuff");
The ``STATISTIC`` macro defines a static variable, whose name is specified by
the first argument. The pass name is taken from the ``DEBUG_TYPE`` macro, and
the description is taken from the second argument. The variable defined
("NumXForms" in this case) acts like an unsigned integer.
Whenever you make a transformation, bump the counter:
.. code-block:: c++

  ++NumXForms;   // I did stuff!
That's all you have to do. To get '``opt``' to print out the statistics
gathered, use the '``-stats``' option:
.. code-block:: none

  $ opt -stats -mypassname < program.bc > /dev/null
  ... statistics output ...
Note that in order to use the '``-stats``' option, LLVM must be
compiled with assertions enabled.
When running ``opt`` on a C file from the SPEC benchmark suite, it gives a
report that looks like this:
.. code-block:: none

    7646 bitcodewriter  - Number of normal instructions
     725 bitcodewriter  - Number of oversized instructions
  129996 bitcodewriter  - Number of bitcode bytes written
    2817 raise          - Number of insts DCEd or constprop'd
    3213 raise          - Number of cast-of-self removed
    5046 raise          - Number of expression trees converted
      75 raise          - Number of other getelementptr's formed
     138 raise          - Number of load/store peepholes
      42 deadtypeelim   - Number of unused typenames removed from symtab
     392 funcresolve    - Number of varargs functions resolved
      27 globaldce      - Number of global variables removed
       2 adce           - Number of basic blocks removed
     134 cee            - Number of branches revectored
      49 cee            - Number of setcc instruction eliminated
     532 gcse           - Number of loads removed
    2919 gcse           - Number of instructions removed
      86 indvars        - Number of canonical indvars added
      87 indvars        - Number of aux indvars removed
      25 instcombine    - Number of dead inst eliminate
     434 instcombine    - Number of insts combined
     248 licm           - Number of load insts hoisted
    1298 licm           - Number of insts hoisted to a loop pre-header
       3 licm           - Number of insts hoisted to multiple loop preds (bad, no loop pre-header)
      75 mem2reg        - Number of alloca's promoted
    1444 cfgsimplify    - Number of blocks simplified
Obviously, with so many optimizations, having a unified framework for this stuff
is very nice. Making your pass fit well into the framework makes it more
maintainable and useful.
.. _DebugCounters:
Adding debug counters to aid in debugging your code
---------------------------------------------------
Sometimes, when writing new passes, or trying to track down bugs, it
is useful to be able to control whether certain things in your pass
happen or not. For example, there are times the minimization tooling
can only easily give you large testcases. You would like to narrow
your bug down to a specific transformation happening or not happening,
automatically, using bisection. This is where debug counters help.
They provide a framework for making parts of your code only execute a
certain number of times.
The ``llvm/Support/DebugCounter.h`` (`doxygen
`__) file
provides a class named ``DebugCounter`` that can be used to create
command line counter options that control execution of parts of your code.
Define your DebugCounter like this:
.. code-block:: c++

  DEBUG_COUNTER(DeleteAnInstruction, "passname-delete-instruction",
                "Controls which instructions get deleted");
The ``DEBUG_COUNTER`` macro defines a static variable, whose name
is specified by the first argument. The name of the counter
(which is used on the command line) is specified by the second
argument, and the description used in the help is specified by the
third argument.
Wherever you want to control execution of a piece of code, guard it with
``DebugCounter::shouldExecute``:
.. code-block:: c++

  if (DebugCounter::shouldExecute(DeleteAnInstruction))
    I->eraseFromParent();
That's all you have to do. Now, using opt, you can control when this code
triggers using the '``--debug-counter``' option. Each counter takes two
values, ``skip`` and ``count``: ``skip`` is the number of times to skip
execution of the codepath, and ``count`` is the number of times, once we are
done skipping, to execute the codepath.
.. code-block:: none

  $ opt --debug-counter=passname-delete-instruction-skip=1,passname-delete-instruction-count=2 -passname
This will skip the above code the first time we hit it, then execute it twice, then skip the rest of the executions.
So if executed on the following code:
.. code-block:: llvm

  %1 = add i32 %a, %b
  %2 = add i32 %a, %b
  %3 = add i32 %a, %b
  %4 = add i32 %a, %b
It would delete instructions ``%2`` and ``%3``.
A utility is provided in `utils/bisect-skip-count` to binary search
skip and count arguments. It can be used to automatically minimize the
skip and count for a debug-counter variable.
.. _ViewGraph:
Viewing graphs while debugging code
-----------------------------------
Several of the important data structures in LLVM are graphs: for example CFGs
made out of LLVM :ref:`BasicBlocks <BasicBlock>`, CFGs made out of LLVM
:ref:`MachineBasicBlocks <MachineBasicBlock>`, and :ref:`Instruction Selection
DAGs <SelectionDAG>`. In many cases, while debugging various parts of the
compiler, it is nice to instantly visualize these graphs.
LLVM provides several callbacks that are available in a debug build to do
exactly that. If you call the ``Function::viewCFG()`` method, for example, the
current LLVM tool will pop up a window containing the CFG for the function where
each basic block is a node in the graph, and each node contains the instructions
in the block. Similarly, there also exists ``Function::viewCFGOnly()`` (does
not include the instructions), the ``MachineFunction::viewCFG()`` and
``MachineFunction::viewCFGOnly()``, and the ``SelectionDAG::viewGraph()``
methods. Within GDB, for example, you can usually use something like ``call
DAG.viewGraph()`` to pop up a window. Alternatively, you can sprinkle calls to
these functions in your code in places you want to debug.
Getting this to work requires a small amount of setup. On Unix systems
with X11, install the `graphviz `_ toolkit, and make
sure 'dot' and 'gv' are in your path. If you are running on macOS, download
and install the macOS `Graphviz program
`_ and add
``/Applications/Graphviz.app/Contents/MacOS/`` (or wherever you install it) to
your path. The programs need not be present when configuring, building or
running LLVM and can simply be installed when needed during an active debug
session.
``SelectionDAG`` has been extended to make it easier to locate *interesting*
nodes in large complex graphs. From gdb, if you ``call DAG.setGraphColor(node,
"color")``, then the next ``call DAG.viewGraph()`` would highlight the node in
the specified color (choices of colors can be found at `colors
`_.) More complex node attributes
can be provided with ``call DAG.setGraphAttrs(node, "attributes")`` (choices can
be found at `Graph attributes `_.)
If you want to restart and clear all the current graph attributes, then you can
``call DAG.clearGraphAttrs()``.
Note that graph visualization features are compiled out of Release builds to
reduce file size. This means that you need a Debug+Asserts or Release+Asserts
build to use these features.
.. _datastructure:
Picking the Right Data Structure for a Task
===========================================
LLVM has a plethora of data structures in the ``llvm/ADT/`` directory, and we
commonly use STL data structures. This section describes the trade-offs you
should consider when you pick one.
The first step is a choose your own adventure: do you want a sequential
container, a set-like container, or a map-like container? The most important
thing when choosing a container is the algorithmic properties of how you plan to
access the container. Based on that, you should use:
* a :ref:`map-like <ds_map>` container if you need efficient look-up of a
  value based on another value. Map-like containers also support efficient
  queries for containment (whether a key is in the map). Map-like containers
  generally do not support efficient reverse mapping (values to keys). If you
  need that, use two maps. Some map-like containers also support efficient
  iteration through the keys in sorted order. Map-like containers are the most
  expensive sort; only use them if you need one of these capabilities.
* a :ref:`set-like <ds_set>` container if you need to put a bunch of stuff into
  a container that automatically eliminates duplicates. Some set-like
  containers support efficient iteration through the elements in sorted order.
  Set-like containers are more expensive than sequential containers.

* a :ref:`sequential <ds_sequential>` container provides the most efficient way
  to add elements and keeps track of the order they are added to the collection.
  They permit duplicates and support efficient iteration, but do not support
  efficient look-up based on a key.

* a :ref:`string <ds_string>` container is a specialized sequential container or
  reference structure that is used for character or byte arrays.

* a :ref:`bit <ds_bit>` container provides an efficient way to store and
  perform set operations on sets of numeric id's, while automatically
  eliminating duplicates. Bit containers require a maximum of 1 bit for each
  identifier you want to store.
Once the proper category of container is determined, you can fine tune the
memory use, constant factors, and cache behaviors of access by intelligently
picking a member of the category. Note that constant factors and cache behavior
can be a big deal. If you have a vector that usually only contains a few
elements (but could contain many), for example, it's much better to use
:ref:`SmallVector <dss_smallvector>` than :ref:`vector <dss_vector>`. Doing so
avoids (relatively) expensive malloc/free calls, which dwarf the cost of adding
the elements to the container.
.. _ds_sequential:
Sequential Containers (std::vector, std::list, etc)
---------------------------------------------------
There are a variety of sequential containers available for you, based on your
needs. Pick the first in this section that will do what you want.
.. _dss_arrayref:
llvm/ADT/ArrayRef.h
^^^^^^^^^^^^^^^^^^^
The ``llvm::ArrayRef`` class is the preferred class to use in an interface that
accepts a sequential list of elements in memory and just reads from them. By
taking an ``ArrayRef``, the API can be passed a fixed size array, an
``std::vector``, an ``llvm::SmallVector`` and anything else that is contiguous
in memory.
.. _dss_fixedarrays:
Fixed Size Arrays
^^^^^^^^^^^^^^^^^
Fixed size arrays are very simple and very fast. They are good if you know
exactly how many elements you have, or you have a (low) upper bound on how many
you have.
.. _dss_heaparrays:
Heap Allocated Arrays
^^^^^^^^^^^^^^^^^^^^^
Heap allocated arrays (``new[]`` + ``delete[]``) are also simple. They are good
if the number of elements is variable, if you know how many elements you will
need before the array is allocated, and if the array is usually large (if not,
consider a :ref:`SmallVector <dss_smallvector>`). The cost of a heap allocated
array is the cost of the new/delete (aka malloc/free). Also note that if you
are allocating an array of a type with a constructor, the constructor and
destructors will be run for every element in the array (re-sizable vectors only
construct those elements actually used).
.. _dss_tinyptrvector:
llvm/ADT/TinyPtrVector.h
^^^^^^^^^^^^^^^^^^^^^^^^
``TinyPtrVector`` is a highly specialized collection class that is
optimized to avoid allocation in the case when a vector has zero or one
elements. It has two major restrictions: 1) it can only hold values of pointer
type, and 2) it cannot hold a null pointer.
Since this container is highly specialized, it is rarely used.
.. _dss_smallvector:
llvm/ADT/SmallVector.h
^^^^^^^^^^^^^^^^^^^^^^
``SmallVector<Type, N>`` is a simple class that looks and smells just like
``vector<Type>``: it supports efficient iteration, lays out elements in memory
order (so you can do pointer arithmetic between elements), supports efficient
push_back/pop_back operations, supports efficient random access to its elements,
etc.
The main advantage of SmallVector is that it allocates space for some number of
elements (N) **in the object itself**. Because of this, if the SmallVector is
dynamically smaller than N, no malloc is performed. This can be a big win in
cases where the malloc/free call is far more expensive than the code that
fiddles around with the elements.
This is good for vectors that are "usually small" (e.g. the number of
predecessors/successors of a block is usually less than 8). On the other hand,
this makes the size of the SmallVector itself large, so you don't want to
allocate lots of them (doing so will waste a lot of space). As such,
SmallVectors are most useful when on the stack.
In the absence of a well-motivated choice for the number of
inlined elements ``N``, it is recommended to use ``SmallVector<T>`` (that is,
omitting the ``N``). This will choose a default number of
inlined elements reasonable for allocation on the stack (for example, trying
to keep ``sizeof(SmallVector<T>)`` around 64 bytes).
SmallVector also provides a nice portable and efficient replacement for
``alloca``.
SmallVector has grown a few other minor advantages over std::vector, causing
``SmallVector`` to be preferred over ``std::vector``.
#. std::vector is exception-safe, and some implementations have pessimizations
that copy elements when SmallVector would move them.
#. SmallVector understands ``std::is_trivially_copyable`` and uses realloc aggressively.
#. Many LLVM APIs take a SmallVectorImpl as an out parameter (see the note
below).
#. SmallVector with N equal to 0 is smaller than std::vector on 64-bit
platforms, since it uses ``unsigned`` (instead of ``void*``) for its size
and capacity.
.. note::
   Prefer to use ``ArrayRef<T>`` or ``SmallVectorImpl<T>`` as a parameter type.

   It's rarely appropriate to use ``SmallVector<T, N>`` as a parameter type.
   If an API only reads from the vector, it should use :ref:`ArrayRef
   <dss_arrayref>`. Even if an API updates the vector the "small size" is
   unlikely to be relevant; such an API should use the ``SmallVectorImpl<T>``
   class, which is the "vector header" (and methods) without the elements
   allocated after it. Note that ``SmallVector<T, N>`` inherits from
   ``SmallVectorImpl<T>`` so the conversion is implicit and costs nothing. E.g.

   .. code-block:: c++

      // DISCOURAGED: Clients cannot pass e.g. raw arrays.
      hardcodedContiguousStorage(const SmallVectorImpl<Foo> &In);

      // ENCOURAGED: Clients can pass any contiguous storage of Foo.
      allowsAnyContiguousStorage(ArrayRef<Foo> In);

      void someFunc1() {
        Foo Vec[] = { /* ... */ };

        hardcodedContiguousStorage(Vec); // Error.
        allowsAnyContiguousStorage(Vec); // Works.
      }

      // DISCOURAGED: Clients cannot pass e.g. SmallVector<Foo, 8>.
      hardcodedSmallSize(SmallVector<Foo, 2> &Out);

      // ENCOURAGED: Clients can pass any SmallVector.
      allowsAnySmallSize(SmallVectorImpl<Foo> &Out);

      void someFunc2() {
        SmallVector<Foo, 8> Vec;

        hardcodedSmallSize(Vec); // Error.
        allowsAnySmallSize(Vec); // Works.
      }

   Even though it has "``Impl``" in the name, ``SmallVectorImpl`` is widely
   used and is no longer "private to the implementation". A name like
   ``SmallVectorHeader`` might be more appropriate.
.. _dss_vector:
std::vector
^^^^^^^^^^^
``std::vector`` is well loved and respected. However, ``SmallVector``
is often a better option due to the advantages listed above. std::vector is
still useful when you need to store more than ``UINT32_MAX`` elements or when
interfacing with code that expects vectors :).
One worthwhile note about std::vector: avoid code like this:
.. code-block:: c++

  for ( ... ) {
    std::vector<foo> V;
    // make use of V.
  }
Instead, write this as:
.. code-block:: c++

  std::vector<foo> V;
  for ( ... ) {
    // make use of V.
    V.clear();
  }
Doing so will save (at least) one heap allocation and free per iteration of the
loop.
.. _dss_deque:
std::deque
^^^^^^^^^^
``std::deque`` is, in some senses, a generalized version of ``std::vector``.
Like ``std::vector``, it provides constant time random access and other similar
properties, but it also provides efficient access to the front of the list. It
does not guarantee continuity of elements within memory.
In exchange for this extra flexibility, ``std::deque`` has significantly higher
constant factor costs than ``std::vector``. If possible, use ``std::vector`` or
something cheaper.
.. _dss_list:
std::list
^^^^^^^^^
``std::list`` is an extremely inefficient class that is rarely useful. It
performs a heap allocation for every element inserted into it, thus having an
extremely high constant factor, particularly for small data types.
``std::list`` also only supports bidirectional iteration, not random access
iteration.
In exchange for this high cost, std::list supports efficient access to both
ends of the list (like ``std::deque``, but unlike ``std::vector`` or
``SmallVector``). In addition, the iterator invalidation characteristics of
std::list are stronger than those of a vector class: inserting or removing an
element from the list does not invalidate iterators or pointers to other
elements in the list.
.. _dss_ilist:
llvm/ADT/ilist.h
^^^^^^^^^^^^^^^^
``ilist`` implements an 'intrusive' doubly-linked list. It is intrusive,
because it requires the element to store and provide access to the prev/next
pointers for the list.
``ilist`` has the same drawbacks as ``std::list``, and additionally requires an
``ilist_traits`` implementation for the element type, but it provides some novel
characteristics. In particular, it can efficiently store polymorphic objects,
the traits class is informed when an element is inserted or removed from the
list, and ``ilist``\ s are guaranteed to support a constant-time splice
operation.
These properties are exactly what we want for things like ``Instruction``\ s and
basic blocks, which is why these are implemented with ``ilist``\ s.
Related classes of interest are explained in the following subsections:
* :ref:`ilist_traits `
* :ref:`iplist `
* :ref:`llvm/ADT/ilist_node.h `
* :ref:`Sentinels `
.. _dss_packedvector:
llvm/ADT/PackedVector.h
^^^^^^^^^^^^^^^^^^^^^^^
Useful for storing a vector of values using only a small number of bits for
each value. Apart from the standard operations of a vector-like container, it
can also perform an 'or' set operation.
For example:
.. code-block:: c++

  enum State {
    None = 0x0,
    FirstCondition = 0x1,
    SecondCondition = 0x2,
    Both = 0x3
  };

  State get() {
    PackedVector<State, 2> Vec1;
    Vec1.push_back(FirstCondition);

    PackedVector<State, 2> Vec2;
    Vec2.push_back(SecondCondition);
    Vec1 |= Vec2;
    return Vec1[0]; // returns 'Both'.
  }
.. _dss_ilist_traits:
ilist_traits
^^^^^^^^^^^^
``ilist_traits<T>`` is ``ilist<T>``'s customization mechanism. ``iplist<T>``
(and consequently ``ilist<T>``) publicly derive from this traits class.
.. _dss_iplist:
iplist
^^^^^^
``iplist<T>`` is ``ilist<T>``'s base and as such supports a slightly narrower
interface. Notably, inserters from ``T&`` are absent.

``ilist_traits<T>`` is a public base of this class and can be used for a wide
variety of customizations.
.. _dss_ilist_node:
llvm/ADT/ilist_node.h
^^^^^^^^^^^^^^^^^^^^^
``ilist_node<T>`` implements the forward and backward links that are expected
by the ``ilist<T>`` (and analogous containers) in the default manner.

``ilist_node<T>``\ s are meant to be embedded in the node type ``T``; usually
``T`` publicly derives from ``ilist_node<T>``.
.. _dss_ilist_sentinel:
Sentinels
^^^^^^^^^
``ilist``\ s have another specialty that must be considered. To be a good
citizen in the C++ ecosystem, an ``ilist`` needs to support the standard
container operations, such as ``begin`` and ``end`` iterators, etc. Also, the
``operator--`` must work correctly on the ``end`` iterator in the case of
non-empty ``ilist``\ s.
The only sensible solution to this problem is to allocate a so-called *sentinel*
along with the intrusive list, which serves as the ``end`` iterator, providing
the back-link to the last element. However, conforming to the C++ convention,
it is illegal to ``operator++`` beyond the sentinel, and it also must not be
dereferenced.
These constraints allow the ``ilist`` some implementation freedom in how it
allocates and stores the sentinel. The corresponding policy is dictated by
``ilist_traits<T>``. By default a ``T`` gets heap-allocated whenever the need
for a sentinel arises.
While the default policy is sufficient in most cases, it may break down when
``T`` does not provide a default constructor. Also, in the case of many
instances of ``ilist``\ s, the memory overhead of the associated sentinels is
wasted. To alleviate the situation with numerous and voluminous
``T``-sentinels, sometimes a trick is employed, leading to *ghostly sentinels*.
Ghostly sentinels are obtained by specially-crafted ``ilist_traits`` which
superpose the sentinel with the ``ilist`` instance in memory. Pointer
arithmetic is used to obtain the sentinel, which is relative to the ``ilist``'s
``this`` pointer. The ``ilist`` is augmented by an extra pointer, which serves
as the back-link of the sentinel. This is the only field in the ghostly
sentinel which can be legally accessed.
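As a self-contained illustration (plain C++, no LLVM headers; all names here are
invented for this sketch), the sentinel idea looks roughly like this:

.. code-block:: c++

  #include <cassert>

  // Minimal intrusive node: real code embeds the links in T, as ilist_node does.
  struct Node {
    Node *Prev = nullptr;
    Node *Next = nullptr;
    int Value = 0;
  };

  // A toy circular list whose embedded sentinel doubles as end(): the
  // sentinel's Prev is the back-link to the last element, so --end() works.
  struct ToyList {
    Node Sentinel; // never dereferenced as a real element
    ToyList() { Sentinel.Prev = Sentinel.Next = &Sentinel; }
    void push_back(Node *N) {
      N->Prev = Sentinel.Prev;
      N->Next = &Sentinel;
      Sentinel.Prev->Next = N;
      Sentinel.Prev = N;
    }
    Node *begin() { return Sentinel.Next; }
    Node *end() { return &Sentinel; } // one past the last element
  };

  int main() {
    ToyList L;
    Node A, B;
    A.Value = 1;
    B.Value = 2;
    L.push_back(&A);
    L.push_back(&B);
    // Decrementing from end() reaches the last element, as ilist requires.
    assert(L.end()->Prev == &B);
    int Sum = 0;
    for (Node *N = L.begin(); N != L.end(); N = N->Next)
      Sum += N->Value;
    assert(Sum == 3);
    return 0;
  }

Here the sentinel is stored inline in the list object, which is essentially what
the ghostly-sentinel trick achieves for ``ilist``.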
.. _dss_other:
Other Sequential Container options
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Other STL containers are available, such as ``std::string``.
There are also various STL adapter classes such as ``std::queue``,
``std::priority_queue``, ``std::stack``, etc. These provide simplified access
to an underlying container but don't affect the cost of the container itself.
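For example, ``std::priority_queue`` adapts its default underlying
``std::vector`` into a heap without changing the vector's own cost model:

.. code-block:: c++

  #include <cassert>
  #include <queue>

  int main() {
    // priority_queue adapts a std::vector (the default underlying container)
    // into a max-heap; the adapter adds no per-element storage of its own.
    std::priority_queue<int> PQ;
    for (int V : {3, 1, 4, 1, 5})
      PQ.push(V);
    assert(PQ.top() == 5);
    PQ.pop();
    assert(PQ.top() == 4);
    return 0;
  }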
.. _ds_string:
String-like containers
----------------------
There are a variety of ways to pass around and use strings in C and C++, and
LLVM adds a few new options to choose from. Pick the first option on this list
that will do what you need; they are ordered according to their relative cost.
Note that it is generally preferred to *not* pass strings around as ``const
char*``'s. These have a number of problems, including the fact that they
cannot represent embedded nul ("\0") characters, and do not have a length
available efficiently. The general replacement for '``const char*``' is
StringRef.
For more information on choosing string containers for APIs, please see
:ref:`Passing Strings <string_apis>`.
.. _dss_stringref:
llvm/ADT/StringRef.h
^^^^^^^^^^^^^^^^^^^^
The StringRef class is a simple value class that contains a pointer to a
character and a length, and is quite related to the :ref:`ArrayRef
<dss_arrayref>` class (but specialized for arrays of characters). Because
StringRef carries a length with it, it safely handles strings with embedded nul
characters, getting the length does not require a strlen call, and it even
has very convenient APIs for slicing and dicing the character range that it
represents.
StringRef is ideal for passing simple strings around that are known to be live,
either because they are C string literals, std::string, a C array, or a
SmallVector. Each of these cases has an efficient implicit conversion to
StringRef, which doesn't result in a dynamic strlen being executed.
StringRef has a few major limitations which make more powerful string containers
useful:
#. You cannot directly convert a StringRef to a 'const char*' because there is
no way to add a trailing nul (unlike the .c_str() method on various stronger
classes).
#. StringRef doesn't own or keep alive the underlying string bytes.
As such it can easily lead to dangling pointers, and is not suitable for
embedding in datastructures in most cases (instead, use an std::string or
something like that).
#. For the same reason, StringRef cannot be used as the return value of a
method if the method "computes" the result string. Instead, use std::string.
#. StringRef's do not allow you to mutate the pointed-to string bytes and it
doesn't allow you to insert or remove bytes from the range. For editing
operations like this, it interoperates with the :ref:`Twine `
class.
Because of its strengths and limitations, it is very common for a function to
take a StringRef and for a method on an object to return a StringRef that points
into some string that it owns.
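The C++17 standard class ``std::string_view`` is the closest standard analogue
of StringRef and can be used to illustrate the pointer-plus-length semantics
(this sketch uses only the standard library, not the LLVM API):

.. code-block:: c++

  #include <cassert>
  #include <string>
  #include <string_view>

  // Takes any string-like argument without copying; analogous to a
  // function taking a StringRef.
  static std::size_t countDots(std::string_view S) {
    std::size_t N = 0;
    for (char C : S)
      if (C == '.')
        ++N;
    return N;
  }

  int main() {
    std::string Owned = "a.b.c";
    assert(countDots(Owned) == 2); // implicit conversion, no copy
    assert(countDots("x.y") == 1); // works for literals too

    // Because the length is carried along, embedded nul characters are
    // handled correctly and size() is O(1) -- no strlen involved.
    std::string_view WithNul("a\0b", 3);
    assert(WithNul.size() == 3);

    // Slicing returns another view into the same bytes.
    assert(std::string_view("hello.world").substr(0, 5) == "hello");
    return 0;
  }

StringRef offers a richer slicing API than ``std::string_view``, but the
ownership and lifetime caveats above apply equally to both.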
.. _dss_twine:
llvm/ADT/Twine.h
^^^^^^^^^^^^^^^^
The Twine class is used as an intermediary datatype for APIs that want to take a
string that can be constructed inline with a series of concatenations. Twine
works by forming recursive instances of the Twine datatype (a simple value
object) on the stack as temporary objects, linking them together into a tree
which is then linearized when the Twine is consumed. Twine is only safe to use
as the argument to a function, and should always be a const reference, e.g.:
.. code-block:: c++
void foo(const Twine &T);
...
StringRef X = ...
unsigned i = ...
foo(X + "." + Twine(i));
This example forms a string like "blarg.42" by concatenating the values
together, and does not form intermediate strings containing "blarg" or "blarg.".
Because Twine is constructed with temporary objects on the stack, and because
these instances are destroyed at the end of the current statement, it is an
inherently dangerous API. For example, this simple variant contains undefined
behavior and will probably crash:
.. code-block:: c++
void foo(const Twine &T);
...
StringRef X = ...
unsigned i = ...
const Twine &Tmp = X + "." + Twine(i);
foo(Tmp);
... because the temporaries are destroyed before the call. That said, Twines
are much more efficient than intermediate std::string temporaries, and they work
really well with StringRef. Just be aware of their limitations.
.. _dss_smallstring:
llvm/ADT/SmallString.h
^^^^^^^^^^^^^^^^^^^^^^
SmallString is a subclass of :ref:`SmallVector <dss_smallvector>` that adds some
convenience APIs like += that takes StringRef's. SmallString avoids allocating
memory in the case when the preallocated space is enough to hold its data, and
it calls back to general heap allocation when required. Since it owns its data,
it is very safe to use and supports full mutation of the string.
Like SmallVector, the big downside to SmallString is its sizeof. While it is
optimized for small strings, the object itself is not particularly small.
This means that they work great for temporary scratch buffers on the stack, but
should not generally be put into the heap: it is very rare to see a SmallString
as the member of a frequently-allocated heap data structure or returned
by-value.
.. _dss_stdstring:
std::string
^^^^^^^^^^^
The standard C++ std::string class is a very general class that (like
SmallString) owns its underlying data. sizeof(std::string) is very reasonable
so it can be embedded into heap data structures and returned by-value. On the
other hand, std::string is highly inefficient for inline editing (e.g.
concatenating a bunch of stuff together) and because it is provided by the
standard library, its performance characteristics depend a lot on the host
standard library (e.g. libc++ and MSVC provide a highly optimized string class,
GCC contains a really slow implementation).
The major disadvantage of std::string is that almost every operation that makes
them larger can allocate memory, which is slow. As such, it is better to use
SmallVector or Twine as a scratch buffer, but then use std::string to persist
the result.
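A plain-STL sketch of that advice: size the scratch buffer up front so the
build loop does not repeatedly reallocate, then keep the finished
``std::string``:

.. code-block:: c++

  #include <cassert>
  #include <string>
  #include <vector>

  int main() {
    std::vector<std::string> Parts = {"foo", "bar", "baz"};

    // Compute the final size first, reserve once, then append. This avoids
    // the repeated growth allocations that make naive += loops slow.
    std::size_t Total = 0;
    for (const std::string &P : Parts)
      Total += P.size() + 1;

    std::string Result;
    Result.reserve(Total);
    for (const std::string &P : Parts) {
      Result += P;
      Result += '.';
    }
    assert(Result == "foo.bar.baz.");
    return 0;
  }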
.. _ds_set:
Set-Like Containers (std::set, SmallSet, SetVector, etc)
--------------------------------------------------------
Set-like containers are useful when you need to canonicalize multiple values
into a single representation. There are several different choices for how to do
this, providing various trade-offs.
.. _dss_sortedvectorset:
A sorted 'vector'
^^^^^^^^^^^^^^^^^
If you intend to insert a lot of elements, then do a lot of queries, a great
approach is to use an std::vector (or other sequential container) with
std::sort+std::unique to remove duplicates. This approach works really well if
your usage pattern has these two distinct phases (insert then query), and can be
coupled with a good choice of :ref:`sequential container <ds_sequential>`.
This combination provides several nice properties: the result data is
contiguous in memory (good for cache locality), has few allocations, is easy to
address (iterators in the final vector are just indices or pointers), and can be
efficiently queried with a standard binary search (e.g.
``std::lower_bound``; if you want the whole range of elements comparing
equal, use ``std::equal_range``).
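A minimal sketch of the two-phase idiom:

.. code-block:: c++

  #include <algorithm>
  #include <cassert>
  #include <vector>

  int main() {
    // Phase 1: insert everything, duplicates included.
    std::vector<int> V = {5, 1, 3, 5, 2, 3};

    // Canonicalize once: sort, then erase adjacent duplicates.
    std::sort(V.begin(), V.end());
    V.erase(std::unique(V.begin(), V.end()), V.end());
    assert((V == std::vector<int>{1, 2, 3, 5}));

    // Phase 2: query with binary search.
    assert(std::binary_search(V.begin(), V.end(), 3));
    auto It = std::lower_bound(V.begin(), V.end(), 4);
    assert(*It == 5); // first element not less than 4
    return 0;
  }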
.. _dss_smallset:
llvm/ADT/SmallSet.h
^^^^^^^^^^^^^^^^^^^
If you have a set-like data structure that is usually small and whose elements
are reasonably small, a ``SmallSet`` is a good choice. This set has
space for N elements in place (thus, if the set is dynamically smaller than N,
no malloc traffic is required) and accesses them with a simple linear search.
When the set grows beyond N elements, it allocates a more expensive
representation that guarantees efficient access (for most types, it falls back
to :ref:`std::set <dss_set>`, but for pointers it uses something far better,
:ref:`SmallPtrSet <dss_smallptrset>`).
The magic of this class is that it handles small sets extremely efficiently, but
gracefully handles extremely large sets without loss of efficiency.
.. _dss_smallptrset:
llvm/ADT/SmallPtrSet.h
^^^^^^^^^^^^^^^^^^^^^^
``SmallPtrSet`` has all the advantages of ``SmallSet`` (and a ``SmallSet`` of
pointers is transparently implemented with a ``SmallPtrSet``). If more than N
insertions are performed, a single quadratically probed hash table is allocated
and grows as needed, providing extremely efficient access (constant time
insertion/deleting/queries with low constant factors) and is very stingy with
malloc traffic.
Note that, unlike :ref:`std::set <dss_set>`, the iterators of ``SmallPtrSet``
are invalidated whenever an insertion occurs. Also, the values visited by the
iterators are not visited in sorted order.
.. _dss_stringset:
llvm/ADT/StringSet.h
^^^^^^^^^^^^^^^^^^^^
``StringSet`` is a thin wrapper around :ref:`StringMap <dss_stringmap>`,
and it allows efficient storage and retrieval of unique strings.
Functionally analogous to ``SmallSet``, ``StringSet`` also supports
iteration. (The iterator dereferences to a ``StringMapEntry``, so you
need to call ``i->getKey()`` to access the item of the StringSet.) On the
other hand, ``StringSet`` doesn't support range-insertion and
copy-construction, which :ref:`SmallSet <dss_smallset>` and :ref:`SmallPtrSet
<dss_smallptrset>` do support.
.. _dss_denseset:
llvm/ADT/DenseSet.h
^^^^^^^^^^^^^^^^^^^
DenseSet is a simple quadratically probed hash table. It excels at supporting
small values: it uses a single allocation to hold all of the pairs that are
currently inserted in the set. DenseSet is a great way to unique small values
that are not simple pointers (use :ref:`SmallPtrSet <dss_smallptrset>` for
pointers). Note that DenseSet has the same requirements for the value type that
:ref:`DenseMap <dss_densemap>` has.
.. _dss_sparseset:
llvm/ADT/SparseSet.h
^^^^^^^^^^^^^^^^^^^^
SparseSet holds a small number of objects identified by unsigned keys of
moderate size. It uses a lot of memory, but provides operations that are almost
as fast as a vector. Typical keys are physical registers, virtual registers, or
numbered basic blocks.
SparseSet is useful for algorithms that need very fast clear/find/insert/erase
and fast iteration over small sets. It is not intended for building composite
data structures.
.. _dss_sparsemultiset:
llvm/ADT/SparseMultiSet.h
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
SparseMultiSet adds multiset behavior to SparseSet, while retaining SparseSet's
desirable attributes. Like SparseSet, it typically uses a lot of memory, but
provides operations that are almost as fast as a vector. Typical keys are
physical registers, virtual registers, or numbered basic blocks.
SparseMultiSet is useful for algorithms that need very fast
clear/find/insert/erase of the entire collection, and iteration over sets of
elements sharing a key. It is often a more efficient choice than using composite
data structures (e.g. vector-of-vectors, map-of-vectors). It is not intended for
building composite data structures.
.. _dss_FoldingSet:
llvm/ADT/FoldingSet.h
^^^^^^^^^^^^^^^^^^^^^
FoldingSet is an aggregate class that is really good at uniquing
expensive-to-create or polymorphic objects. It is a combination of a chained
hash table with intrusive links (uniqued objects are required to inherit from
FoldingSetNode) that uses :ref:`SmallVector <dss_smallvector>` as part of its ID
process.
Consider a case where you want to implement a "getOrCreateFoo" method for a
complex object (for example, a node in the code generator). The client has a
description of **what** it wants to generate (it knows the opcode and all the
operands), but we don't want to 'new' a node, then try inserting it into a set
only to find out it already exists, at which point we would have to delete it
and return the node that already exists.
To support this style of client, FoldingSet performs a query with a
FoldingSetNodeID (which wraps SmallVector) that can be used to describe the
element that we want to query for. The query either returns the element
matching the ID or it returns an opaque ID that indicates where insertion should
take place. Construction of the ID usually does not require heap traffic.
Because FoldingSet uses intrusive links, it can support polymorphic objects in
the set (for example, you can have SDNode instances mixed with LoadSDNodes).
Because the elements are individually allocated, pointers to the elements are
stable: inserting or removing elements does not invalidate any pointers to other
elements.
.. _dss_set:
<set>
^^^^^
``std::set`` is a reasonable all-around set class, which is decent at many
things but great at nothing. std::set allocates memory for each element
inserted (thus it is very malloc intensive) and typically stores three pointers
per element in the set (thus adding a large amount of per-element space
overhead). It offers guaranteed log(n) performance, which is not particularly
fast from a complexity standpoint (particularly if the elements of the set are
expensive to compare, like strings), and has extremely high constant factors for
lookup, insertion and removal.
The advantages of std::set are that its iterators are stable (deleting or
inserting an element from the set does not affect iterators or pointers to other
elements) and that iteration over the set is guaranteed to be in sorted order.
If the elements in the set are large, then the relative overhead of the pointers
and malloc traffic is not a big deal, but if the elements of the set are small,
std::set is almost never a good choice.
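These two guarantees are easy to demonstrate:

.. code-block:: c++

  #include <cassert>
  #include <set>

  int main() {
    std::set<int> S = {3, 1, 2};

    // Iteration is always in sorted order, regardless of insertion order.
    assert(*S.begin() == 1);

    // Iterators to other elements stay valid across inserts and erases.
    auto ItTwo = S.find(2);
    S.insert(10);
    S.erase(3);
    assert(*ItTwo == 2); // still valid
    assert(S.size() == 3);
    return 0;
  }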
.. _dss_setvector:
llvm/ADT/SetVector.h
^^^^^^^^^^^^^^^^^^^^
LLVM's ``SetVector`` is an adapter class that combines your choice of a
set-like container along with a :ref:`Sequential Container <ds_sequential>`. The
important property that this provides is efficient insertion with uniquing
(duplicate elements are ignored) with iteration support. It implements this by
inserting elements into both a set-like container and the sequential container,
using the set-like container for uniquing and the sequential container for
iteration.
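A simplified stand-in (not the real ``SetVector`` implementation) shows the
dual-container scheme:

.. code-block:: c++

  #include <cassert>
  #include <set>
  #include <vector>

  // Sketch of the SetVector idea: a std::set for uniquing plus a
  // std::vector that preserves insertion order for iteration.
  template <typename T> class ToySetVector {
    std::set<T> Seen;
    std::vector<T> Order;

  public:
    bool insert(const T &V) {
      if (!Seen.insert(V).second)
        return false; // duplicate: ignored
      Order.push_back(V);
      return true;
    }
    typename std::vector<T>::const_iterator begin() const {
      return Order.begin();
    }
    typename std::vector<T>::const_iterator end() const {
      return Order.end();
    }
  };

  int main() {
    ToySetVector<int> SV;
    SV.insert(3);
    SV.insert(1);
    assert(!SV.insert(3)); // duplicate rejected
    SV.insert(2);
    // Iteration order matches insertion order, not sorted order.
    std::vector<int> Out(SV.begin(), SV.end());
    assert((Out == std::vector<int>{3, 1, 2}));
    return 0;
  }

The double bookkeeping is exactly where the space and constant-factor costs
discussed below come from.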
The difference between SetVector and other sets is that the order of iteration
is guaranteed to match the order of insertion into the SetVector. This property
is really important for things like sets of pointers. Because pointer values
are non-deterministic (e.g. vary across runs of the program on different
machines), iterating over the pointers in the set will not be in a well-defined
order.
The drawback of SetVector is that it requires twice as much space as a normal
set and has the sum of constant factors from the set-like container and the
sequential container that it uses. Use it **only** if you need to iterate over
the elements in a deterministic order. SetVector is also expensive to delete
elements out of (linear time), unless you use its "pop_back" method, which is
faster.
``SetVector`` is an adapter class that defaults to using ``std::vector`` and a
size 16 ``SmallSet`` for the underlying containers, so it is quite expensive.
However, ``"llvm/ADT/SetVector.h"`` also provides a ``SmallSetVector`` class,
which defaults to using a ``SmallVector`` and ``SmallSet`` of a specified size.
If you use this, and if your sets are dynamically smaller than ``N``, you will
save a lot of heap traffic.
.. _dss_uniquevector:
llvm/ADT/UniqueVector.h
^^^^^^^^^^^^^^^^^^^^^^^
UniqueVector is similar to :ref:`SetVector <dss_setvector>` but it retains a
unique ID for each element inserted into the set. It internally contains a map
and a vector, and it assigns a unique ID for each value inserted into the set.
UniqueVector is very expensive: its cost is the sum of the cost of maintaining
both the map and vector, it has high complexity, high constant factors, and
produces a lot of malloc traffic. It should be avoided.
.. _dss_immutableset:
llvm/ADT/ImmutableSet.h
^^^^^^^^^^^^^^^^^^^^^^^
ImmutableSet is an immutable (functional) set implementation based on an AVL
tree. Adding or removing elements is done through a Factory object and results
in the creation of a new ImmutableSet object. If an ImmutableSet already exists
with the given contents, then the existing one is returned; equality is compared
with a FoldingSetNodeID. The time and space complexity of add or remove
operations is logarithmic in the size of the original set.
There is no method for returning an element of the set; you can only check for
membership.
.. _dss_otherset:
Other Set-Like Container Options
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
The STL provides several other options, such as std::multiset and the various
"hash_set" like containers (whether from C++ TR1 or from the SGI library). We
never use hash_set and unordered_set because they are generally very expensive
(each insertion requires a malloc) and very non-portable.
std::multiset is useful if you're not interested in elimination of duplicates,
but has all the drawbacks of :ref:`std::set <dss_set>`. A sorted vector
(where you don't delete duplicate entries) or some other approach is almost
always better.
.. _ds_map:
Map-Like Containers (std::map, DenseMap, etc)
---------------------------------------------
Map-like containers are useful when you want to associate data to a key. As
usual, there are a lot of different ways to do this. :)
.. _dss_sortedvectormap:
A sorted 'vector'
^^^^^^^^^^^^^^^^^
If your usage pattern follows a strict insert-then-query approach, you can
trivially use the same approach as :ref:`sorted vectors for set-like containers
<dss_sortedvectorset>`. The only difference is that your query function (which
uses std::lower_bound to get efficient log(n) lookup) should only compare the
key, not both the key and value. This yields the same advantages as sorted
vectors for sets.
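A minimal sketch, using a key-only comparator with ``std::lower_bound``:

.. code-block:: c++

  #include <algorithm>
  #include <cassert>
  #include <string>
  #include <utility>
  #include <vector>

  int main() {
    using Entry = std::pair<int, std::string>;
    std::vector<Entry> Map = {{3, "three"}, {1, "one"}, {2, "two"}};

    // Insert phase done: sort once, comparing keys only.
    std::sort(Map.begin(), Map.end(),
              [](const Entry &A, const Entry &B) { return A.first < B.first; });

    // Query phase: lower_bound compares only the key, not the value.
    auto It = std::lower_bound(
        Map.begin(), Map.end(), 2,
        [](const Entry &E, int Key) { return E.first < Key; });
    assert(It != Map.end() && It->first == 2 && It->second == "two");
    return 0;
  }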
.. _dss_stringmap:
llvm/ADT/StringMap.h
^^^^^^^^^^^^^^^^^^^^
Strings are commonly used as keys in maps, and they are difficult to support
efficiently: they are variable length, inefficient to hash and compare when
long, expensive to copy, etc. StringMap is a specialized container designed to
cope with these issues. It supports mapping an arbitrary range of bytes to an
arbitrary other object.
The StringMap implementation uses a quadratically-probed hash table, where the
buckets store a pointer to the heap allocated entries (and some other stuff).
The entries in the map must be heap allocated because the strings are variable
length. The string data (key) and the element object (value) are stored in the
same allocation with the string data immediately after the element object.
This container guarantees that "``(char*)(&Value+1)``" points to the key string
for a value.
The StringMap is very fast for several reasons: quadratic probing is very cache
efficient for lookups, the hash value of strings in buckets is not recomputed
when looking up an element, StringMap rarely has to touch the memory for
unrelated objects when looking up a value (even when hash collisions happen),
hash table growth does not recompute the hash values for strings already in the
table, and each pair in the map is stored in a single allocation (the string data
is stored in the same allocation as the Value of a pair).
StringMap also provides query methods that take byte ranges, so it only ever
copies a string if a value is inserted into the table.
StringMap iteration order, however, is not guaranteed to be deterministic, so
any uses which require that should instead use a std::map.
.. _dss_indexmap:
llvm/ADT/IndexedMap.h
^^^^^^^^^^^^^^^^^^^^^
IndexedMap is a specialized container for mapping small dense integers (or
values that can be mapped to small dense integers) to some other type. It is
internally implemented as a vector with a mapping function that maps the keys
to the dense integer range.
This is useful for cases like virtual registers in the LLVM code generator: they
have a dense mapping that is offset by a compile-time constant (the first
virtual register ID).
.. _dss_densemap:
llvm/ADT/DenseMap.h
^^^^^^^^^^^^^^^^^^^
DenseMap is a simple quadratically probed hash table. It excels at supporting
small keys and values: it uses a single allocation to hold all of the pairs
that are currently inserted in the map. DenseMap is a great way to map
pointers to pointers, or map other small types to each other.
There are several aspects of DenseMap that you should be aware of, however.
The iterators in a DenseMap are invalidated whenever an insertion occurs,
unlike std::map. Also, because DenseMap allocates space for a large number of
key/value pairs (it starts with 64 by default), it will waste a lot of space if
your keys or values are large. Finally, you must implement a partial
specialization of DenseMapInfo for the key that you want, if it isn't already
supported. This is required to tell DenseMap about two special marker values
(which can never be inserted into the map) that it needs internally.
DenseMap's find_as() method supports lookup operations using an alternate key
type. This is useful in cases where the normal key type is expensive to
construct, but cheap to compare against. The DenseMapInfo is responsible for
defining the appropriate comparison and hashing methods for each alternate key
type used.
.. _dss_valuemap:
llvm/IR/ValueMap.h
^^^^^^^^^^^^^^^^^^^
ValueMap is a wrapper around a :ref:`DenseMap <dss_densemap>` mapping
``Value*``\ s (or subclasses) to another type. When a Value is deleted or
RAUW'ed, ValueMap will update itself so the new version of the key is mapped to
the same value, just as if the key were a WeakVH. You can configure exactly how
this happens, and what else happens on these two events, by passing a ``Config``
parameter to the ValueMap template.
.. _dss_intervalmap:
llvm/ADT/IntervalMap.h
^^^^^^^^^^^^^^^^^^^^^^
IntervalMap is a compact map for small keys and values. It maps key intervals
instead of single keys, and it will automatically coalesce adjacent intervals.
When the map only contains a few intervals, they are stored in the map object
itself to avoid allocations.
The IntervalMap iterators are quite big, so they should not be passed around as
STL iterators. The heavyweight iterators allow a smaller data structure.
.. _dss_map: