Overview

This is just a note to better understand V8's TurboFan. It relies heavily on other blog posts, which are listed in the References section.

IR

An intermediate representation (IR) is a representation of a program “between” the source and target languages. A good IR is fairly independent of both the source and target languages, which maximizes its usefulness in a retargetable compiler. Ignition collects profiling information (feedback) about the inputs to certain operations during execution. Some of this feedback is used by Ignition itself to speed up subsequent interpretation of the bytecode.

How V8 works

V8 takes JavaScript code and passes it to the parser, which creates an Abstract Syntax Tree (AST) representation of the source code. The AST is then fed into the BytecodeGenerator, which is part of the Ignition interpreter, where it is turned into a stream of bytecodes. This stream of bytecodes is then executed by Ignition.

V8 Notes / Notes For Issue 1016450

----- Flags -----
--allow-natives-syntax  => enable runtime functions
--trace-turbo   => generates .cfg and .json to get better graph view of different optimization passes using turbolizer
--trace-opt     => trace optimizations
--trace-deopt   => trace deoptimizations
--trace-turbo-reduction => traces of reduction
--print-ast     => print Abstract Syntax Tree, internally generated by the V8
--print-bytecode    => print bytecode
--print-opt-code    => prints optimized codes
--turbo-filter  => optimization filter for turbofan
--turbo-inlining    => enable inlining 
--trace-turbo-inlining  => trace turbofan inlining   
--turbo-verify  => verify turbofan graphs at each phase
--turbo-types   => use typed lowering in turbofan
--turbo-asm     => enable asm.js
--turbo-stats   => print turbofan statistics
----- Functions -----
%DebugPrint(x)  => Print all internal information about the object or primitive value
%SystemBreak()  => trigger debugging interrupt / set breakpoint in JS
%DisassembleFunction()  => Disassemble the function
%OptimizeFunctionOnNextCall() => trigger optimization of the function in V8
----- General Terms -----
literal objects => strings, numbers, object-literal boilerplate, etc 
SignedSmall     => small integer (31-bit or 32-bit signed, depending on pointer size), represented as a Smi
Number          => any regular number (includes SignedSmall, plus heap numbers)
NumberOrOddball => includes all values from Number plus undefined, null, true and false

To see other runtime functions, look at src/runtime

Turbolizer

A tool used to visualize and debug TurboFan's sea-of-nodes graph

cd tools/turbolizer
npm i
npm run-script build
python -m SimpleHTTPServer    # Python 2; with Python 3: python3 -m http.server

Speculative Optimization

Sample code:

function add(x, y) {
  return x + y;
}
console.log(add(1, 2));
  1. An optimizing compiler can only eliminate an expression if it knows for sure that the expression won’t cause any observable side effects and doesn’t raise exceptions.
  2. When optimizing, say, for numbers only, TurboFan puts checks in place to verify that values such as x and y are indeed numbers; if either of these checks fails, it goes back to interpreting the bytecode instead (deoptimization).
  3. The feedback collected by Ignition is stored in the Feedback Vector. This data structure is linked from the closure and contains slots to store different kinds of feedback, e.g., bitsets, closures, or hidden classes, depending on the concrete inline cache. The closure also links to the SharedFunctionInfo, which contains general information about the function, such as source position, bytecode, strict/sloppy mode, etc. There is a link to the context as well, which contains the values for the free variables of the function and provides access to the global object (i.e., the <iframe>-specific data structure).
  4. For the add function, the Feedback Vector has one interesting slot, the BinaryOp slot, where binary operations like +, -, *, etc. can record feedback about the inputs and outputs seen so far.
  5. The feedback vector of a specific closure can be inspected with the %DebugPrint() function.
  6. Feedback can only progress up the lattice; it’s impossible to go back. Going back would risk entering a deoptimization loop, where the optimizing compiler consumes feedback and bails out of optimized code back to the interpreter whenever it sees values that don’t agree with that feedback.
  7. Checking that a value has Smi representation and converting the Smi to Word32:
    # Check if the value is a small integer
    movq rax, [rbp+0x18]
    test al, 0x1        // the least significant bit must be 0 for a Smi
    jnz Deoptimize      // if (al & 1) goto Deoptimize
    # Convert from Smi to Word32
    movq rcx, rax
    shrq rcx, 32        // recover the 32-bit value by shifting right by 32 bits
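The same check-and-untag can be modeled in JavaScript with BigInt. This is a hypothetical sketch of the x64 Smi encoding (ignoring pointer compression): the 32-bit payload lives in the upper half of the 64-bit word, and tag bit 0 must be 0.

```javascript
// Model of the 64-bit tagged Smi: payload in the upper 32 bits, tag bit = 0.
function smiTag(n) {
  return BigInt(n) << 32n;
}

// Mirrors the assembly above: check the tag bit, then arithmetic-shift right
// by 32 to recover the 32-bit integer (deoptimization modeled by throwing).
function smiUntag(tagged) {
  if (tagged & 1n) throw new Error("Deoptimize: not a Smi");
  return Number(BigInt.asIntN(32, tagged >> 32n));
}
```

A tagged pointer (odd low bit) fails the check, which is exactly the case that triggers the Deoptimize path in the generated code.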
    

Sea of Nodes

TurboFan works with a sea of nodes, where each node is part of a graph. Nodes represent arithmetic operations, loads, stores, calls, constants, etc. Each node produces a value, and each node points to its operands. For instance, for 2 + 3 = 5, the addition node points to the nodes for 2 and 3.

// A Node is the basic primitive of graphs. Nodes are chained together by
// input/use chains but by default otherwise contain only an identifying number
// which specific applications of graphs and nodes can use to index auxiliary
// out-of-line data, especially transient data.
//
// In addition Nodes only contain a mutable Operator that may change during
// compilation, e.g. during lowering passes. Other information that needs to be
// associated with Nodes during compilation must be stored out-of-line indexed
// by the Node's id.

// NodeIds are identifying numbers for nodes that can be used to index auxiliary
// out-of-line data associated with each node.

Three types of edges are:

  1. Control edges: control edges enable branches and loops.
  2. Value edges: value edges show value dependencies.
  3. Effect edges: effect edges order operations that read or write state. For instance:
    ob.a = ob.a + 1
    

    In this example, before writing to property a we need to read it first, so an effect edge exists between the load and the store. We also have to increment the value of property a before storing it, so an effect edge is needed between the load and the addition. The effect edges thus make sure that load -> add -> store happen in this order. Effect edges only show up in the graph when an operation changes the state of a variable or object in the program. Another way to look at effects: if node X has an effect output feeding another node, the other node knows it cannot do anything until X has completed its work.
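The ordering described above can be modeled with a toy effect chain. This is a hypothetical sketch (plain objects, not V8's actual node data structures) where each node records the node it depends on through its effect input:

```javascript
// Toy nodes for `ob.a = ob.a + 1`, chained through effect inputs.
const load  = { op: "LoadField",  effectInput: null };
const add   = { op: "NumberAdd",  effectInput: load }; // must run after the load
const store = { op: "StoreField", effectInput: add };  // must run after the add

// Walking the effect chain backwards from the store recovers the
// required execution order.
function effectChain(node) {
  const order = [];
  for (let n = node; n !== null; n = n.effectInput) order.unshift(n.op);
  return order;
}
```

Walking from the store yields LoadField -> NumberAdd -> StoreField, which is exactly the ordering the effect edges enforce.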

Different V8 phase

Graph builder phase

The graph builder is not an optimization phase, but it produces the largest number of nodes. This is the first generated graph; we can view it by selecting the bytecode graph builder option in Turbolizer. Using the bytecode, it builds a graph of JavaScript nodes such as JSAdd, JSCallFunction, JSLoadProperty, IfTrue, IfFalse, etc.

Typer phase

After the graph builder finishes building the graph, the optimization phases start. The earliest optimization phase is TyperPhase, which is run by OptimizeGraph.

// src/compiler/pipeline.cc
bool PipelineImpl::OptimizeGraph(Linkage* linkage) {
  PipelineData* data = this->data_;
  data->BeginPhaseKind("V8.TFLowering");
  // Type the graph and keep the Typer running such that new nodes get
  // automatically typed when they are created.
  Run<TyperPhase>(data->CreateTyper());
  // ...
// src/compiler/pipeline.cc
struct TyperPhase {
  DECL_PIPELINE_PHASE_CONSTANTS(Typer)

  void Run(PipelineData* data, Zone* temp_zone, Typer* typer) {
    NodeVector roots(temp_zone);
    data->jsgraph()->GetCachedNodes(&roots);

    // Make sure we always type True and False. Needed for escape analysis.
    roots.push_back(data->jsgraph()->TrueConstant());
    roots.push_back(data->jsgraph()->FalseConstant());

    LoopVariableOptimizer induction_vars(data->jsgraph()->graph(),
                                         data->common(), temp_zone);
    if (FLAG_turbo_loop_variable) induction_vars.Run();

    // The typer inspects heap objects, so we need to unpark the local heap.
    UnparkedScopeIfNeeded scope(data->broker());
    typer->Run(roots, &induction_vars);
  }
};

As we can see in the code below, when the typer runs it visits every node of the graph, and each node is passed to the graph_reducer.ReduceNode() function to be reduced. There will be a separate post about the GraphReducer.

// src/compiler/typer.cc
void Typer::Run(const NodeVector& roots,
                LoopVariableOptimizer* induction_vars) {
  // An induction variable is a variable whose value is derived from the loop iteration 
  // variable's value 
  // or in simple form,
  // variable that gets increased or decreased by a fixed amount on every iteration of a loop
  // It is often variable i in for loop

  if (induction_vars != nullptr) {
    induction_vars->ChangeToInductionVariablePhis();
  }
  Visitor visitor(this, induction_vars);
  GraphReducer graph_reducer(zone(), graph(), tick_counter_, broker());
  graph_reducer.AddReducer(&visitor);
  for (Node* const root : roots) graph_reducer.ReduceNode(root);
  graph_reducer.ReduceGraph();
  // ...
}
class Typer::Visitor : public Reducer {
 public:
  //  ...
  Reduction Reduce(Node* node) override {
    if (node->op()->ValueOutputCount() == 0) return NoChange();
    return UpdateType(node, TypeNode(node));
  }
  //  ...

For instance, when TurboFan compiles the code, on every visit of a JSCall node TyperPhase calls JSCallTyper, and so on. In JSCallTyper there is a switch statement with a large number of cases, and most builtin functions are assigned an associated return Type. For instance, if the function being called is the builtin MathRandom, the expected return type is Type::PlainNumber.

// src/compiler/typer.cc
Type Typer::Visitor::JSCallTyper(Type fun, Typer* t) {
  if (!fun.IsHeapConstant() || !fun.AsHeapConstant()->Ref().IsJSFunction()) {
    return Type::NonInternal();
  }
  JSFunctionRef function = fun.AsHeapConstant()->Ref().AsJSFunction();
  if (!function.serialized()) {
    TRACE_BROKER_MISSING(t->broker(), "data for function " << function);
    return Type::NonInternal();
  }
  if (!function.shared().HasBuiltinId()) {
    return Type::NonInternal();
  }
  switch (function.shared().builtin_id()) {
    case Builtins::kMathRandom:
      return Type::PlainNumber();
    //  ...
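The switch in JSCallTyper essentially acts as a lookup table from builtin ids to assumed return types, falling back to NonInternal. A minimal sketch of that idea (hypothetical helper; only the MathRandom entry is taken from the code above):

```javascript
// Table mapping builtin ids to the Type the typer assumes their calls return.
const builtinReturnType = new Map([
  ["MathRandom", "PlainNumber"], // from the case above
]);

// Anything not recognized as a known builtin is typed NonInternal.
function jsCallTyper(builtinId) {
  return builtinReturnType.get(builtinId) ?? "NonInternal";
}
```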

For a NumberConstant node, in most cases the type will be a Range.

Type Typer::Visitor::TypeNumberConstant(Node* node) {
  double number = OpParameter<double>(node->op());
  return Type::Constant(number, zone());
}

Type Type::Constant(double value, Zone* zone) {
  if (RangeType::IsInteger(value)) {
    return Range(value, value, zone);
  } else if (IsMinusZero(value)) {
    return Type::MinusZero();
  } else if (std::isnan(value)) {
    return Type::NaN();
  }

  DCHECK(OtherNumberConstantType::IsOtherNumberConstant(value));
  return OtherNumberConstant(value, zone);
}
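The rule in Type::Constant can be mimicked in a few lines of JavaScript. This is a hypothetical sketch (plain objects, not V8's actual type representation): integer constants get a single-point Range, while -0 and NaN get their own singleton types.

```javascript
function typeOfConstant(value) {
  // Integers (but not -0) become a single-point Range.
  if (Number.isInteger(value) && !Object.is(value, -0)) {
    return { kind: "Range", min: value, max: value };
  }
  if (Object.is(value, -0)) return { kind: "MinusZero" };
  if (Number.isNaN(value))  return { kind: "NaN" };
  // Everything else (e.g. non-integer doubles) is an OtherNumberConstant.
  return { kind: "OtherNumberConstant", value };
}
```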

Type Lowering

This phase comes right after TyperPhase in OptimizeGraph.

bool PipelineImpl::OptimizeGraph(Linkage* linkage) {
  PipelineData* data = this->data_;
  data->BeginPhaseKind("V8.TFLowering");
  Run<TyperPhase>(data->CreateTyper());
  RunPrintAndVerify(TyperPhase::phase_name());
  Run<TypedLoweringPhase>();
  RunPrintAndVerify(TypedLoweringPhase::phase_name());

There are more reducers in this phase.

struct TypedLoweringPhase {
  DECL_PIPELINE_PHASE_CONSTANTS(TypedLowering)

  void Run(PipelineData* data, Zone* temp_zone) {
    // ...
    AddReducer(data, &graph_reducer, &dead_code_elimination);

    if (!data->info()->IsNativeContextIndependent()) {
      AddReducer(data, &graph_reducer, &create_lowering);
    }
    AddReducer(data, &graph_reducer, &constant_folding_reducer);
    AddReducer(data, &graph_reducer, &typed_lowering);
    AddReducer(data, &graph_reducer, &typed_optimization);
    AddReducer(data, &graph_reducer, &simple_reducer);
    AddReducer(data, &graph_reducer, &checkpoint_elimination);
    AddReducer(data, &graph_reducer, &common_reducer);
    // ...
  }
};

// AddReducer Function

void AddReducer(PipelineData* data, GraphReducer* graph_reducer,
                Reducer* reducer) {
  if (data->info()->source_positions()) {
    SourcePositionWrapper* const wrapper =
        data->graph_zone()->New<SourcePositionWrapper>(
            reducer, data->source_positions());
    reducer = wrapper;
  }
  if (data->info()->trace_turbo_json()) {
    NodeOriginsWrapper* const wrapper =
        data->graph_zone()->New<NodeOriginsWrapper>(reducer,
                                                    data->node_origins());
    reducer = wrapper;
  }

  graph_reducer->AddReducer(reducer);
}

For general understanding, we’ll inspect the TypedOptimization reducer, looking at the code inside TypedOptimization::Reduce. There is a switch statement with a huge number of cases; for instance, if the visited node’s opcode() is kSpeculativeNumberAdd, it calls the ReduceSpeculativeNumberAdd(node) function.

Reduction TypedOptimization::Reduce(Node* node) {
  DisallowHeapAccessIf no_heap_access(!FLAG_turbo_direct_heap_access);
  switch (node->opcode()) {
    case IrOpcode::kConvertReceiver:
      return ReduceConvertReceiver(node);
    case IrOpcode::kMaybeGrowFastElements:
      return ReduceMaybeGrowFastElements(node);
    case IrOpcode::kCheckHeapObject:
      return ReduceCheckHeapObject(node);
    // ...
    case IrOpcode::kSpeculativeNumberAdd:
      return ReduceSpeculativeNumberAdd(node);
    // ...
    default:
      break;
  }
  return NoChange();
}
// ReduceSpeculativeNumberAdd Function
Reduction TypedOptimization::ReduceSpeculativeNumberAdd(Node* node) {
  Node* const lhs = NodeProperties::GetValueInput(node, 0);
  Node* const rhs = NodeProperties::GetValueInput(node, 1);
  Type const lhs_type = NodeProperties::GetType(lhs);
  Type const rhs_type = NodeProperties::GetType(rhs);
  NumberOperationHint hint = NumberOperationHintOf(node->op());
  if ((hint == NumberOperationHint::kNumber ||
       hint == NumberOperationHint::kNumberOrOddball) &&
      BothAre(lhs_type, rhs_type, Type::PlainPrimitive()) &&
      NeitherCanBe(lhs_type, rhs_type, Type::StringOrReceiver())) {
    // SpeculativeNumberAdd(x:-string, y:-string) =>
    //     NumberAdd(ToNumber(x), ToNumber(y))
    Node* const toNum_lhs = ConvertPlainPrimitiveToNumber(lhs);
    Node* const toNum_rhs = ConvertPlainPrimitiveToNumber(rhs);
    Node* const value =
        graph()->NewNode(simplified()->NumberAdd(), toNum_lhs, toNum_rhs);
    ReplaceWithValue(node, value);
    return Replace(value);
  }
  return NoChange();
}

In ReduceSpeculativeNumberAdd, if the node carries a NumberOperationHint::kNumber (or kNumberOrOddball) hint, both lhs_type and rhs_type are Type::PlainPrimitive, and neither can be a string or receiver, the SpeculativeNumberAdd node is replaced by NumberAdd.
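Semantically, the rewrite comment above (SpeculativeNumberAdd(x:-string, y:-string) => NumberAdd(ToNumber(x), ToNumber(y))) means that once both inputs are known to be plain primitives, the addition behaves like a pure number add over ToNumber of each operand. A sketch of that semantics (hypothetical helper name):

```javascript
// Once x and y are plain primitives that cannot be strings or receivers,
// x + y is equivalent to NumberAdd(ToNumber(x), ToNumber(y)).
function numberAddOfPlainPrimitives(x, y) {
  return Number(x) + Number(y); // ToNumber on each operand, then a pure add
}
```

This also covers the oddball cases from the NumberOrOddball feedback: ToNumber(null) is 0, ToNumber(true) is 1, and ToNumber(undefined) is NaN.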

Also, in JSTypedLowering::ReduceJSCall, when the JSTypedLowering reducer visits a JSCall node, TurboFan simply creates a LoadField node and replaces the opcode of the JSCall node with a Call opcode (ChangeOp means "change opcode").

Reduction JSTypedLowering::ReduceJSCall(Node* node) {
    // [...]
    // Load the context from the {target}.
    Node* context = effect = graph()->NewNode(
        simplified()->LoadField(AccessBuilder::ForJSFunctionContext()), target,
        effect, control);
    NodeProperties::ReplaceContextInput(node, context);

    // Update the effect dependency for the {node}.
    NodeProperties::ReplaceEffectInput(node, effect);

    // [...]
    } else if (shared->HasBuiltinId()) {
      DCHECK(Builtins::HasJSLinkage(shared->builtin_id()));
      // Patch {node} to a direct code object call.
      Callable callable = Builtins::CallableFor(
          isolate(), static_cast<Builtins::Name>(shared->builtin_id()));
      CallDescriptor::Flags flags = CallDescriptor::kNeedsFrameState;

      const CallInterfaceDescriptor& descriptor = callable.descriptor();
      auto call_descriptor = Linkage::GetStubCallDescriptor(
          graph()->zone(), descriptor, 1 + arity, flags);
      Node* stub_code = jsgraph()->HeapConstant(callable.code());
      node->RemoveInput(n.FeedbackVectorIndex());
      node->InsertInput(graph()->zone(), 0, stub_code);  // Code object.
      node->InsertInput(graph()->zone(), 2, new_target);
      node->InsertInput(graph()->zone(), 3, jsgraph()->Constant(arity));
      NodeProperties::ChangeOp(node, common()->Call(call_descriptor));
    }
    // [...]
    return Changed(node);
  }

Range Types

Sample code from doar-e.github.io

function opt_me(b) {
  let x = 10; // [1] x0 = 10
  if (b == "foo")
    x = 5; // [2] x1 = 5
  
  let y = x + 2; // [3] x2 = phi(x0, x1)
  y = y + 1000; 
  y = y * 2;
  return y;
}

SSA is a property of an IR (intermediate representation).

In the sample above, when opt_me is called, x is initially set to 10. If the parameter b equals "foo", x is set to 5, so depending on the if statement, x will be either 10 or 5. However, in SSA (Static Single Assignment) form each variable must be assigned exactly once, so the values 10 and 5 get their own names, x0 and x1 respectively. As a result, a phi function is needed at line [3], because x must be either x0 or x1: the commented statement x2 = phi(x0, x1) says that x2 takes the value of exactly one of x0 or x1. In addition, the type of the constant 10 (x0) is Range(10, 10) and the type of the constant 5 (x1) is Range(5, 5), so the type of the phi is the union of the two ranges, i.e., Range(5, 10).

Let’s inspect this in more detail by looking at the code.

Type Typer::Visitor::TypePhi(Node* node) {
  // arity is number of argument or operand taken by the function or operation
  int arity = node->op()->ValueInputCount();
  Type type = Operand(node, 0);
  for (int i = 1; i < arity; ++i) {
    type = Type::Union(type, Operand(node, i), zone());
  }
  return type;
}

In the code above, TypePhi just unions the type of the first operand with the types of all remaining operands and returns the result.
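For the opt_me example, the phi typing above boils down to a union of two single-point ranges. A sketch (hypothetical helper, modeling a Range type as a plain {min, max} object):

```javascript
// Union of two ranges is the smallest range covering both.
function unionRange(a, b) {
  return { min: Math.min(a.min, b.min), max: Math.max(a.max, b.max) };
}

const x0 = { min: 10, max: 10 }; // x = 10  => Range(10, 10)
const x1 = { min: 5,  max: 5  }; // x = 5   => Range(5, 5)
const x2 = unionRange(x0, x1);   // phi(x0, x1) => Range(5, 10)
```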

Let's inspect the typing of SpeculativeSafeIntegerAdd nodes, which is implemented in OperationTyper.

Type OperationTyper::SpeculativeSafeIntegerAdd(Type lhs, Type rhs) {
  Type result = SpeculativeNumberAdd(lhs, rhs);

  // If we have a Smi or Int32 feedback, the representation selection will
  // either truncate or it will check the inputs (i.e., deopt if not int32).
  // In either case the result will be in the safe integer range, so we
  // can bake in the type here. This needs to be in sync with
  // SimplifiedLowering::VisitSpeculativeAdditiveOp.
  return Type::Intersect(result, cache_->kSafeIntegerOrMinusZero, zone());
}
// In case of NumberAdd, return Name(lhs,rhs) turns into
// NumberAdd(lhs, rhs)
#define SPECULATIVE_NUMBER_BINOP(Name)                         \
  Type OperationTyper::Speculative##Name(Type lhs, Type rhs) { \
    lhs = SpeculativeToNumber(lhs);                            \
    rhs = SpeculativeToNumber(rhs);                            \
    return Name(lhs, rhs);                                     \
  }

  SPECULATIVE_NUMBER_BINOP(NumberAdd)
// src/compiler/operation-typer.cc
// Actual Number Add function
Type OperationTyper::NumberAdd(Type lhs, Type rhs) {
  // [...]
  // We can give more precise types for integers.
  Type type = Type::None();
  lhs = Type::Intersect(lhs, Type::PlainNumber(), zone());
  rhs = Type::Intersect(rhs, Type::PlainNumber(), zone());
  if (!lhs.IsNone() && !rhs.IsNone()) {
    if (lhs.Is(cache_->kInteger) && rhs.Is(cache_->kInteger)) {
      type = AddRanger(lhs.Min(), lhs.Max(), rhs.Min(), rhs.Max());
    } 
  // [...]
  // Take into account the -0 and NaN information computed earlier.
  if (maybe_minuszero) type = Type::Union(type, Type::MinusZero(), zone());
  if (maybe_nan) type = Type::Union(type, Type::NaN(), zone());
  return type;
}

The AddRanger function computes the min and max bounds of the resulting Range.

Type OperationTyper::AddRanger(double lhs_min, double lhs_max, double rhs_min,
                               double rhs_max) {
  // lhs = (5,10)
  // rhs = (2,3)
  // lhs_min = 5, lhs_max = 10
  // rhs_min = 2, rhs_max = 3
  double results[4];
  results[0] = lhs_min + rhs_min; // 5 + 2 = 7
  results[1] = lhs_min + rhs_max; // 5 + 3 = 8
  results[2] = lhs_max + rhs_min; // 10 + 2 = 12
  results[3] = lhs_max + rhs_max; // 10 + 3 = 13
  // Since none of the inputs can be -0, the result cannot be -0 either.
  // However, it can be nan (the sum of two infinities of opposite sign).
  // On the other hand, if none of the "results" above is nan, then the
  // actual result cannot be nan either.
  int nans = 0;
  for (int i = 0; i < 4; ++i) {
    if (std::isnan(results[i])) ++nans;
  }
  if (nans == 4) return Type::NaN();
  // array_min = 7, array_max = 13
  // Range(7, 13)
  Type type = Type::Range(array_min(results, 4), array_max(results, 4), zone());
  if (nans > 0) type = Type::Union(type, Type::NaN(), zone());
  // Examples:
  //   [-inf, -inf] + [+inf, +inf] = NaN
  //   [-inf, -inf] + [n, +inf] = [-inf, -inf] \/ NaN
  //   [-inf, +inf] + [n, +inf] = [-inf, +inf] \/ NaN
  //   [-inf, m] + [n, +inf] = [-inf, +inf] \/ NaN
  return type;
}
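The same corner-sum logic can be sketched in runnable JavaScript (hypothetical helper mirroring the C++ above; Range and NaN types modeled as plain objects):

```javascript
// The result range of lhs + rhs is the min/max over the four corner sums,
// with NaN (infinities of opposite sign) handled separately.
function addRanger(lhsMin, lhsMax, rhsMin, rhsMax) {
  const results = [lhsMin + rhsMin, lhsMin + rhsMax,
                   lhsMax + rhsMin, lhsMax + rhsMax];
  const nans = results.filter(Number.isNaN).length;
  if (nans === 4) return { kind: "NaN" };
  const finite = results.filter(r => !Number.isNaN(r));
  const range = { kind: "Range",
                  min: Math.min(...finite), max: Math.max(...finite) };
  // If any corner sum is NaN, the result may be NaN too.
  return nans > 0 ? { kind: "Union", of: [range, { kind: "NaN" }] } : range;
}
```

For the example in the comments, Range(5, 10) + Range(2, 3) gives Range(7, 13), matching the corner sums 7, 8, 12, 13.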

CheckBounds nodes

  1. To prevent out-of-bounds array accesses, a CheckBounds node is added.
  2. CheckBounds simply compares input edge 0 (the index) against input edge 1 (the length) and makes sure that the index is less than the length.
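What a CheckBounds node guarantees can be sketched as follows (hypothetical helper; deoptimization is modeled by throwing):

```javascript
// Index must be in [0, length); otherwise the optimized code bails out.
function checkBounds(index, length) {
  if (!(index >= 0 && index < length)) {
    throw new Error("Deoptimize: kOutOfBounds");
  }
  return index; // CheckBounds passes the index through on success
}
```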

Simplified Lowering

In simplified lowering, while visiting a node (VisitNode), there is a switch statement with a huge number of cases. For CheckBounds, if the opcode of the node is IrOpcode::kCheckBounds, the function VisitCheckBounds is called.

// Dispatching routine for visiting the node {node} with the usage {use}.
  // Depending on the operator, propagate new usage info to the inputs.
  template <Phase T>
  void VisitNode(Node* node, Truncation truncation,
                 SimplifiedLowering* lowering) {
    tick_counter_->TickAndMaybeEnterSafepoint();

    // Unconditionally eliminate unused pure nodes (only relevant if there's
    // a pure operation in between two effectful ones, where the last one
    // is unused).
    // Note: We must not do this for constants, as they are cached and we
    // would thus kill the cached {node} during lowering (i.e. replace all
    // uses with Dead), but at that point some node lowering might have
    // already taken the constant {node} from the cache (while it was not
    // yet killed) and we would afterwards replace that use with Dead as well.
    if (node->op()->ValueInputCount() > 0 &&
        node->op()->HasProperty(Operator::kPure) && truncation.IsUnused()) {
      return VisitUnused<T>(node);
    }

    if (lower<T>()) InsertUnreachableIfNecessary<T>(node);

    switch (node->opcode()) {
      //------------------------------------------------------------------
      // Common operators.
      //------------------------------------------------------------------
      case IrOpcode::kStart:
        // We use Start as a terminator for the frame state chain, so even
        // tho Start doesn't really produce a value, we have to say Tagged
        // here, otherwise the input conversion will fail.
        return VisitLeaf<T>(node, MachineRepresentation::kTagged);
      // [...]
      case IrOpcode::kCheckBounds:
        return VisitCheckBounds<T>(node, lowering);
      // [...]

As we can see in VisitCheckBounds, index_type is the type of input edge 0 and length_type is the type of input edge 1. If the minimum of index_type is greater than or equal to 0.0 and the maximum of index_type is less than the minimum of length_type, the bounds check is known to be redundant and new_flags gets CheckBoundsFlag::kAbortOnOutOfBounds. The check is not eliminated; instead, the opcode of the node is changed to CheckedUint32Bounds. Search for kCheckedUint32Bounds for more details.

  template <Phase T>
  void VisitCheckBounds(Node* node, SimplifiedLowering* lowering) {
    CheckBoundsParameters const& p = CheckBoundsParametersOf(node->op());
    FeedbackSource const& feedback = p.check_parameters().feedback();
    Type const index_type = TypeOf(node->InputAt(0));
    Type const length_type = TypeOf(node->InputAt(1));

    // Conversions, if requested and needed, will be handled by the
    // representation changer, not by the lower-level Checked*Bounds operators.
    CheckBoundsFlags new_flags =
        p.flags().without(CheckBoundsFlag::kConvertStringAndMinusZero);

    if (length_type.Is(Type::Unsigned31())) {
      if (index_type.Is(Type::Integral32()) ||
          (index_type.Is(Type::Integral32OrMinusZero()) &&
           p.flags() & CheckBoundsFlag::kConvertStringAndMinusZero)) {
        // Map the values in the [-2^31,-1] range to the [2^31,2^32-1] range,
        // which will be considered out-of-bounds because the {length_type} is
        // limited to Unsigned31. This also converts -0 to 0.
        VisitBinop<T>(node, UseInfo::TruncatingWord32(),
                      MachineRepresentation::kWord32);
        if (lower<T>()) {
          if (lowering->poisoning_level_ ==
                  PoisoningMitigationLevel::kDontPoison &&
              (index_type.IsNone() || length_type.IsNone() ||
               (index_type.Min() >= 0.0 &&
                index_type.Max() < length_type.Min()))) {
            // The bounds check is redundant if we already know that
            // the index is within the bounds of [0.0, length[.
            // TODO(neis): Move this into TypedOptimization?
            new_flags |= CheckBoundsFlag::kAbortOnOutOfBounds;
          }
          ChangeOp(node,
                   simplified()->CheckedUint32Bounds(feedback, new_flags));
        }
      }
      // [...]
  }

In EffectControlLinearizer::LowerCheckedUint32Bounds, CheckedUint32Bounds is lowered to a Uint32LessThan comparison, and the flags determine what happens on an out-of-bounds index. There is an if/else in the following code: if params.flags() does not contain CheckBoundsFlag::kAbortOnOutOfBounds, it calls DeoptimizeIfNot, which deoptimizes with DeoptimizeReason::kOutOfBounds whenever the comparison fails. Otherwise, instead of deoptimizing, the out-of-bounds path branches to an Unreachable node.

// src/compiler/effect-control-linearizer.cc
Node* EffectControlLinearizer::LowerCheckedUint32Bounds(Node* node,
                                                        Node* frame_state) {
  Node* index = node->InputAt(0);
  Node* limit = node->InputAt(1);
  const CheckBoundsParameters& params = CheckBoundsParametersOf(node->op());

  Node* check = __ Uint32LessThan(index, limit);
  if (!(params.flags() & CheckBoundsFlag::kAbortOnOutOfBounds)) {
    __ DeoptimizeIfNot(DeoptimizeReason::kOutOfBounds,
                       params.check_parameters().feedback(), check, frame_state,
                       IsSafetyCheck::kCriticalSafetyCheck);
  } else {
    auto if_abort = __ MakeDeferredLabel();
    auto done = __ MakeLabel();

    __ Branch(check, &done, &if_abort);

    __ Bind(&if_abort);
    __ Unreachable(&done);

    __ Bind(&done);
  }

  return index;
}

During instruction selection Unreachable nodes are replaced by breakpoint opcodes.

// src/compiler/backend/instruction-selector.cc
void InstructionSelector::VisitUnreachable(Node* node) {
  OperandGenerator g(this);
  Emit(kArchDebugBreak, g.NoOutput());
}

// Emit function
Instruction* InstructionSelector::Emit(
    InstructionCode opcode, size_t output_count, InstructionOperand* outputs,
    size_t input_count, InstructionOperand* inputs, size_t temp_count,
    InstructionOperand* temps) {
  if (output_count >= Instruction::kMaxOutputCount ||
      input_count >= Instruction::kMaxInputCount ||
      temp_count >= Instruction::kMaxTempCount) {
    set_instruction_selection_failed();
    return nullptr;
  }
  // opcode = KArchDebugBreak
  Instruction* instr =
      Instruction::New(instruction_zone(), opcode, output_count, outputs,
                       input_count, inputs, temp_count, temps);
  return Emit(instr);
}

Deoptimization Kind

// Deoptimize bailout kind:
// - Eager: a check failed in the optimized code and deoptimization happens
//   immediately.
// - Lazy: the code has been marked as dependent on some assumption which
//   is checked elsewhere and can trigger deoptimization the next time the
//   code is executed.
// - Soft: similar to lazy deoptimization, but does not contribute to the
//   total deopt count which can lead to disabling optimization for a function.

References

https://doar-e.github.io/blog/2019/01/28/introduction-to-turbofan/