Tuesday, November 20, 2012

Range Minimum Query

Range minimum query is the problem of answering, for a large integer array a[0:n), the minimum over an arbitrary range of the form [lo, hi]. Without pre-processing, each query takes O(n) time, which is unacceptable if queries are frequent. Another straightforward approach is to memorize the answer to every range and answer each query with a table lookup. Though this gives an O(1) query operation, the pre-processing takes \(O(n^2)\) time. More problematic, the space complexity is also \(O(n^2)\): for an array on the order of \(10^6\) elements, the table already exceeds the memory of an ordinary commodity machine. Our goal is to derive an algorithm whose query answering is fast enough while its pre-processing is not too expensive.

We first identify the API of the problem:
void preprocess(int[] a);

int min(int lo, int hi);

We write the complexity explicitly in two parts: pre-processing and query answering. For instance, answering queries directly without pre-processing has complexity \(O(1)-O(n)\), whereas the memorize-all approach has complexity \(O(n^2)-O(1)\).
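For concreteness, here is a minimal sketch of the two baselines behind this notation; the field and variable names are my own choices, not from the original text.

// Baseline 1: no pre-processing, O(1)-O(n).
int[] a;

void preprocess(int[] a) {
  this.a = a; // nothing to compute up front
}

int min(int lo, int hi) {
  int m = a[lo];
  for (int i = lo + 1; i <= hi; i++) {
    m = Math.min(m, a[i]);
  }
  return m;
}

// Baseline 2: memorize all ranges, O(n^2)-O(1).
int[][] answer; // answer[lo][hi] = minimum of a[lo..hi]

void preprocess(int[] a) {
  int n = a.length;
  answer = new int[n][n];
  for (int lo = 0; lo < n; lo++) {
    answer[lo][lo] = a[lo];
    for (int hi = lo + 1; hi < n; hi++) {
      answer[lo][hi] = Math.min(answer[lo][hi - 1], a[hi]);
    }
  }
}

int min(int lo, int hi) {
  return answer[lo][hi];
}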

1. Disjoint Segment Algorithm \(O(n)-O(\sqrt{n})\)

The first algorithm comes from an interview. The idea is to divide the whole array into segments and maintain a summary (in our case, the minimum) of each segment. To answer a query, we gather all the segments overlapping with the query range and combine their summaries. Because fully covered segments were already aggregated during pre-processing, that computation is saved. One question comes to mind: how do we determine the length of a segment? A range covers some segments completely and additionally overlaps with at most two segments partially. We can therefore choose the segment length to minimize the estimated cost of a query; choosing each segment to be of length \(\sqrt{n}\) balances the two costs, as the estimate below shows.
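To make the trade-off explicit, here is a rough cost estimate (a back-of-the-envelope sketch, not a tight bound): with segment length \(s\), a query combines at most \(n/s\) segment summaries and scans at most \(2s\) elements in the two partially covered end segments, so the cost is roughly

\[ \frac{n}{s} + 2s, \]

which is minimized, up to a constant factor, at \(s = \Theta(\sqrt{n})\), giving the \(O(\sqrt{n})\) query time.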

int[] a;          // the input array, kept for scanning partial segments
int[] segmentMin; // minimum of each segment
int segmentLen;   // length of a segment

void preprocess(int[] a) { 
  assert a != null && a.length != 0; 
  this.a = a;
  int len = a.length;
  segmentLen = (int) Math.sqrt(len);

  // number of segments, rounding up
  segmentMin = new int[(len + segmentLen - 1) / segmentLen];

  for (int i = 0; i*segmentLen < len; i++) {
       // invariant: already processed i segments
       int min = a[i*segmentLen];
       int upper = Math.min((i+1)*segmentLen, len);
       for (int j = i*segmentLen + 1; j < upper; j++) {
           min = Math.min(min, a[j]);
       }
       // assert min == range_min[i*segmentLen,
       //               (i+1)*segmentLen);
       segmentMin[i] = min;
  }
}

int min(int lo, int hi) {
  assert hi >= lo;
  int rangeMin = a[lo];

  // assert segmentLen != 0;
  // first fully covered segment, and one past the last fully covered segment
  int startSegment = (lo + segmentLen - 1) / segmentLen;
  int endSegment = (hi + 1) / segmentLen;

  // collect summaries from fully covered segments
  for (int i = startSegment; i < endSegment; i++) {
      rangeMin = Math.min(rangeMin, segmentMin[i]);
  }

  // partial overlap on the left, clamped to the query range
  for (int i = lo; i < Math.min(startSegment*segmentLen, hi + 1); i++) {
      rangeMin = Math.min(rangeMin, a[i]);
  }

  // partial overlap on the right, clamped to the query range
  for (int i = Math.max(endSegment*segmentLen, lo); i <= hi; i++) {
      rangeMin = Math.min(rangeMin, a[i]);
  }

  return rangeMin;
}

In the pre-processing phase, we scan the whole array once, so the complexity is \(O(n)\). Each query collects at most \(O(\sqrt{n})\) summaries and scans two partially covered segments, each of length at most \(\sqrt{n}\). Therefore, a query takes \(O(\sqrt{n})\) time.
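As a usage sketch, assume the fields and methods above are wrapped in a class; the class name DisjointSegmentRMQ is my own invention, and the same API applies to the two algorithms below.

DisjointSegmentRMQ rmq = new DisjointSegmentRMQ(); // hypothetical wrapper class
rmq.preprocess(new int[] {5, 2, 8, 1, 9, 3, 7, 4});
System.out.println(rmq.min(2, 6)); // prints 1, the minimum of {8, 1, 9, 3, 7}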
  

2. Variable Length Summary Algorithm \(O(n\log n)-O(1)\)

I saw this algorithm in Rujia Liu's book on OI. Compared to the disjoint segment algorithm, it stores more summaries, over overlapping segments, and the larger number of summaries buys better query answering performance. In one sentence: at each index i we store the range minimum of the segment starting at i whose length is of the form \(2^j\). As there are n positions in the array and \(O(\log n)\) different length options, all the summaries fit in a table of size \(n\log n\). To process a query [lo, hi], we first determine the length of the range, say L. Let k be \(\lfloor \log L \rfloor\); the range minimum is then the minimum of the two summaries at positions (lo, k) and (hi + 1 - \(2^k\), k), whose two segments together cover the whole range. For example, for the query [2, 7] we have L = 6 and k = 2, so the answer is the minimum of the summaries at (2, 2) and (4, 2), which cover [2, 5] and [4, 7].
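The table itself is built bottom-up with the recurrence below, which is just the computation performed by the code that follows, written as a formula (indices running past the end of the array are clamped to the last position, as getSummary does):

\[ \mathrm{summary}[i][0] = a[i], \qquad \mathrm{summary}[i][l] = \min\bigl(\mathrm{summary}[i][l-1],\ \mathrm{summary}[i + 2^{l-1}][l-1]\bigr). \]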
We now give the complete code in Java.
int[][] summary; // table of the form summary[beginIndex][lengthIndex]
int length; // array length
int level; // maximal exponent of a range length

void preprocess(int[] a)  {
  assert a != null && a.length > 0;
  length = a.length;
  level = (int) Math.ceil(Math.log(length)/Math.log(2));
  summary = new int[length][level+1];
  
  for (int i = 0; i < length; i++) {
    summary[i][0] = a[i];
  }  

  // build summary bottom-up
  for (int l = 1; l < level + 1; l++) {
    // invariant: already built l levels
    for (int i = 0; i < length; i++) {
      summary[i][l] = Math.min(getSummary(i, l-1), getSummary(i + (1 << (l-1)), l-1));
    } 
  }
  
}
// handle overflow index
int getSummary(int pos, int level) {
  if (pos >= length) {
    pos = length - 1;
  } 
  return summary[pos][level];
}

int min(int lo, int hi) {
  assert hi >= lo;
  if (hi == lo) return summary[lo][0]; // a single element
  int rangeLen = hi+1-lo;
  assert rangeLen > 1;
  int halfRangeLevel = (int) Math.ceil(Math.log(rangeLen)/Math.log(2)) - 1;
  assert halfRangeLevel >= 0;
  return Math.min(getSummary(lo, halfRangeLevel),
                  getSummary(hi + 1 - (1 << halfRangeLevel), halfRangeLevel)
                 );
}
Since each range query incurs only two table lookups, query answering is \(O(1)\). The crux of the \(O(n\log n)\) pre-processing time lies in the fact that we compute the summaries bottom-up with dynamic programming: each of the \(n\log n\) table entries is derived from two entries of the previous level in constant time.

3. Hierarchical Summary Algorithm \(O(n)-O(\log n)\)

This algorithm is inspired by an exercise in Udi Manber's book on algorithms. The basic idea is to organize hierarchical summaries in a tree, with the array elements as the leaves. Building the tree bottom-up requires \(O(n)\) time and space, while a query requires \(O(\log n)\) time to navigate the tree. Unlike the disjoint segment algorithm, the summary ranges are not disjoint; and unlike the variable length summary algorithm, the summaries form a hierarchy in which a parent summarizes the ranges of its two children, so the summaries form a tree. To make the bottom-up construction convenient, we use linked lists to remember the nodes on each level.
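As a small illustration (my own example, not from the original text), for a = {5, 2, 8, 1} the summary tree looks as follows, each node labeled with its half-open range and its minimum:

              [0,4):1
             /       \
        [0,2):2     [2,4):1
        /     \     /     \
   [0,1):5 [1,2):2 [2,3):8 [3,4):1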
class Node {
  int min; // range min
  int lo, hi; // remembering range [lo, hi)
  Node left, right;
  Node(int l, int h, int m, Node left, Node right) {
    lo = l; hi = h; min = m; this.left = left; this.right = right;
  }
  Node(int l, int h, int m) {
    lo = l; hi = h; min = m; 
  }
}


Node root;

// uses java.util.LinkedList and java.util.Iterator
void preprocess(int[] a) {
  assert a != null && a.length > 0;
  int level = (int) Math.ceil(Math.log(a.length)/Math.log(2));
  LinkedList<Node>[] nodesByLevel = new LinkedList[level + 1];
  nodesByLevel[0] = new LinkedList<Node>();
  for (int i = 0; i < a.length; i++) {
    nodesByLevel[0].add(new Node(i, i+1, a[i]));
  }

  for (int l = 1; l < level + 1; l++) {
    // invariant: already processed l levels
    // invariant: level l summarizes ranges of length 2^l (except at the boundary)
    LinkedList<Node> pre = nodesByLevel[l-1];
    nodesByLevel[l] = new LinkedList<Node>(); // current level
    Iterator<Node> it = pre.iterator();
    while (it.hasNext()) {
      Node first = it.next();
      Node second = it.hasNext() ? it.next() : null; // the last node may be unpaired
      int lo = first.lo;
      int hi = second == null ? first.hi : second.hi;
      int min = second == null ? first.min : Math.min(first.min, second.min);
      nodesByLevel[l].add(new Node(lo, hi, min, first, second));
    }
  }
  root = nodesByLevel[level].getFirst(); // the top level contains a single node
}

int min(int lo, int hi) {
  return min(root, lo, hi+1);
}

// internally we use [lo, hi)
int min(Node node, int lo, int hi) {
  if (node == null) return Integer.MAX_VALUE;
  if (!isOverlapping(node, lo, hi)) {
    return Integer.MAX_VALUE;
  }
  else if ( isCovered(node, lo, hi) ) {
    return node.min;
  }

  // assert node range partially overlaps with query range
  // or query range covered by node range
  int leftMin = min(node.left, lo, hi);
  int rightMin = min(node.right, lo, hi);

  return Math.min(leftMin, rightMin);
}

// is the node range overlapping with [lo, hi)
boolean isOverlapping(Node node, int lo, int hi) {
  assert node != null;
  return node.hi > lo && node.lo < hi;
}
// Is the range of the node covered by [lo, hi)
boolean isCovered(Node node, int lo, int hi) {
  assert node != null;
  return lo <= node.lo && hi >= node.hi; 
}
First, the pre-processing runs in \(O(n)\) time, because the tree has \(O(n)\) nodes and each node is created in constant time. Seeing that a query runs in \(O(\log n)\) time is more involved. If a node's range is fully covered by the query, or disjoint from it, the call returns directly and incurs no further cost. The problem arises when the query partially overlaps with both children of a node, which triggers two recursive calls. The key observation is that this can happen at most once along the recursion: below that node, each recursive call follows only one end of the query range, so at every node at most one child requires further recursion while the other is either disjoint or fully covered. Intuitively, the query walks down the summary tree toward the two ends of the range without exploring the internal nodes in between. As the tree has height \(O(\log n)\), the query also takes \(O(\log n)\) time.
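To gain confidence in the three implementations, one can cross-check them against a brute-force scan on random inputs. The sketch below assumes each algorithm is wrapped in its own class exposing the preprocess/min API above; the class names (DisjointSegmentRMQ, VariableLengthSummaryRMQ, HierarchicalSummaryRMQ) are my own, not from the original text.

// uses java.util.Random
void crossCheck() {
  Random rnd = new Random(42);
  int n = 1000;
  int[] a = new int[n];
  for (int i = 0; i < n; i++) a[i] = rnd.nextInt(1000000);

  DisjointSegmentRMQ seg = new DisjointSegmentRMQ();               // hypothetical wrappers
  VariableLengthSummaryRMQ table = new VariableLengthSummaryRMQ();
  HierarchicalSummaryRMQ tree = new HierarchicalSummaryRMQ();
  seg.preprocess(a); table.preprocess(a); tree.preprocess(a);

  for (int t = 0; t < 10000; t++) {
    int lo = rnd.nextInt(n);
    int hi = lo + rnd.nextInt(n - lo);
    int expected = a[lo];                       // brute-force reference answer
    for (int i = lo + 1; i <= hi; i++) expected = Math.min(expected, a[i]);
    assert seg.min(lo, hi) == expected;
    assert table.min(lo, hi) == expected;
    assert tree.min(lo, hi) == expected;
  }
}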
