tag:blogger.com,1999:blog-52526647665545320442018-03-07T01:31:11.578+01:00RetrospectsIn order not to forget.Xianghttp://www.blogger.com/profile/13023740612959081805noreply@blogger.comBlogger10125tag:blogger.com,1999:blog-5252664766554532044.post-89991714584226334572013-01-10T15:34:00.000+01:002013-01-10T16:45:17.242+01:00Longest Text Producible in EditorI saw an unsolved problem at http://intearview.com:<br /><h2>Problem</h2>You are provided with four possible operations that can be done on the editor (each operation requires 1 keyboard hit):<br /><ol><li>A </li><li>Ctrl+A </li><li>Ctrl+C </li><li>Ctrl+V</li></ol>Now you can hit the keyboard N times and you need to find the maximum number of A's that can be printed. Also print the sequence of keyboard hits.<br /><br /><h2>Algorithm</h2>The first intuition on seeing this problem is that we should use dynamic programming, as the problem has a clear structure of sub-problems. A straightforward attempt is to distinguish the cases by the last operation. Nevertheless, difficulty arises when the last operation is Ctrl+V and the second-to-last operation is not Ctrl+C.<br /><br />To solve this problem, we have to further dissect the properties of an optimal edit sequence:<br /><ol><li>Ctrl+A always immediately precedes Ctrl+C; otherwise the sequence cannot be optimal.</li><li>If a Ctrl+V follows an A, it is safe to push the Ctrl+V backward to bypass the A (the resulting text has the same length).</li><li>If a combo Ctrl+A Ctrl+C precedes an A, it is better to push the combo forward to bypass the A. 
That is, the original edit sequence cannot be optimal.</li></ol>In summary, an optimal edit sequence can be shaped into a normal form, in which the Ctrl+V's are grouped together and a combo Ctrl+A Ctrl+C immediately precedes each group.<br /><br />As a consequence, if we denote the longest text producible by n operations by C(n), the recursion can be expressed as:<br />$$C(n) = \max\Big(1+C(n-1),\ \max_{i}\big((i+1)\times C(n-2-i)\big)\Big)$$ <br />in which \(1+C(n-1)\) covers the case where the last operation is A, and the latter part covers the case where a continuous block of i Ctrl+V's ends the optimal edit sequence (the block plus its combo costs i+2 operations). Since we don't know the optimal size of the block, we have to do a linear scan, which amounts to an algorithm of \(O(n^2)\).<br /><br /><h2>Code</h2><br /><pre class="prettyprint lang-java"><br />static String[] OPERATIONS = {"A", "Ctrl+A", "Ctrl+C", "Ctrl+V"};<br /><br />int longestText(int n) {<br /> assert n > 0;<br /> int[] length = new int[n + 1];<br /> length[0] = 0;<br /> length[1] = 1;<br /><br /> int[] optBlockSizes = new int[n+1]; // optimal size of Ctrl+V block ending at each position<br /> optBlockSizes[1] = 0; // no Ctrl+V block, i.e., A<br /><br /> for (int i = 2; i < n+1; i++) { // for each length of edit sequence<br /> int len = length[i-1] + 1;<br /> optBlockSizes[i] = 0; // i.e., A<br /> for (int j = 1; j < i-2; j++) { // for each size of Ctrl+V block<br /> int newLen = (j+1)*length[i-2-j];<br /> if (newLen > len) {<br /> len = newLen;<br /> optBlockSizes[i] = j;<br /> }<br /> }<br /> length[i] = len; // memorize the best length for i operations<br /> }<br /> <br /> // print the edit sequence <br /> printEditSequence(optBlockSizes);<br /><br /> return length[n];<br />}<br /><br />void printEditSequence(int[] pastBlocks) {<br /> // the array is [0,n], with the first element unused<br /> int n = pastBlocks.length - 1;<br /> String[] editSequence = new String[n+1];<br /> printEditSequence(editSequence, pastBlocks, n);<br />}<br /><br />// last is the largest unprocessed element index<br />void printEditSequence(String[] 
sequence, int[] blocks, int last) {<br /> if (last == 0) { // print sequence[1:n]<br /> for (int i = 1; i < sequence.length-1; i++) {<br /> System.out.print(sequence[i]+", ");<br /> }<br /> System.out.println(sequence[sequence.length-1]);<br /> }<br /> else {<br /> if (blocks[last] == 0) {<br /> sequence[last] = "A";<br /> printEditSequence(sequence, blocks, last-1);<br /> }<br /> else {<br /> int blockSize = blocks[last];<br /> for (int i = last; i > last - blockSize; i--) { // fill the Ctrl+V block<br /> sequence[i] = "Ctrl+V";<br /> } <br /> sequence[last-blockSize-1] = "Ctrl+A";<br /> sequence[last-blockSize] = "Ctrl+C";<br /> printEditSequence(sequence, blocks, last-blockSize-2);<br /> }<br /> }<br />}<br /><br /></pre>Xianghttp://www.blogger.com/profile/13023740612959081805noreply@blogger.com0tag:blogger.com,1999:blog-5252664766554532044.post-26417591436558934482012-11-28T12:16:00.000+01:002012-11-28T13:36:08.357+01:00Fast ExponentiationComputing the power of a number is quite common. A simple form that appears frequently in coding interviews is computing powers of integers.<br /> <div class="bordered">Problem: given two integers n and k, compute the number \( n^k \).</div> One well-known algorithm is to perform exponentiation by repeated squaring. I saw the following pseudo code in Udi Manber's book: <pre class='prettyprint'><br />Input: n and k <br />Output: P<br />P := n;<br />j := k;<br />while j > 1 do<br /> P := P*P;<br /> if j mod 2 = 1 then<br /> P := P*n;<br /> j = j div 2;<br />end<br /></pre> The code is wrong, as can be seen by testing k = 5, n = 2: the above program produces 64, while 32 is the expected result. 
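Such claims are easy to check mechanically: the pseudo code can be transcribed into Java verbatim and run. The following harness is my own sketch (the class and method names are made up, not from Manber's book):

```java
public class BuggyPower {
    // Verbatim transcription of the pseudo code above.
    static long buggyPower(int n, int k) {
        long p = n;
        int j = k;
        while (j > 1) {
            p = p * p;          // square first
            if (j % 2 == 1) {
                p = p * n;      // extra factor for an odd exponent
            }
            j = j / 2;
        }
        return p;
    }

    public static void main(String[] args) {
        // 2^5 should be 32; the transcription disagrees
        System.out.println(buggyPower(2, 5));
    }
}
```

Comparing the printed value against 32 makes the broken loop invariant visible immediately.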
The simplest implementation of repeated squaring is to use recursion, which is hard to get wrong: <pre class = 'prettyprint lang-java'><br />long power(int n, int k) {<br /> assert n > 0 && k > 0;<br /> if (k == 1) return n;<br /><br /> if ((k & 1) == 1) { // odd; note that == binds tighter than & in Java<br /> return n * power(n, k-1);<br /> }<br /> else {<br /> long half = power(n, k/2);<br /> return half*half;<br /> }<br />}<br /></pre> The pseudo code we presented at the beginning is iterative, and hence more efficient. However, as no correct loop invariant is maintained in it, it fails to compute the desired value. A simple intuition for computing \(n^k\) is to scan the exponent k bit-wise from low to high. Whenever the current bit i is 1, we multiply in the corresponding factor \(n^{2^i}\). There are altogether \(\log_2(k)\) bits. <pre class = 'prettyprint lang-java'><br />long power_iterative(int n, int k) {<br /> assert n > 0 && k > 0;<br /> int b = k; // high bits to be processed<br /> long result = 1;<br /> long base = n;<br /> while (b != 0) {<br /> // invariant: result*(base^b) == n^k<br /> if ((b & 1) == 1) {<br /> result *= base;<br /> }<br /> base *= base;<br /> b >>= 1;<br /> }<br /> return result;<br />}<br /></pre>If the arguments are not guaranteed to be positive, we have to take extra care. <pre class = 'prettyprint lang-java'><br />double power(int n, int k) {<br /> if (n == 0 && k <= 0) <br /> throw new RuntimeException("For 0, only positive exponents are allowed!");<br /> if (k == 0) return 1;<br /> if (n == 0) return 0; <br /><br /> int sign = 1; <br /> if (n < 0) {<br /> sign = (k & 1) == 1 ? -1 : 1;<br /> n = 0 - n;<br /> }<br /><br /> long result = sign;<br /> result *= power_iterative(n, Math.abs(k));<br /> <br /> return k > 0 ? result : 1.0/result;<br />}<br /></pre> One application of fast exponentiation is to compute Fibonacci numbers. 
We recall that the Fibonacci number can be expressed in a matrix form: $$ \begin{bmatrix} f(n+1) \\ f(n) \end{bmatrix}= \begin{bmatrix} 1 & 1 \\ 1 & 0 \end{bmatrix}\times \begin{bmatrix} f(n) \\ f(n-1) \end{bmatrix} $$ If we denote by A the matrix \( \begin{bmatrix} 1 & 1 \\ 1 & 0 \end{bmatrix} \), then we know $$\begin{bmatrix} f(n+1) \\ f(n) \end{bmatrix} = A^{n}\times\begin{bmatrix} f(1) \\ f(0) \end{bmatrix}$$ <br />The algorithm is in O(log(n)), while an iterative accumulation algorithm takes O(n). <pre class = 'prettyprint'><br />class Matrix {<br /> long a, b, c, d;<br /> Matrix(long i, long j, long k, long l) {a=i; b=j; c=k; d=l;}<br /> Matrix(Matrix m) {a = m.a; b = m.b; c = m.c; d = m.d;}<br />}<br />Matrix multiply(Matrix n, Matrix m) {<br /> return new Matrix(n.a*m.a+n.b*m.c, n.a*m.b+n.b*m.d, n.c*m.a+n.d*m.c, n.c*m.b+n.d*m.d);<br />}<br />Matrix fastExponentiation(Matrix m, int k) {<br /> assert k > 0;<br /> Matrix base = new Matrix(m);<br /> Matrix result = new Matrix(1, 0, 0, 1); // identity matrix<br /> int b = k;<br /> while (b != 0) {<br /> if ((b & 1) == 1) result = multiply(result, base);<br /> base = multiply(base, base);<br /> b >>= 1;<br /> }<br /> return result;<br />}<br />long fibonacci(int n) {<br /> assert n >= 0;<br /> if (n == 0 || n == 1) return 1;<br /> Matrix m = new Matrix(1, 1, 1, 0);<br /> Matrix power = fastExponentiation(m, n);<br /> return power.c + power.d; // f(n) = c*f(1) + d*f(0), with f(1) = f(0) = 1<br />}<br /></pre><br />Xianghttp://www.blogger.com/profile/13023740612959081805noreply@blogger.com0tag:blogger.com,1999:blog-5252664766554532044.post-9184565872629586772012-11-23T17:04:00.002+01:002012-11-27T23:20:26.216+01:00Triangle in Undirected Graph<h3>Problem</h3>Given an undirected graph, design an O(V*E) algorithm to detect whether there is a triangle in the graph or not.<br /><br /><h3>Algorithm </h3>The idea of the algorithm comes from Manber's book. 
The central observation is that in order to form a triangle (i, j, k), the rows of the first two vertices i and j must both have a 1 in the k-th column. We can simply do a linear intersection to check this. In addition, we have to check that vertex i appears in the j-th row (and vice versa). Using an adjacency matrix, the complexity is then O(V^3), with V being the number of vertices. For sparse graphs, an O(V*E) algorithm is more efficient. Instead of an adjacency matrix, we now use adjacency lists. We can do the intersection using a procedure similar to the merge step of merge-sort. More specifically, we perform the following: <ol><li>sort all the adjacency lists</li><li>for each pair of vertices</li><ol><li>intersect the two adjacency lists to find common end points</li><li>meanwhile check whether the pair itself is an edge</li></ol></ol>To see that such an algorithm is O(V*E), notice that each adjacency list is intersected with all other V-1 adjacency lists. If we denote by \(L_i\) the length of the adjacency list incident on node i (so that \(\sum_i L_i = 2E\) in an undirected graph), then the total intersection time is: $$\sum_i (V-1)L_i = (V-1)\sum_i L_i = 2(V-1)E$$ We also notice that the cost of sorting the adjacency lists is: $$\sum_i L_i\log(L_i) \leq \sum_i L_i\log(E) = 2E\log E$$ To sum up, the complexity is O(ElogE + V*E) = O(V*E). 
<h3>Code</h3><br /><pre class = "prettyprint lang-java"><br />class Graph {<br /> List<Integer>[] adjLists; // adjacency lists<br /> int n; // size of graph<br />}<br /><br />boolean containsTriangle(Graph g) {<br /> // sort all adjacency lists in arrays<br /> Integer[][] arrays = new Integer[g.n][];<br /> for (int i = 0; i < g.n; i++) {<br /> arrays[i] = g.adjLists[i].toArray(new Integer[0]);<br /> Arrays.sort(arrays[i]);<br /> }<br /> <br /> // iterate all pairs<br /> for (int i = 0; i < g.n; i++) {<br /> for (int j = i + 1; j < g.n; j++) {<br /> // test intersection and connectivity<br /> int pi = 0;<br /> int pj = 0;<br /> boolean intersected = false;<br /> while (pi < arrays[i].length && pj < arrays[j].length) {<br /> // use equals: == on Integer compares references<br /> if (arrays[i][pi].equals(arrays[j][pj])) {<br /> intersected = true;<br /> break;<br /> }<br /> else if (arrays[i][pi] < arrays[j][pj]) {<br /> pi++;<br /> }<br /> else { <br /> pj++;<br /> }<br /> }<br /><br /> // check connectivity by checking whether i is in the list of j<br /> boolean connected = false;<br /> for (Integer k : arrays[j]) {<br /> if (k == i) { <br /> connected = true;<br /> break;<br /> }<br /> else if (k > i) { // lists are sorted<br /> break;<br /> }<br /> }<br /><br /> if (intersected && connected) return true; // early return<br /> }<br /> }<br /> return false;<br />}<br /><br /></pre>Xianghttp://www.blogger.com/profile/13023740612959081805noreply@blogger.com0tag:blogger.com,1999:blog-5252664766554532044.post-41570060830268783962012-11-23T14:54:00.001+01:002012-11-24T00:33:15.019+01:00Minimal Truncation I saw an interview question on CareerCup. <br /><h3>Problem Description </h3> Given an array A of positive integers, convert it to a sorted array with minimum cost. The only valid operations are:<br />1) Decrement with cost = 1<br />2) Delete an element completely from the array with cost = value of the element<br /><br /><h3>Algorithm Development </h3>An intuitive idea is to use dynamic programming. 
The analysis is as follows:<br /><ol><li> find the min and max of the array int[] a in O(n) time, with n being the size of the array</li><li>maintain an n*(max-min+1) table memorizing the minimal cost (MC): MC(i, k) is the minimal cost of transforming the subarray a[i:] into a sorted array whose elements all respect the lower bound k</li><li> set up the recursion</li><ol><li> base case: MC(n-1, k) = 0 if k <= a[n-1], or a[n-1] if k > a[n-1]</li><li> given the minimal costs of the subarray a[i+1:], estimate MC for a[i:]</li><ol><li> if k > a[i], a[i] has to be deleted, otherwise the lower bound is violated. This leads to MC(i, k) = a[i] + MC(i+1, k) (denoted by r1): the minimal cost of a[i:] with lower bound k is the cost of deleting a[i] plus the minimal cost of the subarray a[i+1:] with the same lower bound.</li><li> if k <= a[i], it is safe to always keep a[i], because any truncated subsequence from a[i+1:] can be improved by prepending a possibly truncated a[i]. However, it is unknown how much a[i] should be truncated to achieve optimality. What we know is that the optimal sorted array for a[i:] is a truncated a[i] prepended to a subsequence from a[i+1:]. The key is to enumerate all the possible lower bounds for the subarray a[i+1:], which form the range [k, a[i]]. The lower end of the range comes from the lower bound of the current sequence, while the upper end is the largest value that may make a difference for the cost: lower bounds above a[i] need not be enumerated, since a[i] cannot be truncated to a value larger than itself. Moreover, MC(i, K1) <= MC(i, K2) if K1 < K2. 
We hence have: <br />r2) MC(i, k) = min_{k<=j<=a[i]} {MC(i+1, j) + (a[i] - j)}<br />The minimal cost is taken over all candidate new lower bounds j for the subarray a[i+1:].<br />The cost for each candidate is the truncation cost (a[i]-j) plus the cost from the subarray a[i+1:].<br />To make the recursion simpler, we can rewrite r2) as:<br />r3) MC(i, a[i]) = MC(i+1, a[i])<br />r4) MC(i, j) = min{MC(i, j+1), a[i]-j + MC(i+1, j)} for j from a[i]-1 down to min<br /> </li></ol></ol><li>Finally, the minimal entry in the first column, MC(0, min), is the minimal cost. If we need to find the truncation, just search the columns of MC bottom up for the first switch of the costs: if MC(0, k) -> MC(0, k+1) is the first switch, then k is the maximal lower bound in a[0:]. Truncate a[0] to k; if a[0] is smaller than k, remove a[0]. Then continue rightward in MC, and similarly for the remaining columns. </li></ol><br /><h3>Code </h3><pre class="prettyprint lang-java">// time O(n*max); space O(n*max), reducible to O(n) if only the minimal cost is needed<br />int findMinCost(int[] array){<br /> int min, max, n;<br /> min = findMin(array); // feasible in O(n)<br /> max = findMax(array); // feasible in O(n)<br /> n = array.length;<br /> <br /> // memory table for estimates<br /> int[][] mc = new int[n][max+1]; // wastes a little memory; we only use mc[i][min:max]<br /> <br /> // initialize last column<br /> for(int k = min; k <= array[n-1]; k++){<br /> mc[n-1][k] = 0; <br /> }<br /> for(int k = array[n-1] + 1; k <= max; k++){<br /> mc[n-1][k] = array[n-1];<br /> }<br /><br /> // fill up the table column by column backwards<br /> for(int i = n-2; i >= 0; i--){<br /> for(int k = max; k > array[i]; k--){ // r1<br /> mc[i][k] = array[i] + mc[i+1][k];<br /> }<br /> <br /> mc[i][array[i]] = mc[i+1][array[i]]; // r3<br /> <br /> for(int j = array[i]-1; j >= min; j--){ // r4: mc[i][j+1] is already computed<br /> mc[i][j] = Math.min(mc[i][j+1], array[i]-j+mc[i+1][j]);<br /> }<br /> }<br /> <br /> return mc[0][min];<br />}<br /><br />/**<br />* Time: O(n+max)<br />*/<br />Integer[] 
extractTruncatedArray(int[] array, int[][] mc, int min, int max){<br /> ArrayList<Integer> list = new ArrayList<Integer>();<br /> int lowerBound = min; // initial lower bound<br /> for(int i = 0; i < array.length; i++) {<br /> // extract the truncated element at i<br /> // find the first switch<br /> while(lowerBound < max && mc[i][lowerBound] == mc[i][lowerBound+1]){<br /> lowerBound++;<br /> }<br /> if(array[i] >= lowerBound){ // truncate array[i]<br /> list.add(lowerBound);<br /> } // otherwise array[i] is deleted<br /> }<br /> return list.toArray(new Integer[0]);<br />}<br /></pre>Xianghttp://www.blogger.com/profile/13023740612959081805noreply@blogger.com0tag:blogger.com,1999:blog-5252664766554532044.post-25220379733120353812012-11-20T15:32:00.000+01:002014-05-05T21:39:25.828+02:00Range Minimum QueryRange minimum query is the problem of algorithmically answering the minimum of an arbitrary range of the form [lo, hi] over a large integer array a[0:n). Without pre-processing, each query is O(n), which is unacceptable if the queries are frequent. Another straightforward way is to memorize the answers to all ranges and perform a table lookup for each answer. Though this leads to an O(1) query answering operation, the pre-processing takes \(O(n^2)\) time. More problematically, the space complexity is also \(O(n^2)\): for an array on the order of \(10^6\) elements, the space consumption already exceeds the memory size of today's commodity machines. Our goal is to derive an algorithm that has a fast enough query answering time and meanwhile is not too expensive regarding pre-processing.<br /><br />We first identify the API of the problem:<br /><pre class="prettyprint lang-java">void preprocess(int[] a);<br /><br />int min(int lo, int hi);<br /></pre><br />We denote the complexity explicitly in two parts: pre-processing and query answering. 
For instance, the direct approach without pre-processing is of complexity \(O(1)-O(n)\), whereas the memorize-all approach is of complexity \(O(n^2)-O(1)\). <br /><br /><h3>1. Disjoint Segment Algorithm \(O(n)-O(\sqrt{n})\)</h3>The first algorithm comes from an interview. The idea is to divide the whole array into segments and maintain a summary (in our case, the minimum) of each segment. When answering a query, we gather all the segments overlapping with the query range and combine them into an answer. Because the fully covered segments are already aggregated during pre-processing, computation is saved. One question comes to mind: how do we determine the length of a segment? We notice that a range may cover several segments completely, and additionally overlap with at most two segments partially. Therefore, we can optimize the estimated cost of a query, depending on whether we opt for the worst case or the average case. For the average case, we can simply choose each segment to be of length \(\sqrt{n}\).<br /><br /><pre class="prettyprint lang-java">int[] a; // the input array<br />int[] segmentMin; <br />int segmentLen;<br /><br />void preprocess(int[] a) { <br /> assert a != null && a.length != 0; <br /> this.a = a;<br /> int len = a.length;<br /> segmentLen = Math.max(1, (int) Math.sqrt(len));<br /><br /> segmentMin = new int[(len + segmentLen - 1) / segmentLen]; // ceil(len/segmentLen)<br /><br /> for (int i = 0; i*segmentLen < len; i++) {<br /> // invariant: already processed i segments<br /> int min = a[i*segmentLen];<br /> int upper = Math.min((i+1)*segmentLen, a.length);<br /> for (int j = i*segmentLen + 1; j < upper; j++) {<br /> min = Math.min(min, a[j]);<br /> }<br /> // assert min == range_min[i*segmentLen, (i+1)*segmentLen)<br /> segmentMin[i] = min;<br /> }<br />} <br /><br />int min(int lo, int hi) {<br /> assert hi >= lo;<br /> int rangeMin = a[lo];<br /><br /> // assert segmentLen != 0;<br /> int startSegment = (lo + segmentLen - 1) / segmentLen; // first fully covered segment<br /> int endSegment = (hi + 1) / segmentLen; // one past the last fully covered segment<br /><br /> // collect summaries from fully covered segments<br /> for (int i = startSegment; i < endSegment; i++) {<br /> rangeMin = Math.min(rangeMin, segmentMin[i]);<br /> }<br /> <br /> // partial overlap on the left<br /> for (int i = lo; i < Math.min(startSegment*segmentLen, hi + 1); i++) {<br /> rangeMin = Math.min(rangeMin, a[i]);<br /> }<br /> <br /> // partial overlap on the right<br /> for (int i = Math.max(endSegment*segmentLen, lo); i <= hi; i++) {<br /> rangeMin = Math.min(rangeMin, a[i]);<br /> }<br /><br /> return rangeMin;<br />} <br /></pre><br />In the preprocessing phase, we scan the whole array; the complexity is therefore O(n). Each query collects at most \(O(\sqrt{n})\) summaries and scans at most two partially overlapping segments, each of length \(\sqrt{n}\). Therefore, a query is answered in \(O(\sqrt{n})\) time.<br /><h3>2. Variable Length Summary Algorithm \(O(nlog(n))-O(1)\)</h3>I saw this algorithm in Rujia Liu's book on OI. Compared to the disjoint segment algorithm, this algorithm stores more summaries, on overlapping segments. The rise in the number of summaries gives better query answering performance. In one sentence, we store at each index i the range minimum of the segment of length \(2^j\) starting at i. As there are n positions in the array and log(n) different length options, we can store all the summaries in a table of size nlog(n). To process a query [lo, hi], we first determine the length of the range, say L. Let k be \(\lfloor \log_2(L) \rfloor\); the range minimum is then the minimum of the two summaries at positions (lo, k) and (hi + 1 - 2^k, k), whose ranges together cover [lo, hi].<br />We now give the complete code in Java. 
<br /><pre class="prettyprint lang-java"><br />int[][] summary; // table of the form summary[beginIndex][lengthIndex]<br />int length; // array length<br />int level; // maximal exponent of a range length<br /><br />void preprocess(int[] a) {<br /> assert a != null && a.length > 0;<br /> length = a.length;<br /> level = (int) Math.ceil(Math.log(length)/Math.log(2)); // assign the field, do not shadow it<br /> summary = new int[length][level+1];<br /> <br /> for (int i = 0; i < length; i++) {<br /> summary[i][0] = a[i];<br /> } <br /><br /> // build summary bottom-up<br /> for (int l = 1; l < level + 1; l++) {<br /> // invariant: already built l levels<br /> for (int i = 0; i < length; i++) {<br /> summary[i][l] = Math.min(getSummary(i, l-1), getSummary(i + (1 << (l-1)), l-1));<br /> } <br /> }<br />}<br />// handle overflow index<br />int getSummary(int pos, int level) {<br /> if (pos >= length) {<br /> pos = length - 1;<br /> } <br /> return summary[pos][level];<br />}<br /><br />int min(int lo, int hi) {<br /> assert hi >= lo;<br /> if (hi == lo) return summary[lo][0]; // a single element<br /> int rangeLen = hi+1-lo;<br /> assert rangeLen > 1;<br /> int halfRangeLevel = (int) Math.ceil(Math.log(rangeLen)/Math.log(2)) - 1;<br /> assert halfRangeLevel >= 0;<br /> return Math.min(getSummary(lo, halfRangeLevel),<br /> getSummary(hi + 1 - (1 << halfRangeLevel), halfRangeLevel)<br /> );<br />}<br /></pre> Since each range query only incurs two table lookups, it is O(1). The crux of the O(nlog(n)) preprocessing time lies in the fact that we employ a bottom-up dynamic programming computation for the summaries. <h3>3. Hierarchical Summary Algorithm \(O(n)-O(log(n))\) </h3>This algorithm is inspired by an exercise in Udi Manber's book on algorithms. The basic idea is to organize hierarchical summaries in a tree, with the array elements as the leaves. Building the tree bottom-up requires O(n) time and space, while a query requires O(log(n)) time to navigate the tree. 
Different from the disjoint segments, the summaries are overlapping. Furthermore, different from the variable length summary, we now build the summaries in a hierarchy in which a parent summary covers the ranges of its two children. The summaries thus form a tree. To make the bottom-up construction convenient, we use linked lists to memorize the nodes on each level. <pre class="prettyprint lang-java"><br />class Node {<br /> int min; // range min<br /> int lo, hi; // remembering range [lo, hi)<br /> Node left, right;<br /> Node(int l, int h, int m, Node left, Node right) {<br /> lo = l; hi = h; min = m; this.left = left; this.right = right;<br /> }<br /> Node(int l, int h, int m) {<br /> lo = l; hi = h; min = m; <br /> }<br />}<br /><br />Node root;<br /><br />void preprocess(int[] a) {<br /> assert a != null && a.length > 0;<br /> int level = (int) Math.ceil(Math.log(a.length)/Math.log(2));<br /> LinkedList<Node>[] nodesByLevel = new LinkedList[level + 1]; // generic array creation is illegal; use the raw type<br /> nodesByLevel[0] = new LinkedList<Node>();<br /> for (int i = 0; i < a.length; i++) {<br /> nodesByLevel[0].add(new Node(i, i+1, a[i]));<br /> }<br /><br /> for (int l = 1; l < level + 1; l++) {<br /> // invariant: already processed l levels<br /> // invariant: level l summarizes ranges of length 2^l (except for boundaries)<br /> LinkedList<Node> pre = nodesByLevel[l-1];<br /> nodesByLevel[l] = new LinkedList<Node>(); // current level<br /> Iterator<Node> it = pre.iterator();<br /> while(it.hasNext()) {<br /> Node first = it.next();<br /> Node second = null; // reset for every pair<br /> if (it.hasNext()) {<br /> second = it.next();<br /> }<br /> int lo = first.lo;<br /> int hi = second == null ? first.hi : second.hi;<br /> int min = second == null ? 
first.min : Math.min(first.min, second.min);<br /> nodesByLevel[l].add(new Node(lo, hi, min, first, second));<br /> }<br /> }<br /> root = nodesByLevel[level].getFirst();<br />}<br /><br />int min(int lo, int hi) {<br /> return min(root, lo, hi+1);<br />}<br /><br />// internally we use [lo, hi)<br />int min(Node node, int lo, int hi) {<br /> if (node == null) return Integer.MAX_VALUE;<br /> if (!isOverlapping(node, lo, hi)) {<br /> return Integer.MAX_VALUE;<br /> }<br /> else if (isCovered(node, lo, hi)) {<br /> return node.min;<br /> }<br /><br /> // assert the node range partially overlaps with the query range,<br /> // or the query range is strictly inside the node range<br /> int leftMin = min(node.left, lo, hi);<br /> int rightMin = min(node.right, lo, hi);<br /><br /> return Math.min(leftMin, rightMin);<br />}<br /><br />// is the node range overlapping with [lo, hi)<br />boolean isOverlapping(Node node, int lo, int hi) {<br /> assert node != null;<br /> return node.hi > lo && node.lo < hi;<br />}<br />// is the range of the node covered by [lo, hi)<br />boolean isCovered(Node node, int lo, int hi) {<br /> assert node != null;<br /> return lo <= node.lo && hi >= node.hi; <br />}<br /></pre> First, the preprocessing is in O(n), because the tree has O(n) nodes, and creating each node only takes constant time. Seeing that the query is in O(log(n)) is more involved. If a node range is fully covered by the query or disjoint from it, the call returns directly and incurs no additional cost. Cost arises when the query partially overlaps with both children of a node, which triggers two recursions. The main observation is that the query can partially overlap with both children of a node only once; below that node, at every step at most one child is partially overlapped (the other is disjoint or fully covered), so only one branch is explored further on each side. More intuitively, the query processing navigates the summary tree to find the two ends of the range without exploring the internal nodes in between. As the tree is of height O(log(n)), the query is also O(log(n)). 
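To tie the three variants together, here is a compact, self-contained version of the disjoint segment (\(\sqrt{n}\)) variant that cross-checks every possible range against a brute-force scan. It is a sketch in my own words: the class names SqrtRmq and RmqDemo are mine, and it assumes inclusive query ranges [lo, hi] as in the post.

```java
import java.util.Random;

// Sqrt-decomposition RMQ (algorithm 1 above), condensed into one class.
class SqrtRmq {
    private final int[] a;
    private final int[] segmentMin;
    private final int segmentLen;

    SqrtRmq(int[] a) {
        this.a = a;
        this.segmentLen = Math.max(1, (int) Math.sqrt(a.length));
        this.segmentMin = new int[(a.length + segmentLen - 1) / segmentLen];
        for (int s = 0; s < segmentMin.length; s++) {
            int m = a[s * segmentLen];
            int upper = Math.min((s + 1) * segmentLen, a.length);
            for (int j = s * segmentLen; j < upper; j++) {
                m = Math.min(m, a[j]);
            }
            segmentMin[s] = m;
        }
    }

    // minimum of the inclusive range [lo, hi]
    int min(int lo, int hi) {
        int m = a[lo];
        int start = (lo + segmentLen - 1) / segmentLen; // first fully covered segment
        int end = (hi + 1) / segmentLen;                // one past the last fully covered one
        for (int s = start; s < end; s++) m = Math.min(m, segmentMin[s]);
        for (int i = lo; i < Math.min(start * segmentLen, hi + 1); i++) m = Math.min(m, a[i]);
        for (int i = Math.max(end * segmentLen, lo); i <= hi; i++) m = Math.min(m, a[i]);
        return m;
    }
}

public class RmqDemo {
    public static void main(String[] args) {
        Random rnd = new Random(42);
        int[] a = new int[100];
        for (int i = 0; i < a.length; i++) a[i] = rnd.nextInt(1000);
        SqrtRmq rmq = new SqrtRmq(a);
        // cross-check every range against a brute-force scan
        for (int lo = 0; lo < a.length; lo++) {
            int m = Integer.MAX_VALUE;
            for (int hi = lo; hi < a.length; hi++) {
                m = Math.min(m, a[hi]);
                if (rmq.min(lo, hi) != m) throw new AssertionError(lo + ".." + hi);
            }
        }
        System.out.println("all range queries agree");
    }
}
```

The same brute-force cross-check harness works unchanged for the other two variants, since all three expose the same min(lo, hi) query.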
Xianghttp://www.blogger.com/profile/13023740612959081805noreply@blogger.com0tag:blogger.com,1999:blog-5252664766554532044.post-90775065126025069262009-03-17T19:31:00.003+01:002009-03-17T19:43:18.865+01:00LOB in Hibernate over DB2 9.5Using BLOB/CLOB in Hibernate may cause exceptions complaining "LOB closed" when running on top of DB2 (see <a href="http://www.hibernate.org/120.935.html">Hibernate Users FAQ</a>). The reason can be the <a href="http://publib.boulder.ibm.com/infocenter/db2luw/v9/index.jsp?topic=/com.ibm.db2.udb.apdv.java.doc/doc/cjvjdlob.htm">progressive streaming feature of DB2</a>.<br /><br />There are two solutions:<br /><ol><br /> <li>Use iterate() of ResultSet and load the LOB content during each iteration.<br /> <li>Disable progressive streaming by specifying options in the JDBC URL:<br /> jdbc:db2://localhost:50000/DatabaseName:fullyMaterializeLobData=false;progressiveStreaming=2;progresssiveLocators=2;<br /> <span style="font-style:italic;">Please note: fullyMaterializeLobData is set to false, which enables lazy loading.</span><br /></ol>Xianghttp://www.blogger.com/profile/13023740612959081805noreply@blogger.com0tag:blogger.com,1999:blog-5252664766554532044.post-82985089231221118782008-11-13T11:07:00.004+01:002008-11-13T12:31:26.716+01:00Two ways to make bad documentationSince our group is going to switch to a modern Content Management System (CMS), I am investigating candidate products. Two promising choices are <a href="http://lenya.apache.org/">Apache Lenya</a> and <a href="http://plone.org/">Plone</a>. I have to say that going deep into either of the two is not pleasing.<br /><br />Lenya suffers from a lack of documentation. Till now, not a single book has been published about it. Furthermore, it is migrating to version 2.0, which renders most of the existing documentation useless. At the time of writing, there is no complete manual on the Lenya site. However, because of its conceptual simplicity, Lenya is easier to understand in my opinion. 
The use of Apache Cocoon also adds to the value of Lenya.<br /><br />It is just the other way around for Plone. We are flooded with documentation, which is full of specific terminology known only inside the community. Again, almost all the existing documentation is out of date for Plone 3. Furthermore, I have the feeling that the document authors are not really serious programmers: they tend to use quite informal and inaccurate descriptions. Maybe that is the way the supporting staff make money. Plone gives me a feeling of SAP, which always requires secondary development that is more interesting to service staff than to users.<br /><br />Lessons learned: <br />- Documentation should be up to date and understandable:<br /> There should be at least one entry-level sample that can be run. If some special terms really have to be used, define them first. Finally, combining documentation for both front-end users and developers would just be confusing.<br />- Simplicity is much better than complexity:<br /> There should be some overview of the architecture or the main mechanism. Although a framework can be rather complicated, there should be some central ideas that are simple enough for humans to bear in mind.Xianghttp://www.blogger.com/profile/13023740612959081805noreply@blogger.com1tag:blogger.com,1999:blog-5252664766554532044.post-31442847088145953092008-02-12T12:43:00.001+01:002008-02-12T12:55:06.173+01:00My Visit to Philips ResearchMy most valuable experience from the visit is not about the technical problems being solved. In fact, two of the three labs I visited were dealing with medical image processing, operated by physical scientists. Although radioactivity could still be understood, a pair of photons was too distant from me. The remaining project was aiming at stroke rehabilitation using sensors and computers. During the lunch served around 1pm, we got the opportunity to talk with those senior researchers with a background in physics. It was fun talking with them. 
A person who has received enough scientific training behaves somewhat differently when considering problems. It does not depend on any particular knowledge, but shows in the insight into solutions, the quick understanding of questions, and a pragmatic way of problem solving.Xianghttp://www.blogger.com/profile/13023740612959081805noreply@blogger.com0tag:blogger.com,1999:blog-5252664766554532044.post-74601042025761479432007-07-02T14:44:00.002+02:002007-07-03T18:02:17.002+02:00Hooray For Mylyn : Task-focused Development in Eclipse 3.3 (Europa)<a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="http://www.eclipse.org/mylyn/images/tasklist-splash.jpg"><img style="width: 172px; height: 420px;float:left; margin:0 10px 10px 0;cursor:pointer; cursor:hand;" src="http://www.eclipse.org/mylyn/images/tasklist-splash.jpg" border="0" alt="" /></a><br />Last Friday, Eclipse 3.3 (code name Europa) was released. As an enthusiast for the Eclipse framework, I cannot wait to take a first taste. From my point of view, the most attractive feature, besides numerous other improvements, is the integration of Mylyn, a task-focus tool. Some opinions are given in the rest of this review, which is NOT intended as a tutorial of the tool. Anyone searching for a tutorial can see [1, 2]. A few high-level characteristics of Mylyn are discussed below, together with some thoughts on why they are beneficial.<br /><br /><span style="font-size:130%;">Mylyn in 3 Aspects</span><br />Mylyn is currently described officially as a "task focused UI" [4]. However, it is much more than UI. The benefits provided by the tool can be categorized into three aspects:<br /><span style="font-weight: bold; font-style: italic;font-size:100%;" >Task-focus UI Overlay </span><br />A task is something like a bug or a feature requirement, which is usually the unit of work programmers face every day. 
The motivation of task focus is to ease programmers out of so-called information overload, which occurs when lots of time is spent solely on navigating and searching for relevant elements (e.g., packages, classes, aspects, or methods) due to numerous projects in the workspace or large enterprise projects. The idea of task focus, inspired by Aspect-Oriented Methodology [5], tries to separate out elements (the concerns of tasks), which are usually scattered throughout the project(s), into units of tasks. Furthermore, the seamless integration with the Eclipse GUI makes Mylyn an undoubted success. The use of Mylyn is pervasive in Europa. You can find it in, just to name a few, the package browser, the Java editor, the synchronizing view, and context menus. All the visual elements have a task-focus overlay, which mainly does filtering and folding, leaving only elements relevant to the currently activated task.<br /><span style="font-weight: bold; font-style: italic;">Adaptive</span><span style="font-weight: bold; font-style: italic;"> Relevance Discovery</span><br />The UI overlay works only in the presence of task-relevance data recording which elements are relevant to each task. Where does such information come from? Mylyn employs a quite smart approach: all user interactions are intercepted and recorded. Thus, when a task is activated, the knowledge of relevance can be learned gradually. Intuitively, relevance can be measured simply by counting references (selections and edits) while decaying old statistics over time. In fact, Mylyn adopted a tree-structured degree-of-interest model [5]. The knowledge of relevance, learned by recording programmer activity, is valuable as domain expertise. For example, if someone has to work with legacy issues originally handled by others,<br />it would be just too nice to have the relevance data available.
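The counting-and-decaying idea above can be illustrated with a minimal sketch. All names here (`InterestModel`, `recordInteraction`, the weights and thresholds) are hypothetical and chosen for illustration; this is not Mylyn's actual degree-of-interest API, which uses a richer tree-structured model [5].

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch: selections/edits raise an element's interest score;
// a periodic decay lets stale elements fade back out of the task context.
public class InterestModel {
    private final Map<String, Double> scores = new HashMap<>();

    // Record one interaction (e.g. 1.0 for an edit, 0.5 for a selection).
    public void recordInteraction(String element, double weight) {
        scores.merge(element, weight, Double::sum);
    }

    // Decay all scores, dropping elements that have become uninteresting.
    public void decay(double factor) {
        scores.replaceAll((k, v) -> v * factor);
        scores.values().removeIf(v -> v < 0.1);
    }

    // An element is shown in the task context once its score passes a threshold.
    public boolean isInteresting(String element) {
        return scores.getOrDefault(element, 0.0) >= 1.0;
    }

    public static void main(String[] args) {
        InterestModel model = new InterestModel();
        model.recordInteraction("Foo.bar()", 1.0); // edit
        model.recordInteraction("Foo.bar()", 0.5); // selection
        model.recordInteraction("Baz.qux()", 0.5); // selection only
        model.decay(0.8);
        System.out.println(model.isInteresting("Foo.bar()")); // 1.5 * 0.8 = 1.2 -> true
        System.out.println(model.isInteresting("Baz.qux()")); // 0.5 * 0.8 = 0.4 -> false
    }
}
```

The point of the decay step is that frequently touched elements stay visible while elements from old tasks fade away, which is exactly the filtering behavior the UI overlay relies on.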
That is what Mylyn does: relevance data (called the <span style="font-style: italic;">context </span>of a task) can be stored and shared via issue tracking systems (see the next section). We call this type of explicit management of domain expertise <span style="font-style: italic;">expertise as asset</span>. It is especially valuable for teamwork (e.g., community-driven projects), in which knowledge transfer is expensive.<br /><span style="font-weight: bold; font-style: italic;">Tracking Repository Integration</span><br />It is always good to stand on the shoulders of giants. Bug/issue tracking systems are necessary components of most open-source projects today. Mylyn is finely integrated with systems such as Bugzilla, Trac, and JIRA. It provides nice GUI-based access to ticket tracking repositories from within the IDE. Operations such as queries, task submission, automatic notification, and offline editing are supported. In our experience, integration with the IDE not only speeds up work, but also encourages the use of tracking systems in internal projects (with no requirement for public ticket submission). As stated earlier, Mylyn stores the context (relevance data) in Bugzilla and allows patches for contexts. Therefore, expertise is not only shared but also collaborated on.<br /><br /><span style="font-size:130%;"></span><br /><span style="font-size:130%;">Lessons Learned</span><br /><span style="font-weight: bold; font-style: italic;">Separate Concerns</span>: it is our old friend. From procedures to classes, from scattered crosscutting code to aspects, we employ divide-and-conquer again and again. Although algorithmically divide-and-conquer does not always lead to a better solution, in software engineering it is almost always good to decompose problems into reasonable sub-tasks.<br /><span style="font-weight: bold; font-style: italic;">Expertise as Asset</span>: reuse is really a main issue in improving the productivity and reproducibility of a business.
For code reuse we have components/libraries; for design reuse we have Model Driven Architecture (model as asset); now we have expertise reuse, not for automation but for knowledge transfer and documentation. Code communicates, but an easier form of communication is even better.<br /><span style="font-weight: bold;">Integration Is Good</span>: interoperability between software systems or data repositories has long been asking for a solution. Successful interoperability usually turns out to mean integration. Data Warehousing, Enterprise Application Integration, and Service-Oriented Architecture can all be seen as answers to such requirements.<br /><br /><br /><span style="font-size:130%;">Conclusion</span><br />I am excited about the appearance of the task-focused UI and believe such functionality will eventually become standard for IDEs.<br /><br /><br /><br /><span style="font-size:130%;">References</span><br /><span style="font-size:100%;"><br />[1] <a href="http://www-128.ibm.com/developerworks/java/library/j-mylar1/#author">Mik Kersten</a>. </span><span style="font-size:100%;"><span style="font-weight: bold;">Task-focused programming with Mylar</span>. Available at <a href="http://www-128.ibm.com/developerworks/java/library/j-mylar1/">http://www-128.ibm.com/developerworks/java/library/j-mylar1/</a><br /><br />[2] <span style="font-weight: bold;">Mylyn User Guide</span>. Available at <a href="http://wiki.eclipse.org/index.php/Mylyn_User_Guide">http://wiki.eclipse.org/index.php/Mylyn_User_Guide</a><br /><br />[3] Mik Kersten, Rob Elves, and Gail Murphy. <span style="font-weight: bold;">WYSIWYN: Using Task Focus to Ease Collaboration</span>. </span><span style="font-size:100%;">CSCW Workshop,</span><span style="font-size:100%;"> 2006.<br /><br />[4] Mylyn Project. <a href="http://www.eclipse.org/mylyn/index.php">http://www.eclipse.org/mylyn/index.php</a><br /><br />[5] Mik Kersten and Gail Murphy.
</span><span class="paperTitle"><span style="font-size:100%;"><span style="font-weight: bold;">Mylar: a degree-of-interest model for IDEs</span>. Aspect Oriented Software Development, 2005.<br /><br /></span></span><span style="font-size:100%;"><br /></span>Xianghttp://www.blogger.com/profile/13023740612959081805noreply@blogger.com0tag:blogger.com,1999:blog-5252664766554532044.post-30781442179439561012007-01-22T14:00:00.000+01:002007-01-22T14:43:59.684+01:00A Typical Anti-pattern of using SAXRecently I encountered a small problem parsing XML documents with SAX. As is well known, the handler interface in SAX has three major callbacks for elements, namely <span style="font-style: italic;">startElement</span>, <span style="font-style: italic;">endElement</span>, and <span style="font-style: italic;">characters</span>.<span> The first two methods are called at the start/end of a tag, while the third handles the character data between tags. The problem lies in the contract of the method <span style="font-style: italic;">characters(char[] ch, int start, int length)</span>. The SAX 2.0 specification states that parser implementations are <span style="font-weight: bold;">not </span>required to deliver all the character data between two corresponding tags in a single call of this method; it may arrive split across several calls.<br /><br />Thus, in order to get the characters between tags, be careful to buffer all the chunks across the possibly multiple calls of <span style="font-style: italic;">characters()</span>. What's more, do use guard boolean variables so that only the characters belonging to the element of interest are collected. Otherwise the "\n" between tags might also be collected.<br /></span><span style="font-style: italic;"> </span>Xianghttp://www.blogger.com/profile/13023740612959081805noreply@blogger.com0
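The buffering-plus-guard pattern described above can be sketched as follows. The element name <span style="font-style: italic;">title</span> and the sample document are made up for illustration; the handler itself uses only the standard JAXP/SAX API.

```java
import java.io.ByteArrayInputStream;
import javax.xml.parsers.SAXParserFactory;
import org.xml.sax.Attributes;
import org.xml.sax.helpers.DefaultHandler;

public class TitleHandler extends DefaultHandler {
    private final StringBuilder buffer = new StringBuilder();
    private boolean inTitle = false; // guard: collect only inside <title>
    private String title;

    @Override
    public void startElement(String uri, String localName, String qName, Attributes attrs) {
        if ("title".equals(qName)) {
            inTitle = true;
            buffer.setLength(0); // reset the buffer for a fresh element
        }
    }

    @Override
    public void characters(char[] ch, int start, int length) {
        // May be invoked several times between one startElement/endElement
        // pair, so append instead of assuming a single complete string.
        if (inTitle) {
            buffer.append(ch, start, length);
        }
    }

    @Override
    public void endElement(String uri, String localName, String qName) {
        if ("title".equals(qName)) {
            inTitle = false;
            title = buffer.toString(); // the complete, re-assembled text
        }
    }

    public String getTitle() { return title; }

    public static void main(String[] args) throws Exception {
        // Whitespace between tags never reaches the buffer: inTitle is false there.
        String xml = "<book>\n  <title>SAX in Practice</title>\n</book>";
        TitleHandler handler = new TitleHandler();
        SAXParserFactory.newInstance().newSAXParser()
            .parse(new ByteArrayInputStream(xml.getBytes("UTF-8")), handler);
        System.out.println(handler.getTitle()); // prints "SAX in Practice"
    }
}
```

Without the <span style="font-style: italic;">inTitle</span> guard, the newline and indentation between <span style="font-style: italic;">book</span> and <span style="font-style: italic;">title</span> would end up in the buffer as well, which is exactly the anti-pattern described above.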