Stop Reduce function in Hadoop on condition
I have a reduce function where I want to stop reducing after processing some 'N' keys. I have set up a counter that is incremented for every key, and the condition is checked inside the reduce function.
Here is the code
import java.io.IOException;
import java.util.StringTokenizer;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;

public class WordCount {

    public static class Map extends Mapper<LongWritable, Text, IntWritable, IntWritable> {
        private final static IntWritable one = new IntWritable(1);
        private Text word = new Text();
        private IntWritable leng = new IntWritable();

        public void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            String line = value.toString();
            StringTokenizer tokenizer = new StringTokenizer(line);
            while (tokenizer.hasMoreTokens()) {
                String lword = tokenizer.nextToken();
                leng.set(lword.length());
                context.write(leng, one);   // emit (word length, 1)
            }
        }
    }

    public static class Reduce extends Reducer<IntWritable, IntWritable, IntWritable, IntWritable> {
        int count = 0;

        public void reduce(IntWritable key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable val : values) {
                sum += val.get();
                count++;
            }
            context.write(key, new IntWritable(sum));
            if (count > 19) return;   // stop once the counter crosses the limit
        }
    }
}

Is there any other way that I can achieve this?
By overriding the run() method of the Reducer class (new API) this can be achieved:

public static class Reduce extends Reducer<IntWritable, IntWritable, IntWritable, IntWritable> {

    // reduce() method here

    // Override run()
    @Override
    public void run(Context context) throws IOException, InterruptedException {
        setup(context);
        int count = 0;
        while (context.nextKey()) {
            if (count++ < n) {   // n = number of keys you want to process
                reduce(context.getCurrentKey(), context.getValues(), context);
            } else {
                // exit, or whatever you want
            }
        }
        cleanup(context);
    }
}
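For completeness, a minimal self-contained sketch of that idea is shown below. The class name LengthCountReducer and the limit N = 20 are placeholders chosen for illustration, not names from the original post; the structure follows the standard org.apache.hadoop.mapreduce.Reducer API.

import java.io.IOException;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.mapreduce.Reducer;

public class LengthCountReducer
        extends Reducer<IntWritable, IntWritable, IntWritable, IntWritable> {

    // Placeholder limit: process only the first N keys (assumed value).
    private static final int N = 20;

    @Override
    protected void reduce(IntWritable key, Iterable<IntWritable> values, Context context)
            throws IOException, InterruptedException {
        int sum = 0;
        for (IntWritable val : values) {
            sum += val.get();
        }
        context.write(key, new IntWritable(sum));
    }

    // run() drives the whole reduce phase; overriding it lets us decide
    // per key whether reduce() gets called at all.
    @Override
    public void run(Context context) throws IOException, InterruptedException {
        setup(context);
        int count = 0;
        while (context.nextKey()) {
            if (count++ < N) {
                reduce(context.getCurrentKey(), context.getValues(), context);
            } else {
                break;   // no more keys needed; stop iterating entirely
            }
        }
        cleanup(context);
    }
}

Breaking out of the loop skips the remaining keys entirely while still letting cleanup(context) run. Note that when the job uses more than one reducer, each reducer instance keeps its own count, so the limit applies per reducer rather than globally.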