Wednesday, August 15, 2018

[ Article Collection ] Making Apache Spark Easier to Use in Java with Java 8

Source From Here 
Preface 
One of Apache Spark’s main goals is to make big data applications easier to write. Spark has always had concise APIs in Scala and Python, but its Java API was verbose due to the lack of function expressions. With the addition of lambda expressions in Java 8, we’ve updated Spark’s API to transparently support these expressions, while staying compatible with old versions of Java. This new support will be available in Apache Spark 1.0. 

A Few Examples 
The following examples show how Java 8 makes code more concise. In our first example, we search a log file for lines that contain “error”, using Spark’s filter and count operations. The code is simple to write, but passing a Function object to filter is clunky: 
Java 7 search example: 
JavaRDD<String> lines = sc.textFile("hdfs://log.txt").filter(
  new Function<String, Boolean>() {
    public Boolean call(String s) {
      return s.contains("error");
    }
});
long numErrors = lines.count();
(If you’re new to Spark, JavaRDD is a distributed collection of objects, in this case lines of text in a file. We can apply operations to these objects that will automatically be parallelized across a cluster.)
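The snippets above assume an already-created JavaSparkContext named sc. A minimal driver setup might look like the following sketch; the application name "LogSearch" is just illustrative:

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaSparkContext;

// Create the driver-side Spark context; the app name is arbitrary.
SparkConf conf = new SparkConf().setAppName("LogSearch");
JavaSparkContext sc = new JavaSparkContext(conf);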

With Java 8, we can replace the Function object with an inline function expression, making the code a lot cleaner: 

Java 8 search example: 
JavaRDD<String> lines = sc.textFile("hdfs://log.txt")
                          .filter(s -> s.contains("error"));
long numErrors = lines.count();
The gains become even bigger for longer programs. For instance, the program below implements Word Count by taking a file (read as a collection of lines), splitting each line into multiple words, then counting the words with a reduce function. 

Java 7 word count: 
JavaRDD<String> lines = sc.textFile("hdfs://log.txt");

// Map each line to multiple words
JavaRDD<String> words = lines.flatMap(
  new FlatMapFunction<String, String>() {
    public Iterable<String> call(String line) {
      return Arrays.asList(line.split(" "));
    }
});

// Turn the words into (word, 1) pairs
JavaPairRDD<String, Integer> ones = words.mapToPair(
  new PairFunction<String, String, Integer>() {
    public Tuple2<String, Integer> call(String w) {
      return new Tuple2<String, Integer>(w, 1);
    }
});

// Group up and add the pairs by key to produce counts
JavaPairRDD<String, Integer> counts = ones.reduceByKey(
  new Function2<Integer, Integer, Integer>() {
    public Integer call(Integer i1, Integer i2) {
      return i1 + i2;
    }
});

counts.saveAsTextFile("hdfs://counts.txt");
With Java 8, we can write this program in just a few lines: 

Java 8 word count: 
JavaRDD<String> lines = sc.textFile("hdfs://log.txt");
JavaRDD<String> words =
    lines.flatMap(line -> Arrays.asList(line.split(" ")));
JavaPairRDD<String, Integer> counts =
    words.mapToPair(w -> new Tuple2<String, Integer>(w, 1))
         .reduceByKey((x, y) -> x + y);
counts.saveAsTextFile("hdfs://counts.txt");
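If you just want to eyeball the result instead of writing it back to HDFS, one option is to collect it on the driver. A minimal sketch; note that collect() ships the whole result to the driver, so it only suits small outputs:

// Pull the (word, count) pairs back to the driver and print them.
// Only safe when the result fits in driver memory.
for (Tuple2<String, Integer> pair : counts.collect()) {
  System.out.println(pair._1() + ": " + pair._2());
}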

We are very excited to offer this functionality, as it opens up the simple, concise programming style that Scala and Python Spark users are familiar with to a much broader set of developers. 

Availability 
Java 8 lambda support will be available in Apache Spark 1.0, which will be released in early May. Although using this syntax requires Java 8, Apache Spark 1.0 will still support older versions of Java through the old form of the API. Lambda expressions are simply a shorthand for anonymous inner classes, so the same API can be used in any Java version.
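Because Spark's function interfaces each declare a single abstract method (call), a lambda and the anonymous-inner-class form are interchangeable against the same API. A small sketch of that equivalence, reusing the lines RDD from the search example:

// Both produce the same filtered RDD through the same filter(...) API.
Function<String, Boolean> isError = s -> s.contains("error");  // Java 8 shorthand
JavaRDD<String> viaLambda = lines.filter(isError);

JavaRDD<String> viaInnerClass = lines.filter(
  new Function<String, Boolean>() {  // works on older Java versions too
    public Boolean call(String s) {
      return s.contains("error");
    }
});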
