How do I get the matching values in Pig without using a UDF?

Consider them as my input files,

 Input 1: (File 1)
 12,23,14,15,9
 1,2,3,4,5
 34,17,8
 .
 .

 Input 2: (File 2)
 12 Twelve
 23 TwentyThree
 34 ThirtyFour
 .
 .

My Pig script reads each line of the "Input 1" file, and I would like to produce the results shown below, using the "Input 2" file as a lookup table.

 Output:
 Twelve,TwentyThree,Fourteen,Fifteen,Nine
 One,Two,Three,Four,Five
 .
 .

Can this be achieved without a UDF? Please let me know your suggestions.

Thanks in advance!



2 answers


This technically violates your "no UDF" criterion, but BagToTuple is a built-in UDF, so I suspect it will be good enough.

Query:

data1 = LOAD 'file1' AS (val:chararray);
data2 = LOAD 'file2' AS (num:chararray, desc:chararray);

A = RANK data1;  /* adds a row number, so tokens can be regrouped by line later */
B = FOREACH A GENERATE rank_data1, FLATTEN(TOKENIZE(val, ',')) AS num;
C = RANK B;  /* second rank preserves the original token order within each line */
D = JOIN C BY num, data2 BY num;
E = FOREACH D GENERATE C::rank_data1 AS rank_1:long
                     , C::rank_B AS rank_2:long
                     , data2::desc AS description;

grpd = GROUP E BY rank_1;  /* one group per original input line */
F = FOREACH grpd {
      sorted = ORDER E BY rank_2;  /* restore token order, lost by the join */
      GENERATE sorted;
    };

X = FOREACH F GENERATE FLATTEN(BagToTuple(sorted.description));
DUMP X;

Output:

(Twelve,TwentyThree,Fourteen,Fifteen,Nine)
(One,Two,Three,Four,Five)
(ThirtyFour,Seventeen,Eight)
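Outside Pig, the transformation is just a per-token dictionary lookup. Here is a plain-Python sketch of the same idea, for illustration only; the mappings beyond 12/23/34 are not shown in File 2 and are assumed from the expected output:

```python
# Build a lookup table from the file2 rows, then translate each
# comma-separated line of file1 token by token.
# (Entries beyond 12/23/34 are assumed from the expected output.)
file2 = ["12 Twelve", "23 TwentyThree", "34 ThirtyFour",
         "14 Fourteen", "15 Fifteen", "9 Nine",
         "1 One", "2 Two", "3 Three", "4 Four", "5 Five",
         "17 Seventeen", "8 Eight"]
lookup = dict(line.split(" ") for line in file2)  # num -> description

file1 = ["12,23,14,15,9", "1,2,3,4,5", "34,17,8"]
for line in file1:
    print(",".join(lookup[num] for num in line.split(",")))
```

The Pig script's RANK/GROUP/ORDER gymnastics exist only to recover this per-line ordering, which a sequential program gets for free.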



Here is a Hive solution:



--Load the data into Hive
CREATE TABLE file1 (
  line array<string>
)
ROW FORMAT DELIMITED 
COLLECTION ITEMS TERMINATED BY ',';
LOAD DATA INPATH '/tmp/test2/file1' OVERWRITE INTO TABLE  file1;

CREATE TABLE file2 (
  name string,
  value string
)
ROW FORMAT DELIMITED FIELDS TERMINATED BY ' ';
LOAD DATA INPATH '/tmp/test2/file2' OVERWRITE INTO TABLE  file2;

--explode the rows from the first table and create a newid to use for correlation
CREATE TABLE file1_exploded 
AS
WITH tmp 
AS
(SELECT RAND() newid, line FROM file1)
SELECT newid, item FROM tmp 
LATERAL VIEW EXPLODE (line) a AS item;

--apply substitutions using the second table, then join the lines back together
SELECT CONCAT_WS(',', COLLECT_LIST(value))
FROM 
file1_exploded
JOIN file2 ON item = name
GROUP BY newid;







