java.lang.Object
org.apache.hadoop.mapreduce.InputFormat<K,V>
org.apache.hadoop.mapreduce.lib.input.FileInputFormat<K,V>
org.apache.hadoop.mapreduce.lib.input.CombineFileInputFormat<Long,String>
org.apache.sedona.core.formatMapper.shapefileParser.fieldname.FieldnameInputFormat

public class FieldnameInputFormat extends org.apache.hadoop.mapreduce.lib.input.CombineFileInputFormat<Long,String>
  • Nested Class Summary

    Nested classes/interfaces inherited from class org.apache.hadoop.mapreduce.lib.input.FileInputFormat

    org.apache.hadoop.mapreduce.lib.input.FileInputFormat.Counter
  • Field Summary

    Fields inherited from class org.apache.hadoop.mapreduce.lib.input.CombineFileInputFormat

    SPLIT_MINSIZE_PERNODE, SPLIT_MINSIZE_PERRACK

    Fields inherited from class org.apache.hadoop.mapreduce.lib.input.FileInputFormat

    DEFAULT_LIST_STATUS_NUM_THREADS, INPUT_DIR, INPUT_DIR_NONRECURSIVE_IGNORE_SUBDIRS, INPUT_DIR_RECURSIVE, LIST_STATUS_NUM_THREADS, NUM_INPUT_FILES, PATHFILTER_CLASS, SPLIT_MAXSIZE, SPLIT_MINSIZE
  • Constructor Summary

    Constructors
    Constructor
    Description
     
  • Method Summary

    Modifier and Type
    Method
    Description
    org.apache.hadoop.mapreduce.RecordReader<Long,String>
    createRecordReader(org.apache.hadoop.mapreduce.InputSplit inputSplit, org.apache.hadoop.mapreduce.TaskAttemptContext taskAttemptContext)
     
    List<org.apache.hadoop.mapreduce.InputSplit>
    getSplits(org.apache.hadoop.mapreduce.JobContext job)
Gets all splits of the input .shp files and combines them into a single split.
    protected boolean
    isSplitable(org.apache.hadoop.mapreduce.JobContext context, org.apache.hadoop.fs.Path file)
Forces isSplitable to return false so that super.getSplits() combines all files into one split.

    Methods inherited from class org.apache.hadoop.mapreduce.lib.input.CombineFileInputFormat

    createPool, createPool, getFileBlockLocations, setMaxSplitSize, setMinSplitSizeNode, setMinSplitSizeRack

    Methods inherited from class org.apache.hadoop.mapreduce.lib.input.FileInputFormat

    addInputPath, addInputPathRecursively, addInputPaths, computeSplitSize, getBlockIndex, getFormatMinSplitSize, getInputDirRecursive, getInputPathFilter, getInputPaths, getMaxSplitSize, getMinSplitSize, listStatus, makeSplit, makeSplit, setInputDirRecursive, setInputPathFilter, setInputPaths, setInputPaths, setMaxInputSplitSize, setMinInputSplitSize, shrinkStatus

    Methods inherited from class java.lang.Object

    clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait
  • Constructor Details

    • FieldnameInputFormat

      public FieldnameInputFormat()
  • Method Details

    • createRecordReader

      public org.apache.hadoop.mapreduce.RecordReader<Long,String> createRecordReader(org.apache.hadoop.mapreduce.InputSplit inputSplit, org.apache.hadoop.mapreduce.TaskAttemptContext taskAttemptContext) throws IOException
      Specified by:
      createRecordReader in class org.apache.hadoop.mapreduce.lib.input.CombineFileInputFormat<Long,String>
      Throws:
      IOException
    • isSplitable

      protected boolean isSplitable(org.apache.hadoop.mapreduce.JobContext context, org.apache.hadoop.fs.Path file)
      Forces isSplitable to return false so that super.getSplits() combines all files into one split.
      Overrides:
      isSplitable in class org.apache.hadoop.mapreduce.lib.input.CombineFileInputFormat<Long,String>
      Parameters:
      context - the job context
      file - the path of the file to test
      Returns:
      always false, so that input files are never split
    • getSplits

      public List<org.apache.hadoop.mapreduce.InputSplit> getSplits(org.apache.hadoop.mapreduce.JobContext job) throws IOException
      Gets all splits of the input .shp files and combines them into a single split.
      Overrides:
      getSplits in class org.apache.hadoop.mapreduce.lib.input.CombineFileInputFormat<Long,String>
      Parameters:
      job - the job context
      Returns:
      a list containing the single combined split covering all input .shp files
      Throws:
      IOException
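  • Usage Example

    A minimal sketch of how this input format might be driven from Spark, assuming Sedona and Spark are on the classpath. The input directory, application name, and local master are placeholders, not part of this API; the call goes through the standard JavaSparkContext.newAPIHadoopFile entry point, which accepts any org.apache.hadoop.mapreduce.InputFormat subclass such as FieldnameInputFormat.

    ```java
    import org.apache.hadoop.conf.Configuration;
    import org.apache.sedona.core.formatMapper.shapefileParser.fieldname.FieldnameInputFormat;
    import org.apache.spark.api.java.JavaPairRDD;
    import org.apache.spark.api.java.JavaSparkContext;

    public class FieldnameExample {
        public static void main(String[] args) {
            // Assumed: a local Spark context; replace with your own setup.
            JavaSparkContext sc = new JavaSparkContext("local[*]", "FieldnameExample");

            // Hypothetical directory containing the .shp files to read.
            String shapefileDir = "/path/to/shapefiles";

            Configuration conf = sc.hadoopConfiguration();

            // Because isSplitable always returns false, getSplits() combines
            // every .shp file under the directory into one split, so a single
            // record reader processes all of them together.
            JavaPairRDD<Long, String> fieldnames = sc.newAPIHadoopFile(
                    shapefileDir,
                    FieldnameInputFormat.class,
                    Long.class,
                    String.class,
                    conf);

            fieldnames.values().collect().forEach(System.out::println);
            sc.stop();
        }
    }
    ```

    Because all files land in one split, this format is suited to small metadata reads (field names) rather than bulk geometry parsing, where per-file parallelism would be lost.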