How to load unique records in one file and duplicate records in second file without losing records?
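
The mapping discussed in the video routes every source row to either a "unique" or a "duplicate" target without dropping any copies. As a minimal Python sketch of that logic (the sample key values here are hypothetical, not from the video): count occurrences per key first, then route each original row by its key's count, so all copies of a duplicate key land in the duplicates file.

```python
from collections import Counter

# Hypothetical sample data; in the video this comes from a flat-file source.
rows = ["A", "B", "C", "C", "C"]

# First pass: count occurrences of each key (the Aggregator step).
counts = Counter(rows)

# Second pass: route every ORIGINAL row, so no duplicate copy is lost.
# This mirrors joining the counts back to the detail rows before the Router.
unique_file = [r for r in rows if counts[r] == 1]
duplicate_file = [r for r in rows if counts[r] > 1]

print(unique_file)     # ['A', 'B']
print(duplicate_file)  # ['C', 'C', 'C']
```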

  • Published: 24 Oct 2024

Comments • 9

  • @kuttybagavathi6467
    @kuttybagavathi6467 2 years ago

    I don't know why people haven't subscribed to your channel... your content is really great, bro 👍👍

  • @niharikajain3059
    @niharikajain3059 1 year ago +2

    I think we don't need a joiner in this scenario; we can just pass the count from the aggregator to the router.

    • @GeetLifeThoughts
      @GeetLifeThoughts 11 months ago

      I think that way we would lose records in the duplicates file; it would output each duplicate key only once.
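
The reply above is the key point: an aggregator collapses each group to a single output row, so routing the aggregator's output directly keeps only one copy per duplicate key. A small sketch (hypothetical sample keys) contrasting the two approaches:

```python
from collections import Counter

rows = ["A", "C", "C", "C"]
counts = Counter(rows)

# Routing the aggregator's own output: one collapsed row per key survives,
# so the duplicates file would get 'C' once instead of three times.
dupes_from_aggregator = [k for k, n in counts.items() if n > 1]

# Joining the counts back to the detail rows keeps every copy.
dupes_joined_back = [r for r in rows if counts[r] > 1]

print(dupes_from_aggregator)  # ['C']
print(dupes_joined_back)      # ['C', 'C', 'C']
```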

  • @akhileshsoni1171
    @akhileshsoni1171 2 years ago +2

    Sir, if we take only an expression, how can we generate it? Please suggest. Second thing: considering this mapping alone, I have a doubt about why we have used a joiner here.

  • @GeetLifeThoughts
    @GeetLifeThoughts 11 months ago

    My doubt is: I can get the same result if I put count=1 in the router; then we don't need the expression transformation and we can join the sorter and the aggregator. I am getting the same result. Can you please help me understand whether there is any problem if we remove the expression?

  • @Nirmalkumar-zr6qe
    @Nirmalkumar-zr6qe 1 year ago

    If a source has 100 rows and the last 3 rows need to be loaded into a separate target, how can that be done?

    • @sabarid5798
      @sabarid5798 1 year ago

      Use a Sequence Generator first and then a Rank (top 3).
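
The suggestion above (tag each row with a running sequence number, then rank on it and keep the top 3) can be sketched in Python; the `rowN` values are hypothetical stand-ins for the 100-row source:

```python
# Tag each row with a running sequence number (the Sequence Generator step),
# then keep the 3 highest sequence numbers (the Rank step, top 3).
rows = [f"row{i}" for i in range(1, 101)]  # hypothetical 100-row source

numbered = list(enumerate(rows, start=1))            # (seq, row) pairs
top3 = sorted(numbered, key=lambda p: p[0], reverse=True)[:3]
last_three = [row for _, row in sorted(top3)]        # restore source order

print(last_three)  # ['row98', 'row99', 'row100']
```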

  • @gaurishetye1822
    @gaurishetye1822 3 years ago

    If we use a group-by condition on the newvalue column, doesn't it create a single index for each newvalue? Then the aggregator output should be a 1, b 1, c 3 ???? and not a1, b1, c1, c2, c3
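
The commenter's expectation matches what grouping by the value column alone produces: one output row per group with its count, not one row per input record. A minimal sketch (hypothetical sample values):

```python
from collections import Counter

# With group-by on the value column, the aggregation emits one row per group
# together with its count, not one row per input record.
rows = ["a", "b", "c", "c", "c"]
per_group = sorted(Counter(rows).items())

print(per_group)  # [('a', 1), ('b', 1), ('c', 3)]
```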