- Videos: 54
- Views: 211,688
Cloud Challengers
Joined 14 Mar 2024
We provide comprehensive resources tailored for interview preparation in key technology domains such as Azure, Databricks, Python, and SQL. Our focus is not just on imparting knowledge but also on nurturing confidence and practical expertise.
Tredence SQL Interview Question - Find events with 3 or more consecutive years for each pid
An SQL question recently asked in a Tredence interview: find events with 3 or more consecutive years for each pid.
To produce the expected output, I use the ROW_NUMBER() function.
-- Create table syntax
CREATE TABLE events (pid INT, year INT);
-- Insert data into the table
INSERT INTO events VALUES (1, 2019), (1, 2020), (1, 2021), (2, 2022), (2, 2021), (3, 2019), (3, 2021), (3, 2022);
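A minimal sketch of the ROW_NUMBER() approach (the classic gaps-and-islands trick; within each pid, consecutive years share the same year - ROW_NUMBER() value):

WITH grp AS (
    SELECT pid, year,
           year - ROW_NUMBER() OVER (PARTITION BY pid ORDER BY year) AS grp_key
    FROM events
)
SELECT DISTINCT pid
FROM grp
GROUP BY pid, grp_key
HAVING COUNT(*) >= 3;

On the sample data above this returns only pid 1 (2019-2021); pid 3 has no run longer than two years.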
For more SQL interview questions, check out our playlist:
ruclips.net/p/PLxHEfsUVhEwMqw-nDG2zd3mpXpvY1v9xX&si=wT1GK4FFWAsLRxpX
Contact us:
info@cloudchallengers.com
Follow us on
Instagram : cloudchallengers
Facebook : cloudchallengers
LinkedIn : linkedin.com/company/cloudchallengers
Views: 1,549
Videos
PwC SQL Interview Question - STRING_SPLIT(), CROSS APPLY & CROSS JOIN
Views: 2.3K · 2 months ago
SQL questions recently asked in a PwC interview. To produce the expected output, I use STRING_SPLIT(), CROSS APPLY & CROSS JOIN. Create table syntax: CREATE TABLE testtbl (cola VARCHAR(10)) Insert data into the table: INSERT INTO testtbl (cola) VALUES ('1,2'), ('3'), ('4') For more SQL interview questions, check out our playlist. ruclips.net/p/PLxHEfsUVhEwMqw-nDG2zd3mpXpvY1v9xX&si=wT1GK4FFWAsLRxpX ...
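A minimal sketch of the split step, assuming SQL Server 2016+ where STRING_SPLIT() and CROSS APPLY are available:

-- Each comma-separated value in cola becomes its own row
SELECT s.value
FROM testtbl t
CROSS APPLY STRING_SPLIT(t.cola, ',') AS s;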
DBS Bank SQL Interview Question - Find the missing weeks in a table
Views: 4.1K · 2 months ago
SQL questions recently asked in a DBS Bank interview: find the missing weeks in a table. To produce the expected output, I use a recursive CTE. Create table syntax: CREATE TABLE sls_tbl (pid INT, sls_dt DATE, sls_amt INT) Insert data into the table: INSERT INTO sls_tbl (pid, sls_dt, sls_amt) VALUES (201, '2024-07-11', 140), (201, '2024-07-18', 160), (201, '2024-07-25', 150), (201, '2024-08-01', 180)...
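A minimal sketch of the recursive-CTE idea, assuming SQL Server syntax and the weekly grain of the sample data:

WITH weeks AS (
    -- Anchor at the earliest week on file, then step one week at a time
    SELECT MIN(sls_dt) AS dt, MAX(sls_dt) AS end_dt FROM sls_tbl
    UNION ALL
    SELECT DATEADD(WEEK, 1, dt), end_dt FROM weeks WHERE dt < end_dt
)
SELECT w.dt AS missing_week
FROM weeks w
LEFT JOIN sls_tbl s ON s.sls_dt = w.dt
WHERE s.sls_dt IS NULL;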
Infosys SQL Interview Question
Views: 35K · 2 months ago
SQL questions recently asked in an Infosys interview. We need to find origin and final destination details. To produce the expected output, I use joins. Create table syntax: CREATE TABLE Flights (cust_id INT, flight_id VARCHAR(10), origin VARCHAR(50), destination VARCHAR(50)); Insert data into the table: INSERT INTO Flights (cust_id, flight_id, origin, destination) VALUES (1, 'SG1234', 'Delhi', 'Hy...
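A minimal sketch of the join approach, assuming two-leg itineraries where the first leg's destination is the second leg's origin:

-- Chain leg 1 into leg 2: the trip starts where f1 starts and ends where f2 ends
SELECT f1.cust_id, f1.origin, f2.destination AS final_destination
FROM Flights f1
JOIN Flights f2
  ON f1.cust_id = f2.cust_id
 AND f1.destination = f2.origin;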
Genpact Python Interview Questions
Views: 1.9K · 3 months ago
Python questions recently asked in a Genpact interview. In this video you will learn how to make use of a for loop. Contact us: info@cloudchallengers.com Follow us on Instagram : cloudchallengers Facebook : cloudchallengers LinkedIn : linkedin.com/company/cloudchallengers
KPMG SQL Interview Question - Using ROW_NUMBER( ) & CASE Statement
Views: 3.9K · 3 months ago
SQL questions recently asked in a KPMG interview. We need to find the running total on the transactions table. To find the running total, we use functions like ROW_NUMBER() and the CASE statement. Create table syntax: CREATE TABLE transactions_1308 (transaction_id BIGINT, type VARCHAR(50), amount INT, transaction_date DATE) Insert data into the table: INSERT INTO transactions_1308 VALUES (53151, 'deposit', 1...
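A minimal sketch of the running total; this variant folds the CASE expression directly into a windowed SUM (deposits add, every other type subtracts):

SELECT transaction_id, type, amount, transaction_date,
       SUM(CASE WHEN type = 'deposit' THEN amount ELSE -amount END)
           OVER (ORDER BY transaction_date ROWS UNBOUNDED PRECEDING) AS running_total
FROM transactions_1308;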
NTT Data Pyspark Interview Question
Views: 1K · 3 months ago
PySpark interview questions recently asked in an NTT Data interview. We need to add a prefix to all the columns in the given dataframe. Let's see how we can achieve this using PySpark. Mentioning the dataframe details here: data = [(101, 'IT', 1000), (102, 'HR', 900)] columns = ["empid", "dept", "salary"] df = spark.createDataFrame(data, columns) For more Azure Databricks interview questions. Chec...
Hero Motocorp SQL Interview Questions using REPLICATE(), RIGHT() and CONCAT() Functions
Views: 4.4K · 3 months ago
SQL questions recently asked in a Hero MotoCorp interview. 1. We need to mask the first 12 digits of the card number. 2. We need to select employee names with the same salary. Create table syntax: CREATE TABLE cards (card_number BIGINT) INSERT INTO cards VALUES (1234567812345678), (2345678923456789), (3456789034567890) CREATE TABLE Employee (employee_id INT, ename VARCHAR(50), salary INT) INSERT INTO Employee VALUES (3, ...
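A minimal sketch of the masking part with REPLICATE(), RIGHT() and CONCAT(), assuming SQL Server and 16-digit card numbers:

-- Replace the first 12 digits with '*' and keep the last 4
SELECT CONCAT(REPLICATE('*', 12),
              RIGHT(CAST(card_number AS VARCHAR(16)), 4)) AS masked_card
FROM cards;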
GrayMatter Pyspark Interview Question - Get null count of all columns
Views: 1.1K · 3 months ago
PySpark interview questions recently asked in a GrayMatter interview. We need to get the null count of all columns in a dataframe. Let's see how we can achieve this using PySpark. Mentioning the dataframe details here: data = [(1, None, 'ab'), (2, 10, None), (None, None, 'cd')] columns = ['col1', 'col2', 'col3'] df = spark.createDataFrame(data, columns) For more Azure Databricks interview questions. ...
WIPRO SQL Interview Question - FIRST_VALUE( ) Function
Views: 17K · 3 months ago
One of the SQL questions recently asked in a WIPRO interview. We need to find the department-wise minimum-salary and maximum-salary employee names. To solve this, we write one query using the FIRST_VALUE() window function and another using the ROW_NUMBER() function. Create table syntax: CREATE TABLE emps_tbl (emp_name VARCHAR(50), dept_id INT, salary INT); INSERT INTO emps_tbl...
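A minimal sketch of the FIRST_VALUE() approach; ordering the same window ascending and descending sidesteps the LAST_VALUE() framing pitfall:

WITH ranked AS (
    SELECT dept_id,
           FIRST_VALUE(emp_name) OVER (PARTITION BY dept_id ORDER BY salary ASC)  AS min_sal_empname,
           FIRST_VALUE(emp_name) OVER (PARTITION BY dept_id ORDER BY salary DESC) AS max_sal_empname
    FROM emps_tbl
)
SELECT DISTINCT dept_id, min_sal_empname, max_sal_empname
FROM ranked;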
Persistent Systems SQL Interview Question using LEAST(), GREATEST() and ROW_NUMBER() Functions
Views: 3.5K · 4 months ago
One of the SQL questions recently asked in a Persistent Systems interview. We need to find the unique combinations of records in the output. To solve this, we write a query using the LEAST(), GREATEST() and ROW_NUMBER() functions. Create table syntax: CREATE TABLE routes (Origin VARCHAR(50), Destination VARCHAR(50)); INSERT INTO routes VALUES ('Bangalore', 'Chennai'), ('Chennai', 'Bangalore'), ('Pune', 'Che...
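A minimal sketch of the deduplication, assuming an engine with LEAST()/GREATEST() (MySQL, PostgreSQL, or SQL Server 2022+):

-- Normalize each pair so ('Bangalore', 'Chennai') and ('Chennai', 'Bangalore') collapse to one row
SELECT DISTINCT
       LEAST(Origin, Destination)    AS Origin,
       GREATEST(Origin, Destination) AS Destination
FROM routes;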
GlobalLogic Pyspark Interview Questions
Views: 1K · 4 months ago
PySpark interview questions recently asked in a GlobalLogic interview. 1. Convert the first letter of every word in name to a capital letter. 2. Get rows from df1 that are not present in df2. Let's see how we can achieve this using PySpark. Mentioning the dataframe details here: # Sample data data = [("virat kohli",), ("p v sindhu",)] # Create DataFrame columns = ["name"] df = spark.createDa...
Happiest Minds SQL and Pyspark Interview Question
Views: 1.5K · 4 months ago
One of the interview questions recently asked in a Happiest Minds interview. We need to get only integer values in the output. Let's see how we can solve this using try_cast() in SQL and PySpark. Create table and insert data: CREATE TABLE emp_new (employee_id VARCHAR(50)) INSERT INTO emp_new (employee_id) VALUES ('72657'), ('1234'), ('Tom'), ('8792'), ('Sam'), ('19998'), ('Philip') For more Azure Data...
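A minimal sketch of the SQL side, assuming SQL Server's TRY_CAST(), which returns NULL instead of raising an error for non-numeric strings:

SELECT employee_id
FROM emp_new
WHERE TRY_CAST(employee_id AS INT) IS NOT NULL;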
Affine Technologies SQL Interview question - Find 2nd Wednesday of current month.
Views: 2.2K · 4 months ago
One of the SQL questions recently asked in an Affine Technologies interview. We need to find the 2nd Wednesday of the current month. To solve this, we write a query using a recursive CTE and the ROW_NUMBER() function. For more SQL interview questions, check out our playlist. ruclips.net/p/PLxHEfsUVhEwMqw-nDG2zd3mpXpvY1v9xX&si=wT1GK4FFWAsLRxpX Contact us: info@cloudchallengers.com Follow us on Instagram :...
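A minimal sketch of the recursive CTE plus ROW_NUMBER() idea, assuming SQL Server date functions; the first 14 days of any month contain exactly two Wednesdays:

WITH days AS (
    SELECT DATEFROMPARTS(YEAR(GETDATE()), MONTH(GETDATE()), 1) AS dt, 1 AS n
    UNION ALL
    SELECT DATEADD(DAY, 1, dt), n + 1 FROM days WHERE n < 14
)
SELECT dt AS second_wednesday
FROM (
    SELECT dt, ROW_NUMBER() OVER (ORDER BY dt) AS rn
    FROM days
    WHERE DATENAME(WEEKDAY, dt) = 'Wednesday'
) w
WHERE rn = 2;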
TCS SQL Interview Question - Find product wise total amount, including products with no sales
Views: 6K · 4 months ago
One of the SQL questions recently asked in a TCS interview. Given products and transactions tables, we need to find the product-wise total amount, including products with no sales. Let us create the tables and insert data: create table products (pid int, pname varchar(50), price int) insert into products values (1, 'A', 1000), (2, 'B', 400), (3, 'C', 500); create table transactions (pid int, sold_date DATE...
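A minimal sketch of the LEFT JOIN approach; the INSERT above is cut off, so the transactions amount column (amt below) is a hypothetical name:

SELECT p.pid, p.pname,
       COALESCE(SUM(t.amt), 0) AS total_amount  -- amt is a hypothetical column name
FROM products p
LEFT JOIN transactions t ON t.pid = p.pid
GROUP BY p.pid, p.pname;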
PwC SQL Interview Question - Find the percentage of Genders
Views: 4.5K · 5 months ago
Brillio Pyspark Interview Question - Find non repeated employee details
Views: 1K · 5 months ago
Bayer SQL Interview Question-Dept wise 2nd Highest Salary using DENSE_RANK() and Correlated Subquery
Views: 2.5K · 5 months ago
R Systems SQL Interview Question - Find the Currency Exchange rate at beginning and ending of month
Views: 2.3K · 5 months ago
Capgemini SQL Interview Question - Find aggregated cost of product
Views: 2.5K · 5 months ago
JLL Pyspark Interview Question - Get Top3 pickup locations
Views: 1K · 5 months ago
JLL SQL Interview Question - Get Employee Name whose salary is greater than Manager Salary
Views: 2.5K · 5 months ago
Ecolab SQL Interview Question - SUBSTRING() and CHARINDEX() Functions
Views: 3.7K · 5 months ago
Infosys/InxiteOut Data Bricks Interview Question - Pass parameters from ADF to Databricks Notebook.
Views: 1.1K · 5 months ago
Mphasis SQL Interview question - UPDATE scores column values with department wise maximum score
Views: 3.2K · 5 months ago
LTIMindtree ADF Interview Question - Run multiple ADF pipelines parallelly
Views: 891 · 6 months ago
IKRUX SQL Interview Question - SQL JOINS, GROUP BY, STRING_AGG() Function
Views: 2.4K · 6 months ago
Data Semantics SQL Interview Question - REPLACE and LEN Functions in SQL
Views: 2.3K · 6 months ago
Marlabs ADF Interview question - Send Email notifications using Alerts and Metrics in ADF
Views: 403 · 6 months ago
How's the work culture at IKRUX?
WITH CTE AS (
    SELECT *,
           CASE WHEN type = 'deposit' THEN amount ELSE -amount END AS net_amt
    FROM transactions_1308
)
SELECT transaction_id, type, amount, transaction_date,
       SUM(net_amt) OVER (ORDER BY transaction_date ROWS UNBOUNDED PRECEDING) AS running_total
FROM CTE;
ques1:
SELECT CONCAT(REGEXP_REPLACE(LEFT(card_number, 12), '[0-9]', '*'), '', RIGHT(card_number, 4))
FROM cards;
ques2:
SELECT ename
FROM Employee
WHERE salary IN (SELECT salary FROM Employee GROUP BY salary HAVING COUNT(salary) > 1);
WITH CTE AS (
    SELECT *,
           FIRST_VALUE(emp_name) OVER (PARTITION BY dept_id ORDER BY salary
               ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING) AS min_sal_empname,
           LAST_VALUE(emp_name) OVER (PARTITION BY dept_id ORDER BY salary
               ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING) AS max_sal_empname
    FROM emps_tbl
)
SELECT DISTINCT dept_id, min_sal_empname, max_sal_empname FROM CTE;
WITH RECURSIVE mydate(dt) AS (
    SELECT '2024-07-11'
    UNION ALL
    SELECT dt + INTERVAL 1 DAY FROM mydate WHERE dt < '2024-08-29'
)
SELECT m.dt
FROM mydate m
LEFT JOIN sls_tbl s ON m.dt = s.sls_dt
WHERE s.sls_amt IS NULL;
My approach:
WITH RECURSIVE cte AS (
    SELECT DATE(CONCAT(YEAR(CURRENT_DATE), '-', MONTH(CURRENT_DATE), '-', 01)) AS date
    UNION ALL
    SELECT date + INTERVAL 1 DAY FROM cte WHERE date < CURRENT_DATE()
)
SELECT date FROM cte WHERE WEEKDAY(date) = 2 ORDER BY date LIMIT 1;
with r_cte as (
    select min(sls_dt) as startdate, max(sls_dt) as enddate from sls_tbl
    union all
    select dateadd(week, 1, startdate) as weeks, enddate from r_cte where startdate < enddate
)
select r.startdate
from r_cte as r
left join sls_tbl as s on r.startdate = s.sls_dt
where s.sls_dt is null
select x.cust_id,
       max(case when source = 1 then origin end) as SRC,
       max(case when target = 1 then destination end) as TGT
from (
    select cust_id, origin,
           ROW_NUMBER() over (partition by cust_id order by origin) as source,
           destination,
           ROW_NUMBER() over (partition by cust_id order by destination desc) as target
    from Flights
) x
group by x.cust_id
with cte as (
    select dept_id, min(salary) as min, max(salary) as max
    from emps_tbl
    group by dept_id
)
select a.dept_id, max(emp_name) as min_name, min(emp_name) as max_name
from emps_tbl a
inner join cte b on a.dept_id = b.dept_id
group by a.dept_id
SELECT SUM(CASE WHEN gender = 'Male' THEN 1 ELSE 0 END) * 100 / (SELECT COUNT(eid) FROM employees_tbl) AS male_perc,
       SUM(CASE WHEN gender = 'Female' THEN 1 ELSE 0 END) * 100 / (SELECT COUNT(eid) FROM employees_tbl) AS fem_perc
FROM employees_tbl;
SELECT *, SUM(production_cost) OVER(PARTITION BY dt,brand) AS agg_cost FROM prd_tbl;
WITH CTE AS (
    SELECT *,
           (CASE WHEN TYPE = 'DEPOSIT' THEN 1 ELSE -1 END) * AMOUNT AS NEW_AMOUNT
    FROM TRANSACTIONS_1308
)
SELECT CTE.TRANSACTION_ID, CTE.TYPE, CTE.TRANSACTION_DATE, CTE.AMOUNT,
       SUM(NEW_AMOUNT) OVER (ORDER BY TRANSACTION_DATE ASC ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW) AS RUNNING_TOTAL
FROM CTE;
SELECT SUBSTRING_INDEX(email,'@',-1) AS domain FROM customer_tbl;
WITH CTE AS (
    SELECT PID, YEAR,
           YEAR - ROW_NUMBER() OVER (PARTITION BY PID ORDER BY YEAR) AS DIFF
    FROM EVENTSS
)
SELECT DISTINCT PID
FROM (
    SELECT PID, COUNT(*) OVER (PARTITION BY PID, DIFF) AS CNT
    FROM CTE
) CTE_1
WHERE CNT > 2;
SELECT deptname, GROUP_CONCAT(empname) AS empname
FROM (
    SELECT e.*, d.deptname,
           DENSE_RANK() OVER (PARTITION BY d.deptname ORDER BY e.salary DESC) AS rnk
    FROM emp e
    JOIN dept d ON d.deptid = e.deptid
) AS subquery
WHERE rnk = 1
GROUP BY deptname;
with cte as (
    select count(gender) as total_person from employees_tbl
), cte2 as (
    select count(*) as male_count from employees_tbl where gender = 'Male'
), cte3 as (
    select count(*) as female_count from employees_tbl where gender = 'Female'
)
select ((cte2.male_count / cte.total_person) * 100) as male_per,
       ((cte3.female_count / cte.total_person) * 100) as female_per
from cte, cte2, cte3;
SELECT *
FROM (
    SELECT dt, sales,
           COALESCE((sales - LAG(sales) OVER (ORDER BY dt)) * 100 / LAG(sales) OVER (ORDER BY dt), 0) AS varpercent
    FROM salesvar
) AS subquery
WHERE varpercent >= 0;
select * from tablea
union
select * from tableb where empid = 3
l1 = ['A3', 'B5', 'C10']
for i in l1:
    first = str(i[0])
    second = i[1:]
    print(first * int(second))
with cte as (
    SELECT pid, year,
           LAG(year, 1) OVER (partition by pid order by year) AS prev_,
           LEAD(year, 1) OVER (partition by pid order by year) AS next_
    FROM events
)
SELECT pid FROM cte WHERE next_ = year + 1 AND prev_ = year - 1
from pyspark.sql.functions import col
from pyspark.sql.types import IntegerType

e_df1 = emp_df.select(col("employee_id").cast(IntegerType())).filter(col("employee_id").isNotNull())
e_df1.show()
with cte as (
    select *, max(scores) over (partition by dept) as max_score
    from empdept_tbl
)
update empdept_tbl
set scores = cte.max_score
from cte
where empdept_tbl.eid = cte.eid;

select * from empdept_tbl
WITH weekdays AS (
    SELECT *, dayofweek(id) AS dow
    FROM emp_tbl
    WHERE dayofweek(id) IN (1, 7)
), minutes AS (
    SELECT empid, id,
           LEAD(id) OVER (PARTITION BY empid, day(id) ORDER BY id),
           timestampdiff(minute, id, LEAD(id) OVER (PARTITION BY empid, day(id) ORDER BY id)) AS minutes
    FROM weekdays
    ORDER BY empid, id
)
SELECT empid, ROUND(SUM(minutes) / 60, 2) AS total_weekend_working_hours
FROM minutes
GROUP BY empid
ORDER BY total_weekend_working_hours;
with cte_null AS (
    SELECT *, ROW_NUMBER() over (order by (select null)) AS rn
    FROM brands
)
SELECT first_value(category) OVER (PARTITION BY cn ORDER BY rn) AS category, brand_name
FROM (
    SELECT *, COUNT(category) OVER (ORDER BY rn) AS cn
    FROM cte_null
) category_cnt;
with cte as (
    select * from emp join dept on emp.deptid = dept.deptid
), cte2 as (
    select deptname, max(salary) as max_salary from cte group by deptname
)
select cte2.deptname, cte.empname
from cte2
join cte on cte2.max_salary = cte.salary
order by deptname
Thanks for these videos. Can you explain the use case for such a split with real-time scenarios?
select dt, sales, var
from (
    select *,
           lag(sales, 1, 0) over (order by dt) as v,
           case when (sales - lag(sales) over (order by dt)) > 0
                then round(((sales - lag(sales) over (order by dt)) * 100) / lag(sales) over (order by dt), 2)
           end as var
    from salesvar_tbl
) as key
where v < sales
I did it without a CTE:
select month, ytd_sales,
       ytd_sales - lag(ytd_sales, 1, 0) over () as rn
from sales
SELECT seat_id
FROM (
    SELECT seat_id, free,
           LAG(free) OVER (ORDER BY seat_id) AS previous_seat,
           LEAD(free) OVER (ORDER BY seat_id) AS next_seat
    FROM cinema_tbl
) AS seats
WHERE free = 1 AND (previous_seat = 1 OR next_seat = 1)
ORDER BY seat_id;
from pyspark.sql.functions import col

df1 = df.groupBy(col("empid")).count().filter(col("count") <= 1).select(col("empid"))
display(df1)
My approach:
with dept_sal as (
    SELECT *,
           max(salary) over (partition by dept_id) as max_dept_sal,
           min(salary) over (partition by dept_id) as min_dept_sal
    FROM salaries
)
select dept_id,
       max(case when salary = max_dept_sal then name end) as max_sal_empname,
       max(case when salary = min_dept_sal then name end) as min_sal_empname
from dept_sal
group by 1
Loved this problem
Nice question, sir. Please upload more questions of this type using DECLARE.
SELECT month, ytd_sales, COALESCE(ytd_sales - LAG(ytd_sales) OVER(ORDER BY monthnum),ytd_sales) AS periodic_sales FROM sales;
SELECT MAX(id) as max_id FROM employees GROUP BY id HAVING COUNT(id) = 1 ORDER BY id DESC LIMIT 1;
My solution:
with cte as (
    select *, coalesce(lag(year) over (partition by pid order by year), year - 1) as prev_row
    from events1
), cte2 as (
    select *, year - prev_row as diff
    from cte
    where (year - prev_row) = 1
)
select pid, count(diff)
from cte2
group by pid
having count(diff) >= 3
SELECT sname, SUM(marks) AS Totalmarks
FROM (
    SELECT *, DENSE_RANK() OVER (PARTITION BY sname ORDER BY marks DESC, sid) AS rnk
    FROM students
) AS t
WHERE rnk <= 2
GROUP BY sname;
select ROUND(SUM(case when gender = 'Male' then 1 else 0 end) * 100 / (select COUNT(gender) from employees_tbl), 0) as Male_Percent,
       ROUND(SUM(case when gender = 'Female' then 1 else 0 end) * 100 / (select COUNT(gender) from employees_tbl), 0) as Female_Percent
from employees_tbl
with low as (
    select dept_id, emp_name as min_sal_emp,
           row_number() over (partition by dept_id order by salary asc) as rk,
           row_number() over (partition by dept_id order by salary asc) as r
    from emps_tbl
), high as (
    select dept_id, emp_name as max_sal_emp,
           row_number() over (partition by dept_id order by salary desc) as dk,
           row_number() over (partition by dept_id order by salary desc) as rr
    from emps_tbl
)
select l.dept_id, l.min_sal_emp, h.max_sal_emp
from low l
join high h on l.dept_id = h.dept_id
where (rk = 1 or r = 1) and (dk = 1 or rr = 1)
SELECT * , SUM(production_cost) OVER(PARTITION BY dt , brand ) AS agg_cost FROM prd_tbl
Very easy to understand the concepts, thank you :)
Can you please help me with the Infosys SQL recruitment process? My resume never gets selected.
Hello sir, is the approach below correct?
with cte as (
    select *, dense_rank() over (partition by deptid order by salary) as rn
    from ep
)
select string_agg(cte.empname, ',') as emp_name, dp.deptname
from cte
join dept dp on cte.deptid = dp.deptid
where rn > 1
group by dp.deptname
My approach:
with cte as (
    select *, (nxtyr - yearr) as yrdiff
    from (
        select *, lead(yearr, 1, yearr + 1) over (partition by pid order by yearr) as nxtyr
        from eventss
    ) as a
)
select pid
from cte
group by pid
having count(yrdiff) >= 3 and yrdiff = 1;
Bro, please put up a video for Mphasis today.
My SQL solution:
select *, substring_index(email, '@', -1) from customer_tbl;
;with cte as (
    SELECT *, year - row_number() over (partition by pid order by (select 1)) as Grp
    FROM #events
)
select pid --, count(1) as count
from cte
group by pid, Grp
having count(1) > 2
SELECT * FROM #sls_tbl;
with cte as (
    select min(sls_dt) as StartDate, max(sls_dt) as EndDate from #sls_tbl
), cte2 as (
    select *, convert(datetime, StartDate) + value as Weekd
    from cte
    cross apply generate_series(0, datediff(d, StartDate, EndDate), 7)
)
select a.Weekd as sls_dt
from cte2 a
left join #sls_tbl b on a.Weekd = b.sls_dt
where b.sls_dt is null
SELECT * FROM #Employee WHERE salary IN ( SELECT salary FROM #Employee GROUP BY salary HAVING COUNT(1) > 1)
select distinct
       case when Origin < Destination then Origin else Destination end as Origin,
       case when Origin > Destination then Origin else Destination end as Destination
from #routes