- Videos: 54
- Views: 263,815
Cloud Challengers
Joined 14 Mar 2024
We provide comprehensive resources tailored for interview preparation in key technology domains such as Azure, Databricks, Python, and SQL. Our focus is not just on imparting knowledge but also on nurturing confidence and practical expertise.
Tredence SQL Interview Question - Find events with 3 or more consecutive years for each pid
An SQL question recently asked in a Tredence interview: find events with 3 or more consecutive years for each pid.
To produce the expected output, I use the ROW_NUMBER() function.
--Create table syntax
CREATE TABLE events ( pid INT, year INT )
-- Insert data into the table
INSERT INTO events VALUES (1, 2019), (1, 2020), (1, 2021), (2, 2022), (2, 2021),(3, 2019), (3, 2021), (3, 2022)
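A minimal solution sketch for the stated approach (gaps-and-islands with ROW_NUMBER(); assumes distinct (pid, year) rows and standard SQL):

WITH numbered AS (
    -- Consecutive years minus a per-pid row number share the same group key
    SELECT pid,
           year,
           year - ROW_NUMBER() OVER (PARTITION BY pid ORDER BY year) AS grp
    FROM events
)
SELECT pid, MIN(year) AS start_year, MAX(year) AS end_year
FROM numbered
GROUP BY pid, grp
HAVING COUNT(*) >= 3;

With the sample data above, only pid 1 (2019-2021) qualifies.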
For more SQL interview questions, check out our playlist.
ruclips.net/p/PLxHEfsUVhEwMqw-nDG2zd3mpXpvY1v9xX&si=wT1GK4FFWAsLRxpX
Contact us:
info@cloudchallengers.com
Follow us on
Instagram : cloudchallengers
Facebook : cloudchallengers
LinkedIn : linkedin.com/company/cloudchallengers
Views: 2,528
Videos
PwC SQL Interview Question - STRING_SPLIT(), CROSS APPLY & CROSS JOIN
3.2K views · 4 months ago
SQL question recently asked in a PwC interview. To find out the expected output, I use STRING_SPLIT(), CROSS APPLY & CROSS JOIN. Create table syntax CREATE TABLE testtbl (cola VARCHAR(10)) Insert data into the table INSERT INTO testtbl (cola) VALUES ('1,2'), ('3'), ('4') For more SQL interview questions, check out our playlist. ruclips.net/p/PLxHEfsUVhEwMqw-nDG2zd3mpXpvY1v9xX&si=wT1GK4FFWAsLRxpX ...
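As a rough sketch of the splitting step named in the title (T-SQL); the final CROSS JOIN step depends on the expected output, which is not reproduced here:

SELECT t.cola, s.value AS split_value
FROM testtbl t
CROSS APPLY STRING_SPLIT(t.cola, ',') AS s;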
DBS Bank SQL Interview Question - Find the missing weeks in a table
6K views · 4 months ago
SQL question recently asked in a DBS Bank interview: find the missing weeks in a table. To find out the expected output, I use a recursive CTE. Create table syntax CREATE TABLE sls_tbl (pid INT, sls_dt DATE, sls_amt INT ) Insert data into the table INSERT INTO sls_tbl (pid, sls_dt, sls_amt) VALUES (201, '2024-07-11', 140), (201, '2024-07-18', 160), (201, '2024-07-25', 150), (201, '2024-08-01', 180)...
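A hedged sketch of the recursive-CTE idea (T-SQL), assuming the data is strictly weekly as in the sample rows:

WITH weeks AS (
    -- Walk week by week from the earliest to the latest sales date
    SELECT MIN(sls_dt) AS wk, MAX(sls_dt) AS last_wk FROM sls_tbl
    UNION ALL
    SELECT DATEADD(WEEK, 1, wk), last_wk FROM weeks WHERE wk < last_wk
)
SELECT wk AS missing_week
FROM weeks
WHERE wk NOT IN (SELECT sls_dt FROM sls_tbl);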
Infosys SQL Interview Question
45K views · 4 months ago
SQL question recently asked in an Infosys interview. We need to find origin and final destination details. To find out the expected output, I use joins. Create table syntax CREATE TABLE Flights (cust_id INT, flight_id VARCHAR(10), origin VARCHAR(50), destination VARCHAR(50)); Insert data into the table INSERT INTO Flights (cust_id, flight_id, origin, destination) VALUES (1, 'SG1234', 'Delhi', 'Hy...
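One possible join-based sketch, assuming each cust_id has exactly two connecting legs where the first leg's destination equals the second leg's origin:

SELECT f1.cust_id,
       f1.origin,
       f2.destination AS final_destination
FROM Flights f1
JOIN Flights f2
  ON f1.cust_id = f2.cust_id
 AND f1.destination = f2.origin;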
Genpact Python Interview Questions
2K views · 5 months ago
Python questions recently asked in a Genpact interview. In this video you will learn how to make use of a for loop. Contact us: info@cloudchallengers.com Follow us on Instagram : cloudchallengers Facebook : cloudchallengers LinkedIn : linkedin.com/company/cloudchallengers
KPMG SQL Interview Question - Using ROW_NUMBER() & CASE Statement
4.9K views · 5 months ago
SQL question recently asked in a KPMG interview. We need to find the running total on the transactions table. To find the running total, we use functions like ROW_NUMBER() and a CASE statement. Create table syntax CREATE TABLE transactions_1308 (transaction_id BIGINT, type VARCHAR(50), amount INT, transaction_date DATE) Insert data into the table INSERT INTO transactions_1308 VALUES (53151, 'deposit', 1...
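A hedged sketch of one way to compute the running total (T-SQL; the video's exact ROW_NUMBER()/CASE formulation may differ), treating deposits as positive and everything else as negative:

SELECT transaction_id,
       type,
       amount,
       transaction_date,
       SUM(CASE WHEN type = 'deposit' THEN amount ELSE -amount END)
           OVER (ORDER BY transaction_date
                 ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW) AS running_total
FROM transactions_1308;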
NTT Data Pyspark Interview Question
1.5K views · 5 months ago
PySpark interview question recently asked in an NTT Data interview. We need to add a prefix to all the columns in the given dataframe. Let's see how we can achieve this using PySpark. Mentioning the dataframe details here: data = [(101, 'IT', 1000), (102, 'HR', 900)] columns = ["empid", "dept", "salary"] df = spark.createDataFrame(data, columns) For more Azure Databricks interview questions. Chec...
Hero Motocorp SQL Interview Questions using REPLICATE(), RIGHT() and CONCAT() Functions
5K views · 5 months ago
SQL questions recently asked in a Hero Motocorp interview. 1. We need to mask the first 12 digits of the card number. 2. We need to select employee names with the same salary. Create table syntax CREATE TABLE cards (card_number BIGINT) INSERT INTO cards VALUES (1234567812345678),(2345678923456789),(3456789034567890) CREATE TABLE Employee (employee_id INT,ename VARCHAR(50),salary INT) INSERT INTO Employee VALUES (3, ...
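A sketch of both parts using the functions named in the title (T-SQL; the card number is cast to text before masking):

-- 1. Mask the first 12 digits, keeping the last 4
SELECT card_number,
       CONCAT(REPLICATE('*', 12),
              RIGHT(CAST(card_number AS VARCHAR(20)), 4)) AS masked_number
FROM cards;

-- 2. Employees who share a salary with at least one other employee
SELECT ename
FROM Employee
WHERE salary IN (SELECT salary FROM Employee GROUP BY salary HAVING COUNT(*) > 1);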
GrayMatter Pyspark Interview Question - Get null count of all columns
1.6K views · 6 months ago
PySpark interview question recently asked in a GrayMatter interview. We need to get the null count of all columns in a dataframe. Let's see how we can achieve this using PySpark. Mentioning the dataframe details here: data = [(1, None, 'ab'), (2, 10, None), (None, None, 'cd')] columns = ['col1', 'col2', 'col3'] df = spark.createDataFrame(data, columns) For more Azure Databricks interview questions. ...
WIPRO SQL Interview Question - FIRST_VALUE( ) Function
19K views · 6 months ago
One of the SQL questions recently asked in a WIPRO interview. We need to find the department-wise minimum-salary and maximum-salary employee names. To solve this, we write a query using the FIRST_VALUE() window function for the first approach and the ROW_NUMBER() function for the second approach. Create table syntax CREATE TABLE emps_tbl (emp_name VARCHAR(50), dept_id INT, salary INT); INSERT INTO emps_tbl...
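A sketch of the FIRST_VALUE() approach described above (T-SQL; ties on salary are left to the engine):

SELECT DISTINCT
       dept_id,
       FIRST_VALUE(emp_name) OVER (PARTITION BY dept_id ORDER BY salary ASC)  AS min_salary_emp,
       FIRST_VALUE(emp_name) OVER (PARTITION BY dept_id ORDER BY salary DESC) AS max_salary_emp
FROM emps_tbl;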
Persistent Systems SQL Interview Question using LEAST(), GREATEST() and ROW_NUMBER() Functions
4.3K views · 6 months ago
One of the SQL questions recently asked in a Persistent Systems interview. We need to find the unique combination of records in the output. To solve this, we write a query using the LEAST(), GREATEST() and ROW_NUMBER() functions. Create table syntax CREATE TABLE routes (Origin VARCHAR(50), Destination VARCHAR(50)); INSERT INTO routes VALUES ('Bangalore', 'Chennai'), ('Chennai', 'Bangalore'), ('Pune', 'Che...
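A sketch of the LEAST()/GREATEST() plus ROW_NUMBER() idea, assuming a dialect that supports LEAST/GREATEST (MySQL, PostgreSQL, or SQL Server 2022+):

WITH normalised AS (
    -- Give both directions of a route the same partition key
    SELECT Origin,
           Destination,
           ROW_NUMBER() OVER (
               PARTITION BY LEAST(Origin, Destination), GREATEST(Origin, Destination)
               ORDER BY Origin) AS rn
    FROM routes
)
SELECT Origin, Destination
FROM normalised
WHERE rn = 1;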
GlobalLogic Pyspark Interview Questions
1.5K views · 6 months ago
PySpark interview questions recently asked in a GlobalLogic interview. 1. We need to convert the first letter of every word in the name to a capital letter. 2. Get rows from df1 that are not present in df2. Let's see how we can achieve this using PySpark. Mentioning the dataframe details here: # Sample data data = [("virat kohli",), ("p v sindhu",)] # Create DataFrame columns = ["name"] df = spark.createDa...
Happiest Minds SQL and Pyspark Interview Question
2.1K views · 6 months ago
One of the interview questions recently asked in a Happiest Minds interview. We need to get only integer values in the output. Let's see how we can solve this using try_cast() in SQL and PySpark. Create table and insert data CREATE TABLE emp_new (employee_id VARCHAR(50) ) INSERT INTO emp_new (employee_id) VALUES ('72657'),('1234'),('Tom'),('8792'),('Sam'),('19998'),('Philip') For more Azure Data...
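A minimal sketch of the SQL part using TRY_CAST() (T-SQL; Spark SQL's try_cast behaves the same way, returning NULL for non-numeric strings):

SELECT employee_id
FROM emp_new
WHERE TRY_CAST(employee_id AS INT) IS NOT NULL;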
Affine Technologies SQL Interview question - Find 2nd Wednesday of current month.
2.7K views · 6 months ago
One of the SQL questions recently asked in an Affine Technologies interview. We need to find the 2nd Wednesday of the current month. To solve this, we write a query using a recursive CTE and the ROW_NUMBER() function. For more SQL interview questions, check out our playlist. ruclips.net/p/PLxHEfsUVhEwMqw-nDG2zd3mpXpvY1v9xX&si=wT1GK4FFWAsLRxpX Contact us: info@cloudchallengers.com Follow us on Instagram :...
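A hedged sketch of the recursive-CTE plus ROW_NUMBER() idea (T-SQL; DATENAME assumes an English language setting):

WITH days AS (
    -- Walk through every day of the current month
    SELECT DATEFROMPARTS(YEAR(GETDATE()), MONTH(GETDATE()), 1) AS d
    UNION ALL
    SELECT DATEADD(DAY, 1, d) FROM days WHERE d < EOMONTH(GETDATE())
),
wednesdays AS (
    SELECT d, ROW_NUMBER() OVER (ORDER BY d) AS rn
    FROM days
    WHERE DATENAME(WEEKDAY, d) = 'Wednesday'
)
SELECT d AS second_wednesday
FROM wednesdays
WHERE rn = 2;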
TCS SQL Interview Question - Find product wise total amount, including products with no sales
7K views · 7 months ago
One of the SQL questions recently asked in a TCS interview. Given products and transactions tables, we need to find the product-wise total amount, including products with no sales. Let us create the tables and insert data create table products (pid int, pname varchar(50), price int) insert into products values (1, 'A', 1000),(2, 'B', 400),(3, 'C', 500); create table transcations (pid int, sold_date DATE...
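A sketch of the LEFT JOIN idea so that products with no sales still appear; since the transactions columns are truncated above, the total is assumed here to be price multiplied by the number of sales rows:

SELECT p.pid,
       p.pname,
       p.price * COUNT(t.pid) AS total_amount
FROM products p
LEFT JOIN transcations t ON p.pid = t.pid
GROUP BY p.pid, p.pname, p.price;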
PwC SQL Interview Question - Find the percentage of Genders
5K views · 7 months ago
Brillio Pyspark Interview Question - Find non repeated employee details
1.6K views · 7 months ago
Bayer SQL Interview Question-Dept wise 2nd Highest Salary using DENSE_RANK() and Correlated Subquery
2.9K views · 7 months ago
R Systems SQL Interview Question - Find the Currency Exchange rate at beginning and ending of month
2.8K views · 7 months ago
Capgemini SQL Interview Question - Find aggregated cost of product
3K views · 7 months ago
JLL Pyspark Interview Question - Get Top3 pickup locations
1.6K views · 7 months ago
JLL SQL Interview Question - Get Employee Name whose salary is greater than Manager Salary
2.9K views · 7 months ago
Ecolab SQL Interview Question - SUBSTRING() and CHARINDEX() Functions
4.3K views · 8 months ago
Infosys/InxiteOut Data Bricks Interview Question - Pass parameters from ADF to Databricks Notebook.
1.8K views · 8 months ago
Mphasis SQL Interview question - UPDATE scores column values with department wise maximum score
3.9K views · 8 months ago
LTIMindtree ADF Interview Question - Run multiple ADF pipelines parallelly
1.3K views · 8 months ago
IKRUX SQL Interview Question - SQL JOINS, GROUP BY, STRING_AGG() Function
2.9K views · 8 months ago
Data Semantics SQL Interview Question - REPLACE and LEN Functions in SQL
2.8K views · 8 months ago
Marlabs ADF Interview question - Send Email notifications using Alerts and Metrics in ADF
467 views · 8 months ago
WITH sales_data AS ( SELECT dt, sales, LAG(sales, 1, 0) OVER (ORDER BY dt) AS PrevDaySales, CASE WHEN LAG(sales, 1, 0) OVER (ORDER BY dt) = 0 THEN NULL -- To avoid division by zero ELSE (sales - LAG(sales, 1, 0) OVER (ORDER BY dt)) * 100.0 / LAG(sales, 1, 0) OVER (ORDER BY dt) END AS var_pct FROM practiceDB1.inputSales ORDER BY dt ) SELECT dt AS Business_Date, sales, COALESCE(CAST(ROUND(var_pct,0) AS STRING),'') AS `%var` FROM sales_data WHERE COALESCE(CAST(ROUND(var_pct,0) AS STRING),'') NOT LIKE '-%';
with cte as( select sname,marks, ROW_NUMBER() over ( partition by sname order by marks desc)as d_rn from students) select sname,SUM(marks)as total_marks from cte where d_rn in (1,2) group by sname;
Thank you
Using lag and case: WITH temp_table AS ( SELECT category, brand_name, LAG(category) OVER (ORDER BY category, brand_name) AS prev_category FROM snacks ) SELECT (CASE WHEN category IS NULL THEN prev_category ELSE category END) AS category, brand_name FROM temp_table;
SELECT *, STUFF(card_number, 1, 12, REPLICATE('*', 12)) AS masked_number FROM Cards;
df.filter( col('empid')==df.groupBy('empid').count().filter(col('count')==1).select('empid').collect()[0][0]).show()
You must have mistakenly ordered by empid instead of salary to get the min salary.
With cte_main as ( select *, row_number() over(partition by cust_id order by flight_id) as rn from flights), cte_source as ( select cust_id, origin from cte_main where rn=1), cte_dest as ( select cust_id, destination from cte_main where rn=2) select s.cust_id, s.origin, t.destination from cte_source s, cte_dest t where s.cust_id = t.cust_id
select *,sum(case when transaction_type='deposit' then amount else -amount end)over(order by transaction_date rows between unbounded preceding and current row)as rolling_total from transactions;
WITH CTE AS( SELECT *, ROW_NUMBER() OVER(partition by MONTH(DATE),CURRENCY_CODE ORDER BY MONTH(DATE)) DD FROM mall.exchange_rates), J AS ( SELECT *, MIN(DD) OVER(partition by MONTH(DATE),CURRENCY_CODE ORDER BY MONTH(DATE) ) MI, MAX(DD) OVER(partition by MONTH(DATE),CURRENCY_CODE ORDER BY MONTH(DATE) ) MA FROM CTE) SELECT currency_code,date,currency_exchange_rate FROM j WHERE dd=mi or dd=ma order by currency_code;
Alternate Approach --Unpivot data with the UNPIVOT function WITH unpivot_data AS ( SELECT col1, attribute, value FROM practiceDB1.tbl_maxval UNPIVOT ( value FOR attribute IN (col2, col3) ) ) SELECT col1, MAX(value) AS maxval FROM unpivot_data GROUP BY col1 ORDER BY col1;
with cte AS (select salary,count(salary) from Employee group by salary having count(salary) > 1 ) select ename from Employee where salary in (select salary from cte)
Your approach for the solution is great sir !!
with testtbl_cte as( select unnest(string_to_array(cola,',')) as new_col from testtbl), second_part as(select t1.new_col as first,t2.new_col as second from testtbl_cte t1 cross join testtbl_cte t2 where t1.new_col != t2.new_col) select * from second_part where first> second
Sir, can you tell which topics we need to prepare for the round 1 assessment?
How can we find 3 consecutive seats?
With cte as(Select * , ISNULL(LAG(ytd_sales) over(order by ytd_sales),0) as Id from sales) select month,ytd_sales, ytd_sales - ID As periodicsales from cte
With cte as(SELECT * from table_a UNION Select * from table_b), cte2 as(Select *, Rank() over (partition by empid order by salary) as rn from cte) select empid,empname,salary from cte2 where rn < 2
WITH GenderCounts AS ( SELECT SUM(CASE WHEN gender = 'Male' THEN 1 ELSE 0 END) AS MaleCount, SUM(CASE WHEN gender = 'Female' THEN 1 ELSE 0 END) AS FemaleCount, COUNT(*) AS TotalCount FROM employee ) SELECT (MaleCount * 100.0) / TotalCount AS MalePercentage, (FemaleCount * 100.0) / TotalCount AS FemalePercentage FROM GenderCounts;
WITH CTE AS( SELECT DAYNAME(ID) DN,DATE(ID) D,TIME(ID) T,EMPID FROM emp_tbl ORDER BY EMPID,ID) , J AS( SELECT *, LAG(T,1) OVER(PARTITION BY D ) win FROM CTE WHERE DN IN ('SATURDAY','SUNDAY')), K AS( SELECT EMPID,D,T,TIMESTAMPDIFF(HOUR,WIN,T) HOURS FROM J) SELECT EMPID,SUM(HOURS) H FROM K GROUP BY EMPID ORDER BY EMPID;
WITH CTE AS( SELECT SLS_DT,adddate(SLS_DT , INTERVAL +7 DAY) week FROM SLS_TBL), J AS( SELECT *, LAG(WEEK,1) OVER(ORDER BY WEEK) N FROM CTE) SELECT N FROM J WHERE SLS_DT!=N;
First we can apply distinct on id and then get the max value.
WITH cte AS( SELECT *, ROW_NUMBER() OVER(PARTITION BY sname ORDER BY marks DESC) AS rn FROM Student) SELECT sname, max(rn) FROM cte GROUP BY sname
%%sql with cte as ( select gender, count(1) as gender_count from employees_tbl group by gender ) select gender, gender_count, cast((gender_count * 100) / sum(gender_count) over() as int) as gender_percentage FROM cte;
with cte as( select *, lead(destination ,1) over(partition by cust_id order by cust_id) final_destination from flights), l as( select *, row_number() over(partition by cust_id order by cust_id) s from cte) select cust_id,origin,final_destination from l where s=1;
select col1,(length(col2)-length(replace(col2,',','')))+1 as count1 from tbl_cnt;
with cte as (select brand_name, case when brand_name='dairy milk' then 'chocolate' when brand_name='perk' then 'chocolate' when brand_name='eclair' then 'chocolate' when brand_name='good day' then 'biscuits' when brand_name='boost' then 'biscuits' else category end category from brands) select category,brand_name from cte;
Is this a joke???🥲🥲🥲
select sname,marks, case when marks<35 then 'poor' when marks>=35 and marks<=60 then 'good' when marks>60 and marks <=80 then 'vgood' else 'excellent' end as grade from st;
select * from happiness_tbl ORDER BY CASE country WHEN 'India' THEN 0 WHEN 'Sri Lanka' THEN 1 ELSE 2 END ;
( with a as( select *, row_number() over(order by ranking desc) r from happiness_tbl limit 2) select ranking,country from a order by ranking) union select * from happiness_tbl ORDER BY CASE country WHEN 'India' THEN 1 WHEN 'Sri Lanka' THEN 2 ELSE 3 END ;
with cte as( select *, lag(ytd_sales,1) over(order by monthnum) d from sales), j as( select month, ytd_sales,(ytd_sales-d) as periodic from cte order by monthnum) select month,ytd_sales, case when periodic is null then ytd_sales else periodic end as periodicsales from j;
with cte as ( select * from tablea union all select * from tableb), f as( select *, row_number() over(partition by empname order by empid) as d from cte) select empid,empname,salary from f where d=1;
WITH CTE AS( SELECT *, COUNT(*) OVER (PARTITION BY ID ORDER BY ID ) F FROM S) SELECT MAX(ID) MAXIMUM_ID FROM CTE WHERE F=1;
select max(emp_id) maxemp from (select emp_id, count(emp_id) as cnt from em group by emp_id having count(emp_id)=1) t
THIS CAN BE A DYNAMIC APPROACH TOO with cte as (select *,row_number() over(order by seat_id) as rn, seat_id - row_number() over(order by seat_id) as diff from cinema_tbl where free<>0), t2 as ( select *, count(diff) over(partition by diff) as cnt from cte) select seat_id from t2 where cnt>1
select top 1(id) from employee group by id having count(id)=1 order by id desc
Select card_number, lpad(substr(card_number,-4), length(card_number), '*') as new_card_number from cards
By using a case statement: Select case when to_char(trunc(sysdate,'mm'),'d') = 4 then trunc(sysdate,'mm')+7 else next_day(next_day(trunc(sysdate, 'mm'),'wed'),'wed') end as second_wednesday from dual;
Select top 1 id from employees Group by id having count(*) = 1 order by id desc
with cte as (select *,rank() over(partition by dept_id order by salary desc) as rnk from emps_tbl) select dept_id, max(case when rnk =1 then emp_name end) as max_sal_emp, min(case when rnk >1 then emp_name end) as min_sal_emp from cte where rnk<3 group by 1
with cte as (select country from happiness_tbl where ranking>100 order by ranking), cte2 as ( select * from cte union all select country from happiness_tbl where ranking<100), cte3 as( select *,row_number() over() as rn from cte2) select case when rn= 2 then lag(country) over() when rn =1 then lead(country) over() else country end as flag from cte3
Left outer join
with CTE as ( select id , count(*) as count_ from employees group by id having count(*)!>1) select max(id) as max_id from CTE;
with cte1 as( select *, ROW_NUMBER() over(partition by sname order by marks desc) as rnum from students) select cte1.sname,sum(marks) as total_marks from cte1 where cte1.rnum<=2 group by cte1.sname;
;with cte_val as( select concat_ws('_',currency_code,year(date),month(date)) as currency_code, min(currency_exchange_rate) over(partition by currency_code,date ) as current_start, max(currency_exchange_rate) over(partition by currency_code,date ) as currency_end from exchange_rates) select currency_code,min(current_start) as start_val,max(currency_end) as end_val from cte_val --where currency_code like 'USD%' group by currency_code
With cte as ( Select Dateadd(week,1,Sls_dt) as sls_dt from sls_tbl ) Select * from cte Where sls_dt not in (Select sls_dt from sls_tbl) and sls_dt < (Select Max(sls_dt) from sls_tbl)
WITH DeptMaxScores AS ( SELECT dept, MAX(scores) AS max_score FROM empdept_tbl GROUP BY dept ) UPDATE e SET scores = d.max_score FROM empdept_tbl e JOIN DeptMaxScores d ON e.dept = d.dept;
with cte as( select *, row_number() over(partition by id order by id) rn from employees ) select id from cte group by 1 having count(*) =1 order by 1 desc limit 1 ;
select e1.ename from employees_tbl e1 right join employees_tbl e2 on e1.managerid = e2.empid where e1.salary > e2.salary
select * from tablea UNION select * from tableb where empid!=2