Best video.. it works! Most setups run into version-mismatch issues; the way it walked through the Maven Spark repository guidance was amazing.. thanks a lot
Thanks for your support 😊
The only tutorial which actually worked for me on Mac... Thanks a lot...
Thanks a lot bro, that heap size error was going to kill me! Saved lots of time.
Thanks for your support :)
Very good tutorial for setting up Spark in IntelliJ
Hey Azar, good tutorial, this is exactly what I am looking for. Thanks.
Thank you so much, Sir.. Great help, sir... You've done a poor man a big favor
Thanks for all your support 😊
For me, the "Run" option is not appearing. Stuck at this point.
Thank you very much, that was very helpful... well explained
Thanks a lot Sir! You explained it the best way possible!!!
Such a neat explanation, Thanks sir
You are most welcome
You are amazing. This video helped me so much.
Thanks for your support 🙂
How do I write a template for the main method?
Is there any shortcut key, or do we have to define it somewhere?
Great tutorial.....
I would really like a tutorial to make one jar in intellij with sbt and run it in cluster mode on VM machines.
I tried many forums and channels, but making it with sbt is quite tricky, especially with sbt 1.3.10.
If you can make a video on that, it would be a big help.
Thanks in advance!!!
I didn't have a problem setting up a SparkContext in the IntelliJ IDEA IDE, but I have a problem setting up a SparkSession and Spark SQL...
Can you help, please?
Hope you followed the same steps as in the video.. Can you send me the error message and what you are trying to achieve? Will try to help you.
@@AzarudeenShahul the error is "value read is not a member of org.apache.spark.SparkContext"
and I made sure to put these dependencies in build.sbt:
libraryDependencies += "org.apache.spark" %% "spark-core" % "2.4.8"
libraryDependencies += "org.apache.spark" %% "spark-sql" % "2.4.8"
The Scala version is "2.11.12"....
Thanks in advance
Please provide the code showing how you are initiating the Spark session.
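For reference, .read is defined on SparkSession rather than SparkContext, so that error is expected with a plain SparkContext. A minimal sketch of initiating a SparkSession with the spark-sql dependency listed above (the object name and file path are placeholders, not from the video):

import org.apache.spark.sql.SparkSession

object SparkSessionDemo {
  def main(args: Array[String]): Unit = {
    // Build a local SparkSession; needs spark-sql on the classpath (already in the build.sbt above)
    val spark = SparkSession.builder()
      .master("local[*]")
      .appName("SparkSessionDemo")
      .getOrCreate()

    // .read is a member of SparkSession, not SparkContext
    val df = spark.read.option("header", "true").csv("path/to/file.csv") // placeholder path
    df.show()

    spark.stop()
  }
}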
Hi,
Do you have videos on creating a jar file and deploying it onto a Spark cluster?
Hi, I am getting an exception in thread "main": could not parse "local".
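If this is Spark's "Could not parse Master URL" error, it usually means the master string passed to setMaster has a typo. A minimal sketch of the accepted local forms (not tied to the original code):

import org.apache.spark.{SparkConf, SparkContext}

// "local", "local[2]" and "local[*]" all parse; strings like "local[*" or "locall" do not
val conf = new SparkConf().setMaster("local[*]").setAppName("parse-demo")
val sc = new SparkContext(conf)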
Could you please share one with Maven?
Could you please create a Spark Scala series for beginners to get started with? TIA.
Sure, in this series we will try to cover all Spark Scala topics with demos..
Please do share with your friends :)
@@AzarudeenShahul One more thing: please also arrange the videos into playlists by topic. That makes things easier.
@Azarudeen Shahul
Sir, your tutorial is really helpful.. It helps with how to deal with complex scenarios..
I enjoy your tutorials.. Waiting for more..!
Worked great.. thank you!
How can I resolve this error: "Error: Could not find or load main class Dsbt.gigahorse=false"?
I am getting the below error while running the firstdemo project:
"org.apache.spark.SparkContext.type does not take parameters"
for the line: val sc = SparkContext(conf)
I hope you figured it out; if not, you should try *val sc = new SparkContext(conf)*
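For anyone hitting the same compile error: SparkContext's companion object has no apply(conf) factory, which is why "SparkContext.type does not take parameters" shows up; either construct it with new or use getOrCreate. A minimal sketch (the object name is made up):

import org.apache.spark.{SparkConf, SparkContext}

object FirstDemoFix {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setMaster("local[*]").setAppName("firstdemo")

    // Either construct the context directly...
    val sc = new SparkContext(conf)
    // ...or reuse/create one via the companion helper:
    // val sc = SparkContext.getOrCreate(conf)

    println(sc.parallelize(Array(10, 20, 30)).reduce(_ + _))
    sc.stop()
  }
}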
I am getting an "unknown artifact" error while adding dependencies. Can anyone please help me with this?
Can you please share a screenshot of the error message along with your build.sbt to my mail?
Hi Azar.... could you please make a video on sbt: how to create a Spark jar and run it on a cluster?
Unrecognized VM option 'CMSClassUnloadingEnabled'
Error: Could not create the Java Virtual Machine.
Error: A fatal exception has occurred. Program will exit.
How do I solve this error?
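In case it helps: this error usually means a newer JDK is being launched with the old CMS garbage-collector flag, which recent JDKs no longer accept. A sketch of the usual fix, assuming the flag comes from IntelliJ's sbt VM parameters (Build Tools > sbt > VM parameters) or a .jvmopts file in the project root: remove -XX:+CMSClassUnloadingEnabled (and any other CMS options) and keep only flags the JDK supports, e.g. change

-Xmx2048M -XX:+CMSClassUnloadingEnabled

to

-Xmx2048M

Alternatively, run the project on JDK 8 or 11, where the CMS options are still recognized.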
Can anyone help me sort out the below error?
Could not locate executable null\bin\winutils.exe in the Hadoop binaries.
Are you trying to set up Spark in the IntelliJ IDE?
@@AzarudeenShahul Yes... After installing, when I try to execute the Spark program in IntelliJ, it shows me the above errors!
I have also tried setting the environment variables, but I am unable to fix it!!
Unable to locate Hadoop binaries; please try the steps below:
--> Download winutils.exe from public-repo-1.hortonworks.com/hdp-win-alpha/winutils.exe.
--> Set up your HADOOP_HOME environment variable at the OS level, or set it programmatically using the command below:
System.setProperty("hadoop.home.dir", "full path to the folder with winutils");
Let me know if it works or not.
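For context, a minimal sketch of where that call typically goes; the path below is a placeholder (Hadoop looks for winutils.exe under a bin subfolder of hadoop.home.dir), and it must run before the SparkContext is created:

import org.apache.spark.{SparkConf, SparkContext}

object WinutilsDemo {
  def main(args: Array[String]): Unit = {
    // Placeholder path: point hadoop.home.dir at the folder that has bin\winutils.exe inside it
    System.setProperty("hadoop.home.dir", "C:\\hadoop")

    val conf = new SparkConf().setMaster("local[*]").setAppName("WinutilsDemo")
    val sc = new SparkContext(conf)
    println(sc.parallelize(Array(1, 2, 3)).reduce(_ + _))
    sc.stop()
  }
}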
I am getting a "Sync failed" error..
Thank you so much bro❤️
Thanks a lot. That was a great video and guidance. After a long search, I could locate this source for my first IntelliJ IDE Scala program.
I encountered an error on SparkContext and changed the program a bit (as below), and got the result after downgrading to Spark 2.4.7, sbt 1.4.1, Scala 2.11.12 on IntelliJ 2021.1 x64.
Code:
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.rdd.RDD

object firstdemo {
  def main(args: Array[String]): Unit = {
    //val conf = new SparkConf().setMaster("local[*]").setAppName("FirstDemo")
    //val sc = SparkContext(conf)
    val sc = new SparkContext("local[*]", appName = "FirstDemo")
    val rdd = sc.parallelize(Array(5, 10, 30))
    rdd.reduce(_ + _)
    println(rdd.reduce(_ + _))
  }
}
Please help here; I did exactly the same as mentioned in the video, but am getting this error:
Error:(2, 8) ScalaDe is already defined as object ScalaDe
object ScalaDe {
Code:
import org.apache.spark.{SparkConf, SparkContext}

object ScalaDe {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setMaster("local[*]").setAppName("scalade")
    val sc = new SparkContext(conf)
    val rdd = sc.parallelize(Array(10, 20, 30))
    println(rdd.reduce(_ + _))
  }
}
I subscribed today.