My gut feeling says you haven't worked with any other language if you believe that. Which is completely fine, but everyone I've talked to (who worked with languages beyond Java) says it's verbose. I also agree with it being structured, but I don't like it being enforced. At least not for personal projects.
@@PutsOnSneakers > python > disgusting Pick one, lol. Python may be stupid SLOW, but the syntax is actually really nice compared to a bunch of other languages.
One of the best aspects of C# / Java is not having to worry about managing the memory. The programmer can focus on the task at hand as opposed to worrying about side issues arising from his code.
@@anon8510 which it ... absolutely never is. Managing some sort of allocation system in a cluster of dozens or hundreds of microservices has got nothing in common with "managing memory".
At the expense of performance. Try processing an image in JAVA and then in C++ and see what the speed difference is. JAVA is unacceptable for this type of operation, unless it is calling C++ image processing operations.
@@dionysus2006 then that's a use case for C++ and not Java for that project. Not everything has to focus 100% on performance. No memory management, easier debugging, easier maintainability, and faster development time (thus cheaper in cost) are all attractive reasons to choose Java, and that's coming from someone whose primary work is writing C++.
@@dionysus2006 The speed difference is just due to lack of vectorized operations in Java, which will be fixed soon. Then Java will be just as fast as C++.
Rapid cross-platform compatibility ("write once, run anywhere") was a big selling point for Java in the '90s. It still works great for this, so long as your app uses standard hardware.
@@losthighway4840 Standardizing the ByteCode layer gives cohesion to all the Java tool developers working above this layer....i.e. they can design Java source-code libraries that will work in any Java Standard Edition hardware environment, no matter what platform developed on. It gives the JRE developer a smaller, well-constrained job. And it gives the Java compiler developer a smaller, well-constrained job. It is an early modularization strategy supporting complex software architecture.
@@dataflowgeometry how is it different from language standardization? I see no difference from a C++ developer who follows the language specification. You can target different hardware without issue if you follow the spec. Same with tool developers. The compilation to the machine is performed by the developer instead of JIT from the byte code. Ironically, if you're doing anything that is hardware-specific in the JVM, you need to use JNI. I see no practical advantage to the architecture.
@@losthighway4840 c++ has the issue that doing anything with a gui, multithreading, and more will force use of an operating system specific library. You then need a different version of the program for each os
I enjoyed the discussion. It is exactly why I fell in love with Objective-C. They formalized the memory management so that you don't have memory leaks, all while not needing a garbage collector. I think Obj-C was a hidden gem of history that would have been lost if not for Steve Jobs.
@@christianitis Exactly, the concept is "if there is something wrong, we need to know and to know quickly". In Java development there is a methodology to anticipate and manage the "Exceptions".
@@alexandre8869 With that logic C++ did just fine. The same methodology exists in every language. Also, Java is one of the most "insecure" languages, and has been since day 1.
Bjarne Stroustrup and Gosling both invented a programming language. And both are utter garbage. So what are we talking about? Inventing programming languages doesn't make you a good programmer; doing real programming/shipping quality products does.
I love James Gosling, but I am not that fond of Java - I equally love Anders Hejlsberg - you should have him on as well! Because he has created not only one of the world's most popular languages.... and it would be cool if you brought Brendan Eich on also....
Brendan Eich has recently been coming out with opinions as problematic as his language. That may detract from his no doubt fantastic war stories about the early days of JS.
Anders is amazing. From Borland Pascal to Delphi to C# to TypeScript, I am always impressed with his work. I know C# started as a Java rip-off, by an evil embrace-and-extend company, so it didn't get the same love as Java. But the company has long since come around, and C# the language has grown far beyond Java and continues to evolve toward simpler ways to express things, keeping its type safety but losing its verbosity. C# is terrific. When I think of 'null' though, and where stuff like Rust is headed, it's clear that newer paradigms will emerge. But C# is trying, with non-nullable objects (obj vs obj?), to undo the billion dollar mistake in ways Java and C++ never will.
No pointer bugs, but instead you get NullPointerExceptions. You don't get memory leaks, but instead you have to worry about GC pressure and GC spikes. The problems don't go away, you still have them, you just have less control over them. And with modern C++ those problems take up a very small amount of development time anyways.
It's true that Java comes with a different set of problems but they are not nearly as painful to debug as memory bugs in C++. I don't like Java but I'd certainly trade debugging NullPointerException for debugging use-after-free if I could. And this mantra about modern C++ being free of those problems is sort of an illusion. I work in C++ full time and we have to deal with these sorts of bugs basically all the time. It's on someone's plate literally every week and takes days to find and fix. I really don't understand where this belief comes from, because memory safety is really not about having a destructor run at the end of the scope, and that's about how much C++ helps you with memory.
@@panstromek C++ is flawed, especially in terms of memory management. I wish they could change the language and offer better and guaranteed solutions for the memory leaks in C++. But it also lets you do whatever you like to do. Freedom is all over the place in C++. And that's why I like it the most.
Java is an interesting language for me. Most of my work is in C/C++ because it's required, but sometimes you want something that hides system complexity so you can add problem complexity. For my use case that's Python, Matlab or Maple. But I see why some want to use Java or Javascript.
I spent a couple decades supporting both C-based and Java-based production applications, and at that time I wouldn't have said Java is any more "reliable". I spent just as much time investigating how to re-write / reconfigure Java apps so they don't exhaust the heap or just become completely unresponsive during stop-the-world events as I did investigating segfaults.
@@user-mr3mf8lo7y Pointers means reference semantics and modern languages are shifting towards value semantics instead because it's less error prone and an easier abstraction for programmers to get their head around. Then there is the fact that C offers no method to free resources automatically once they're out of use like Python garbage collection or C++ RAII destructors, so C is rife with memory leaks on the one hand and access to prematurely freed memory on the other hand. Yes, a good programmer can hold all these ownership relations in his head instead of communicating them to the compiler so it can do the work, but it's a hard, error prone, and slow process.
@@JerehmiaBoaz I am afraid that's not entirely true. What pointer reference means is another topic. That doesn't change the reality that there are "no pointer bugs in C, there is only programmers' lack of knowledge, like anything else". C offers (freedom of) accessing the low level as much as possible (close to Assembly); obviously, that requires attention. In addition, garbage collection is still externally available. Please refer to this secure source: ruclips.net/video/Htwj4jJQeRc/видео.html
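Since the thread above touches on automatic resource cleanup (GC versus RAII), here is a minimal Java-side sketch of try-with-resources, the mechanism Java uses to release non-memory resources deterministically, independent of when the garbage collector runs; the file name is hypothetical.

```java
import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;

public class ReadFirstLine {
    public static void main(String[] args) throws IOException {
        // The reader is closed automatically when the block exits,
        // even if an exception is thrown inside it.
        try (BufferedReader reader = new BufferedReader(new FileReader("example.txt"))) {
            System.out.println(reader.readLine()); // prints the first line, or "null" if the file is empty
        } // reader.close() runs here, deterministically
    }
}
```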
To all ye who venture into the depths of the comment section, turn back, for there is nothing but darkness and embedded systems programmers. Edit: Before I start a flame war, I should mention I’m an embedded systems guy with a love of C and C++ lol. It’s okay to make jokes :-)
:) can confirm. Am embedded software engineer and I look down on your petty Java! To be serious though, Java has its place and I don't hate it, but I personally don't want to use it. I like pointers, I think they're very useful. Memory allocation is very handy as well. But if you don't want to deal with all that, Java is perfectly fine.
There are things in C++ that are pretty bad, and however you may feel about this, C++17 did "fix" a lot of issues. I'm very happy that smart pointers are actually useful now. I'm glad I don't have to use boost libraries anymore. Hilariously, because of all of the overhead used to make smart pointers work, if you're allocating an array you're only really saving 17 bytes of memory as compared to just using a vector; it also depends on the data type and your situation. If you're really pressed for memory you can still use new and delete in your constructors and destructors. It might sound like overkill, but when you only have 720k on a device that needs to sip power it adds up. Also, C and C++ can utilize functions that are defined in assembly, which is very helpful. Admittedly I'm terrible at assembly, at least for now, but calling assembly-defined functions in C or C++ really helps with human readability.
Wasn't a big reason the Java virtual machine? You could run Java on any computer as long as it had the JVM and runtime. In C and C++ you have to compile for your specific processor.
I learned java when it first came out and really enjoyed programming in that language. But it soon became a very bloated language that took away its creativity in exchange for efficiency. I worked in a biotech company's building across the parking lot of Sun when Sun occupied a single building in Mountain View. Those were the good old days, lol.
Funny thing is there's NOTHING efficient about OOP's forced pointer-chasing. It works against the machine you're programming in every conceivable way. Memory layout should be your first and primary concern when programming, everything else is just algorithms to process and transform that data. You cannot write an efficient program without explicit memory management. Period.
People always complain about the verbosity and bloat that Java has. But when they program large systems in non OO languages, they realise the power of structure. Would I write an AWS Lambda in Java? Not really. Would I write a semi-complex microservice that has a decent amount of code in it in Java/Spring? Yes I would. I find the reasons to use plain Java nowadays are disappearing with the rise of Rust/Go, even JS. More often than not, if you want to use Java, you probably don't; it's probably Spring you want to use. I've not used plain Java in years. But when I need to write a service that has a bunch of integration with other systems and needs a lot of the fluff around it (audit, logging, tracing, circuit breaking, etc etc), I'd use Spring/Java. People often turn this into a war of "Java/Spring is bad compared to (insert some uncommon but 4Chan /g/ approved low level language that is complex but they love it because it makes them feel smart that they can program in it)", but often they forget the last crucial bit, which is support. Spring is far more maintained than the Rust framework they are shilling for. And when it comes down to it, hiring Spring Java devs is a tonne easier than hiring Rust/Go devs. I come from a C, C++ background, learned the effectiveness Java/Spring has, and also love writing things in Rust and Go, and front-ends in JS. But I understand the scenarios for each. People need to avoid the "only tool is a hammer, all problems look like nails" attitude.
On this, Kotlin offers Java interop. You also gain null safety. Not the true safety of Rust, since null still exists in Kotlin. But Kotlin is really a modern Java, if Java wasn’t tied to so much legacy.
Virtual threads are also stable in Kotlin (coroutines) that are similar to Golang goroutines. So if you choose Spring over Go and other alternatives, it is worth considering Kotlin JVM
@@RogueTravel I like Kotlin, but again, it comes down to support. Hiring a Kotlin dev isn't easy, not if you're doing non-Android stuff. We had a guy who wrote a bot internally in Kotlin and did a bunch of Spring stuff with it. It worked fine, I can read it and use it and maintain it, but nobody else could unless they wanted to learn Kotlin, which they didn't because their jobs were maintaining a bunch of Java/Spring services. So again, for me it comes back to the situation of: sure, there are great languages out there better than Java in many ways, but is it worth looking into those when your workforce is getting on fine with Java/Spring? Unless it's true legacy like COBOL or Perl, there's no need to rewrite a Java/Spring service if it doesn't give you any problems and people can still upgrade it and maintain it easily.
@@masonmunkey6136 It is most of the time. I'm still baffled when people take a completely conclusive stance on one language and recommend adopting their preferred language which is obscure and not well known.
For people complaining about NullPointerExceptions in Java and saying how it's not an improvement over cpp's pointer bugs, let me explain. The NPE and others are exactly that, something to manage, debug, control and eventually eliminate invalid pointers. It's not a pointer bug, it's a pointer feature, where the concept of a null pointer, or in general a null value, now became a useful tool in programming/debugging itself. Rather than silently going on and allowing unpredictable behavior, the JVM will "fail quickly" and manageably if it encounters an invalid pointer, but only if it's used in the wrong way.
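To make that "fail quickly" point concrete, here is a minimal sketch (class and field names are made up); on recent JDKs with helpful NullPointerExceptions (JDK 14 and later), the message even states which reference in the chain was null.

```java
public class FailFast {
    static class Account { String owner; }   // owner is never assigned, so it stays null

    public static void main(String[] args) {
        Account account = new Account();
        // Throws NullPointerException immediately at this line instead of
        // silently corrupting data the way a stray C pointer write can:
        System.out.println(account.owner.toUpperCase());
    }
}
```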
He developed JAVA in Calgary as far as I remember. He worked for Sun Microsystems and they were located in the Sun Life Plaza; I remember the SE tower. They were a customer of mine. I remember going into Sun and all their computers were running Solaris. My understanding was that their goal was more to do with a programming platform that would run on any machine, Motorola, Intel, MOS, Alpha, etc. At least that's how I remember. My neighbour was a Java Programmer. He made a lot of money and eventually moved away from Calgary over to Boston.
I love Java. I learned 6502 Assembler, Basic, Pascal, 68000 Assembler, C, C++ and then Java - starting with 1.0 or possibly 1.1. Like what you're saying, Java was the first time you could just get on with what you were planning and didn't have to burden yourself with problems that other languages created. James Gosling's contribution to the computing industry will not be fully appreciated for some time to come - the fact that Microsoft just copied the idea and called it C# is the ultimate compliment. I never understood why the JVM wasn't baked into silicon - it's been around long enough to be implemented in hardware. I'm sure there's a good explanation.
@Exzavier As usual "it depends". If you want to get closer to the machine and hardware, you should learn C/C++. If that's not a big interest and portability is important then Java is best. I have never liked multiple inheritance that C++ provides. To me, single inheritance that Java provides is a better model (cleaner) and Interfaces provide any other necessary class characteristics. Java GC is a bit of a revelation if you go from C to Java - malloc no more 😃
makes a lot of sense, i would prefer using c or c++ to make a video game or some other single user application but for business things java or a jvm language like kotlin seems better
Just fixed a bug in C that was using a byte for indexing a 256-element array. The idea being that 7-10=253, so the index would always be inside the array of 256 elements. E.g.: A=Array[ByteSizedVariable-10]; Turns out the C compiler was type casting the byte to an int before indexing. So 7-10 was actually -3 for the index value.
The C compiler did what it's supposed to do: you have an operation between a byte and an int, so it will convert the byte (C does not have a byte type, it has char) to int and then do the operation (clearly stated in the standard). By the way, char does not necessarily hold values in the range 0 to 255 (char is special in that way): it can hold that, or values from -128 to 127, and it is open to the implementation whether char is signed or unsigned (the standard does not define it, which is why you have signed char and unsigned char as standard integer types; all other integer types are signed by default). By the way, if you have an operation on two chars and assign it to an int, the operations are still done on chars and then the result, which is a char (after all operations are done), is converted to int. All of that is well-defined behaviour in the standard; anyone who makes bugs like that shows he doesn't know how to write C (and probably any other programming language), the same as someone deciding to write a book without knowing all the letters.
@@mrlazda A byte is simply an unsigned char. The fix was simply to do the math operation first. ByteSizedVariable-=10; A=Array[ByteSizedVariable]; works just fine to index the 256-element array. I imagine that A=Array[((byte)(ByteSizedVariable-10))]; would force the compiler to do it properly too (might need more parentheses though). There is nothing in the K&R book about type casting it to a signed int, doing the subtraction, then using it to index. As for other programming languages, I wrote assembly code for over twenty years and knew all the opcodes for several processors by heart. There is absolutely no context to an opcode, unlike C code. I had a really hard time learning C because it is so context sensitive. Although, I do like the fact that I am over ten times as productive with it.
@@JaimeWarlock It is in K&R, maybe not exactly, but it's implied by it. There is a definition that in any math C expands to the biggest data type, so if you have a char and an int, both will be expanded to int before the operation. And there is the other part: an array index is an integer number and its size should correspond to the maximum size of a pointer variable on your platform (usually long int), so after an operation on int and int it will be expanded to long int, and the long int will be used for the index. By the way, it would also work by just casting the 10 to char (just do array[bytevariable - (char)10]; if you mix signed and unsigned variables of the same rank, they are promoted to unsigned). The whole problem is that the C compiler sees that 10 as an int and not, as would be logical for us, as a char (that is documented, as I remember). I can't tell you exactly which pages that is written on in K&R; the last time I read it was like 30 years ago (I switched mostly to C++ for the simple reason that at that time C++ allowed defining variables anywhere, not just at the beginning of a block, which after C99 is allowed in plain C too, but in early versions it was not; it worked in most compilers but was not according to the standard), and it would not be easy for me to find where I put it (it is in some box in the basement). The K&R I have is, I think, the first edition (maybe the second, I am not sure, but it was published in 1984, a translation into my language).
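For what it's worth, the same integer-promotion pitfall shows up on the Java side of this video's topic: arithmetic on a byte is done as int, so the wraparound has to be requested explicitly. A minimal sketch with made-up names:

```java
public class BytePromotion {
    public static void main(String[] args) {
        byte b = 7;
        int naive = b - 10;                    // promoted to int: result is -3
        int wrapped = (byte) (b - 10) & 0xFF;  // cast back to byte, then mask: 253
        System.out.println(naive);   // -3  (would index out of bounds)
        System.out.println(wrapped); // 253 (stays inside a 256-element array)
    }
}
```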
"So like we were interested in stuff and like we took epic roadtrips and stuff..." This is too good: Just like Java itself the creators story has way too much vague and unspecific, unnecessary boilerplate no one needs. This all makes so much sense now.
I still trust the original wise balance of the Bell Lab guys, for some reason. The fundamental flaws of software programming originate from the hardware architectures we still use. And of course the fact that good programmers don't grow on trees. No language is perfect, and Java for us proved to be a very useful language, but of course it has many shortcomings.
Reminds me of 1996, my first C/C++ job. Inherited a large program to clean up and debug in W95. Most fun issue: totally random and very infrequent bsod. Long story short. (I eventually found this)… char msg[3]="ON"; then later, buried somewhere in a function sprintf(msg, "OFF"); then ( tick tick tick tick) 🌋🐵
Should have just made C better, like: add features to confine and track pointer use, add built-in memory tools for diagnosing and detecting memory corruption, stack overruns, pointer misuse, and memory leaks. Developing Java was the "blackjack and h00kers" option. The reason we had to rely on external debugging libraries, tools (valgrind), and homegrown solutions for memory debugging was that the language lacked the right features.
Pointers have never been the problem in C; memory management has always been the problem. For most of us, having to keep track of memory usage was always overkill; most of the time your pointers were easily managed, and it's only when you get into address manipulation that it got weird and dangerous. Java solved both problems, but it has taken far too long for the JVM to be good, and the Java language needs a revamp.
@@thecodeninjaeu hahahaha, I'm always amused that people think that a certain person must be the authority on all things relating to X. Instead of trying (poorly) to insult me, why not tackle what I said. I've used pointers and I've done memory management; pointers were never a problem for me, but trying to keep memory managed and released at the right time was. I happily jumped to Java because I was sick of issues with memory management, but I missed the simplicity of pointers; references only brought a different set of problems, for example even now there is still debate about whether we are passing by value.
I remember the first iterations of Java that had horrible garbage collection, until the language developers copied how the Lisp language implemented the two kinds of garbage collection (static vs. dynamic). It's a shame Java didn't also copy multiple class inheritance like other languages putting the burden on developers to literally build up their own multi-class hierarchy. Soapbox statement: Lisp language has the best multi-class order dependent inheritance model of any language (even better than C++ inheritance model that isn't inheritance order dependent).
Multiple inheritance was and remains a bad idea... Inheritance should be used sparingly to start with, let alone using it to glue random unrelated classes together.
@@davidlloyd1526 Totally disagree, from my own experience of developing a graphical planning and scheduling system co-written in Lisp/CLIM back in the 80s: it took only 32k lines on a Lisp machine, was ported within a few months over to Sun workstations, and was easy to maintain and adapt for different regimes of activity planning with resource and temporal constraints (spacecraft/DSN scheduling, rovers - it was used at Jet Propulsion Laboratory for over a decade on a variety of projects). When they converted it over to a Java/C combination, the result was over 300k lines of code and it took a lot of maintenance to nearly replicate the functionality.
Java made a good point of write once and run anywhere. Where the operating system is Windows, Linux, Mac OS, Solaris etc., Java was easy to learn and master and it became so popular that it became a phenomenon!!!👍👍👍👍👍👍.
No they don't. C++ doesn't guarantee that smart pointers are what they promise to be, i.e. you can have many pointers to something that's supposed to be a unique_ptr, you can have dangling smart pointers etc. Smart pointers cannot substitute for garbage collection. Not to mention that they are much slower since Java's GC has a faster allocator and doesn't have to spend time churning the reference counts.
I've written a lot of C++ code, and Delphi before that. As long as you have policies in your mind, as long as you have a clear definition of ownership, you do not need a GC to write stable code. This however does not hold when you work in a big team with various experience levels. Java is this: a language designed for the needs of enterprises with a lot of employees.
Underrated comment here. The language is never wrong. Every language (and most stack technologies) provides a means of shooting yourself in the foot, but it never forces you to aim there.
If the language leaves room for errors by devs, it is a design flaw, not just the devs messing up. C++ has been around for decades and these pointer bugs are still there. You have to be blind to still assume that it's just the devs.
@@metallicafan5447 Blaming a language for bad code is like blaming a gun for murder. You might just be a shit programmer. At least consider the possibility.
@@VoteForBukele Well, the gun has a lot of spikes on it, so you might hurt yourself unless you are an expert at C++. And as far as I am concerned, there are about 10 C++ experts in the world. Rust, for example, gives much more rigorous feedback on potential bugs than typical C++.
As a current python programmer and former java programmer, I do kinda miss the access modifiers. I know you can mangle names and such and I don't have a technical reason, I just don't like looking at the underscores. Purely preferential. I'm also not a very experienced **anything** programmer, so there's probably something I'm missing.
No programming language will correct a bad design. As a CIO for decades, it's amazing to speak to a programmer about a problem and watch them run to their desk and begin to code. Then they come to me with their work and I toss it right in the garbage. I ask them "where is your design?" I get a blank stare. I would ask "does a carpenter build a house w/o a design? Why should you code something w/o a design? Go back to your desk and bring back a flowchart, then I'll take a look." I will NEVER hire a programmer that can't design. Good documentation can't be beat!
"Literal roadtrips: It's like, get on an airplane, go to Japan..." I don't know if he's having difficulty with the word "literal" or with the word "roadtrip", but that's not a literal roadtrip.
Many folks learned the concept of an interface in school, but they typically don't apply it in their code. When trying to explain an interface, I like to use a tv remote analogy. For a user, the tv remote is the 'interface' to the tv. It has a fixed set of functionality that needs to be tested. If I am programming the tv side of things and know users are only ever going to use the tv remote (hey, it's my analogy so I get to make the rules), then I have a controlled way for the user to interact, which drastically reduces what I have to test and maintain. If however, the user 'reaches around' and starts messing with the tv without the remote, now I have to test and maintain all that additional functionality, plus have to try to proof against unintended usage. Using interfaces has another huge benefit - that is, the complete implementation behind the interface can be changed, and as long as the interface signature remains unchanged, the interface user has no idea anything has changed. If programmers allow the reach around into the implementation then it's much harder to change backend code without breakage.
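As a rough illustration of the remote analogy above, here is a minimal Java sketch (all names hypothetical) where callers only ever see the interface, so the implementation behind it can change freely:

```java
interface Remote {
    void powerToggle();
    void setChannel(int channel);
}

class Television implements Remote {
    private boolean on = false;
    private int channel = 1;

    @Override
    public void powerToggle() { on = !on; }

    @Override
    public void setChannel(int channel) {
        if (on) {
            this.channel = channel;  // validation and state live behind the interface
        }
    }
}

public class LivingRoom {
    public static void main(String[] args) {
        Remote remote = new Television();  // callers hold only the interface type
        remote.powerToggle();
        remote.setChannel(7);              // no "reach around" into Television internals
    }
}
```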
And there's a limitation to your example. Nobody should be writing an interface for one TV remote! It's called an interface: in common parlance, people expect an interface to be used by 10, 100, 1000 consumers. Nothing is really more pointless than seeing interface code for one TV remote.
@@AirborneTrojan I am unaware of any restrictions that require a minimum number of callers before it's ok to use an interface. Perhaps this interpretation is why interfaces aren't used as much as they should be?
@cncsphere, my point is not about technical feasibility, but rather the craft of coding. Ultimately, developers read code written by other developers. It’s a bit over-wrought to see an interface created for one method for one implementation. If you know you’re only making one “tv remote”, is it really worth the effort? Maybe after some time you come back to this code and it becomes more clear that additional interface items and implementation warrant this.
Great - except ATMs in 1988 were FASTER than they are now. All the examples I've seen with JAVA are 'stupid and superficial'; if you use JAVA for a REAL elevator - it will get stuck. To this day ppl hand me a laptop and say the RUNTIME is crashing or super slow.
@@iswm Yes, perhaps that's so. I have used C/C++ and Java (currently) for many years, so I have had to do all the memory management stuff related to C/C++; whilst this is a necessity in C/C++, it's nonetheless an inconvenience at best and a real pain at worst to track down the memory related bugs. Having used Java for around 15 years, I would honestly say I don't miss or need the extra memory management requirements of C/C++. Java did start off as a slow and slightly limited language subset but has steadily grown to be a stable and mature general purpose language, which I have used in many different software projects. Yes, the compiled languages will have the edge in speed and performance, but I think that advantage has steadily become less of an issue.
I never understood James Gosling interviews. I know his background: MIT AI lab, Lisp machines, implementing x86 lisp machine emulator to run Lisp text editing macros, that would later become Emacs, then getting hired by Sun as a virtual machine expert. Yet in the interview he rarely if ever mentions this and always pushes a dumbed-down narrative that would appeal to beginner programmers. Maybe it's good from the marketing perspective, but it feels very dishonest.
He's an academic. They don't understand what real-world programmers do. He made Java as a result of his own incompetence and thinks everyone else needs the same hand-holding, completely oblivious to the wants and needs of actual real-world programmers doing actual serious work.
"No system of energy can produce sum useful energy in excess of the total energy put into constructing it. This universal truth applies to all systems. Energy, like time, flows from past to future".
The problem is not the pointers, but the "programmers" not using them correctly 🙄 Even having smartpointers is of no use if they are the only "smart ones" around...
I think he means things like having a pointer x to a struct with some variable y (and nothing else). Now, x.y will be the same as *x on every sane compiler. Now add another variable, say a. Now, *x can be x.a or x.y, depending on the compiler.
@@davy360 I have never tried to dereference a pointer to a structure in the way you mention (only because it is illogical). Does GCC allow that? I hope it warns, at least. Otherwise, it would be another example of C ambiguity.
He is a legend in the field of Computer Science. Java is still the best programming language there is. It's so amazing to hear the perspectives of how they went about developing Java as a language. In any large scale organisation Java works a lot better than Python, even if you have to write a few extra lines of code, because you have a really organised and structured way of doing things. Java beats Python any day.
@@schmetterling4477 Disagree. Most of the IDEs come with auto complete and suggestions features. It's not too much typing at the end of the day. 99% of all developers code on IDE and not on a Text editor.
@@suryac850 I have coded in Java exactly once. It was the integration of a complex simulation tool into Eclipse. I even got it to work halfway in like three weeks. That was one time too many, though. ;-)
Java is ok. Not a silver bullet and not my first choice. It's having to use Maven that made me sour completely on it. Yes, I know that's optional but only if you have control of the project which I didn't
Jaw-dropping 😮 moment really when, at 16:45, he gives the example of a piece of banking software called an account reconciliation system, because that's exactly what I face and work on every day, and that's exactly the issue that I see; hence we are trying to move to Java or another language.
I have no idea where he was going with his road trips to random companies stories and the problems in computing and how that led to Java. The interviewer cut him off and directed him to talk about pointers in C.
Been programming in C for 3 decades and am happy for it; I manage memory and prevent dumb sh!t with pointers by being a diligent and disciplined programmer. I think that mindset is what's missing today.
that's absolutely true! i wanna be a programmer and don't want to touch the hard stuff, memory management. it's like working in a car repair shop, but not wanting to get your hands dirty…
@@jolterolidude2115 Honestly? I don't blame you one bit. Some days, like yesterday, in fact, dealing with the bits and bytes of data packing and extraction with addresses (sometimes) unknown can veer one toward madness. Fortunately, my wife is a willing sounding board, otherwise this comment would read more like skark who and something balls do I w3ll 0r mmm... wiggle-WIGGLE, I SAID!!! 🤪
@@JoJo2251221 Didn't mean to offend. But the reality is that whatever interpreted language you're using, you're relying on the work of people like me. *Somebody* has to wrangle those pointers. You just hope that they get it right, because, if they don't, you're screwed.
@@JoJo2251221 Oh, I guess I should provide some basis for my original comment. Largely, I'm looking at webdev, more so than just the heavy reliance on interpreted languages. How many frameworks does one need? It's insane that what's good today is considered "crap" tomorrow. Much of the development anxiety that some programmers feel can be attributed to the pressure to keep up with the latest buzzwords... but it's all just a bunch of Band-Aids slapped over a core wound.
I felt Gosling spent the first 7 minutes describing things that Java never has and never will excel in. Looking at what Java is today; I can't shake the impression that the goal was (and always has been); lowering the level of difficulty for OOP by abstracting it from the underlying system (including memory management). Essentially reducing software engineering to be purely about logic and not about electronics. Not sure what all that other talk was about.
Speaking of programmers making the same mistakes that have already been solved, it still goes on today. The industry as a whole doesn't do a great job of passing down knowledge to the newer folks.
Working as an engineer for over 20 years I can not stand arrogant people who think they know better without digging into the details. There is always a good reason things have been implemented in a certain way, the devil is in the detail. We have moved from c++ to Java and I can tell you Java sucks big time when it comes to performance critical tasks. We had to use a lot of JNI tricks to bypass those issues.
I like the concept of code reusability in Java because of its object oriented nature. Inheritance combined with composition helps you reuse code through extension. For instance, in the Decorator pattern you can extend the functionality of already written code through the use of inheritance and composition. Imperative or functional languages do not give you that advantage. All the cloud applications and big data applications use Java a lot. In addition, memory handling is much safer. Of course every language has its own utility
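As a rough sketch of the Decorator idea mentioned above (all names hypothetical), composition plus a shared interface lets you add behaviour to already-written code without modifying it:

```java
interface Notifier {
    void send(String message);
}

class EmailNotifier implements Notifier {
    @Override
    public void send(String message) {
        System.out.println("Email: " + message);
    }
}

class LoggingNotifier implements Notifier {
    private final Notifier wrapped;          // composition: hold the object being decorated

    LoggingNotifier(Notifier wrapped) { this.wrapped = wrapped; }

    @Override
    public void send(String message) {
        System.out.println("LOG before send: " + message);  // added behaviour
        wrapped.send(message);                               // reused, untouched behaviour
    }
}

public class DecoratorDemo {
    public static void main(String[] args) {
        Notifier notifier = new LoggingNotifier(new EmailNotifier());
        notifier.send("Build finished");  // logs, then emails
    }
}
```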
For all the hate toward Java's ugly verbosity, it's a remarkable language that was in the right place at the right time and earned every bit of its success.
Programmers place too much value on how fast they can write buggy code. Slow down a bit, be a little more verbose for clarity, and do it right the first time. It saves time in the long run.
And it still has pointers under the hood because it’s a fundamental thing to the hardware to have them. Btw. C and C++ can be run cross platform with a library like Cosmopolitan..
And oh gee, it still didn't fix the issue of null pointer dereferences. OOP has been a 40 year waste of absolute futility and has irreparably harmed the entire software industry beyond repair.
@@iswm The problem, though, is not nullptrs; it's the constraints which aren't dealt with correctly even today. There isn't a single language that was able to fix this without throwing away freedom of design or iteration times. And yes, OOP was and is always a mess where people invented things that no one did before, like managers ..
If you are writing pointer bugs in modern c++ you don't know what you are doing. Together with static analyzers, general programming expertise and smart pointers, pointer bugs should never occur. On top of that, you can write your own allocation tracers to catch memory leaks in production. I just hate the argument that C, and in particular C++, are these unsafe languages that nobody can work with. It honestly sounds like language propaganda.
Programmers who are scared of memory shouldn't be programmers. Memory management is just as important as the cpu instructions, yet everyone wants to sweep it under the rug and pretend it's some impossibly dangerous undertaking to reason about how your data is stored. So instead they chase pointers around endlessly thrashing the cache and absolutely killing the performance of anything they try to write. Then they have the gall to argue with anyone who actually knows what a cache line is about how burning hundreds of cycles every time they do a load is somehow a good thing.
Exception in thread "main" java.lang.NullPointerException
just catch it lmao
Damn 🤣🤣
as I wrote in another comment, the latest versions give enough info about NPEs
With experience null pointers are not an issue
*Laughs in Kotlin*
I have worked with Java since 1999. It has made me quite a bit of money. I appreciate Mr. Gosling very much. Java is still a great language. I did C++ before my Java days and I was so much more productive in Java because I wasn't fighting mystery pointer issues.
Yep, instead you get to fight a mysterious JVM - that you don't have control over. At least with C++ you could find your bugs.
Very true. C++ is for real men
@@rasimbot Well I must be a real man because I still code in it....and Java and golang and JavaScript....
Me since 1996 (version 1). You had to be a good programmer to use C++ properly. The pointer issues are likely due to: 1: inexperienced programmers and, 2: efforts to push a product out before it has been thoroughly reviewed and tested. These problems still exist today. Java actually dumbs down the technical aspects of software development (e.g., pointers, sockets, file I/O) as Java encapsulates this. But you are more entangled with details associated with Spring and Hibernate, so I am not sure how much has actually been simplified.
Java became successful because the Internet really came of age (along with Java) in 1995. I saw this in the computer labs while earning my MSCS degree. The pointer improvements were secondary, as the same inexperienced programmers are still getting the NullPointerException. The advantage with Java is that an exception is immediately thrown, though that could still be at runtime. In C/C++ the application could fail silently and the failure not be felt until days after the actual error occurred. So from my perspective, it was the Internet that really made Java successful. Having a language that was server compatible and could be deployed to multiple platforms made development easier, and it was ideal for the Internet. I could run the exact same source code on my workstation and on the Unix server.
How's the weather in Bombay?
08:57 early nineties, bugs, pointers, buffer overflows, chrome
10:36 concurrency, complex data structures, storage leaks, developer velocity, mystery pointer bugs
Very informative video on why Java is the way it is. Those things that everyone complains about have solid reasons for being there.
on god!!
not really, some are just for backwards compatibility
@@iAM80tv excuse me?
@@PutsOnSneakers he's agreeing
People don't really complain about Java on these particular issues. Fundamentally, Java is great but it's its implementation that is poor.
What he described is also how C# works, but you don't see as many people complaining about C# (because the implementation is superior).
If anyone does complain about both Java and C#, it is likely they are a procedural developer or prefer to write disorganised code.
After using Javascript I began to appreciate Java more lol. However it is true that once you start using less verbose languages it is physically painful to type in Java.
Javascript was unrelated to Java in the beginning; it was called LiveScript, and it upset most of the Java team when they allowed Netscape to reuse the name. I was part of the few test places outside of Sun (because of friendship with avh), back when it was still called Oak/WebRunner. The strictness of Java is a good thing: if needed, your development tools can help relax it, while going the other way around is just not as easy.
and that's when Kotlin comes to save the day
Kotlin understands your pain and wants to show you a better way
Javascript is simple and elegant. Java is unreadable in comparison. If it's one line in Javascript, it's like 6 lines in Java.
I'm taking a Java class in college rn and it is truly painful to write to the console. Having to write System.out.printf("Hello %s!\n", name); just to print a name to the console is insane.
One of my favorite examples of someone sneaking about the back door was an Assembler sub-program (~100k lines of code in the system) where a change had been made by (using modern terms) crawling up the stack and zapping a variable that was in a completely different sub-program. A change to the higher-in-the-stack program broke the hack, but the symptom was that data was corrupted, long after the offending program had returned. I even understood why the change had been done the way it was - there had been a regulatory change, and as those occasionally had to be implemented in a short period of time, doing it the right way would have required modification of a half dozen other sub-programs, with accompanying code reviews, testing, etc. It was a clever solution. Unfortunately, no one ever went back and made the changes the right way.
None of the C or C++ code bases I've maintained was nearly as sneaky - memory corruption and other errors were, by and large, from sloppy coding, not intentional cleverness.
When I wrote my own Assembler that required any significant memory management, I wrote macros to do that - and to ensure that allocated memory was properly released.
You don't need to allocate or release memory in assembler. It's all there for the taking. If you want to write a memory manager you can, but there isn't an equivalent of malloc/calloc and delete in assembler. You don't need to reserve any memory. All assembler programs are flat - unless you write specific code to guard your memory access, or you create a memory pool manager, which means you will have the equivalent of malloc/calloc and delete functions to call within that memory manager. Furthermore, with modern operating systems, you can't go across rings. So if a program is running in ring 0 - a hardware device driver, for example - it can't be compromised by ring 3 (user land). Moreover, modern OSs have strict control over which programs can access what, so that wouldn't be possible on, say, a modern Linux distro. Finally, if you are writing code that is handling sensitive data you always have to take measures to protect that data. Don't let it sit in RAM, and, if you can, encrypt at rest and in transit.
@@BitwiseMobile That doesn’t make sense. In assembler you either need to call an operating system routine to allocate memory, or call and link in malloc. You need to get the memory from somewhere. Even if you want to do your own memory management there needs to be an initial allocation.
*Van Lepthien:* _"... there had been a regulatory change, ..."_
Government regulations. Yeah, and they cause a lot more problems than just screwing up software.
It's normal on a JAVA team to have TWITS proclaiming how great it is, run into a wall, then the 'smart' person on the team creates a PATCH, leaves the company, and more BOZOS add to the code; it breaks, then they blame the person who left when they never made something work properly in the first place. It's the perfect language to DUPE middle and upper management.
@@crabbcake You should see some of the C code that is out there. And COBOL, and Perl, and Python. It's not the language, it's certain people.
The thing I LOVE about US academic & tech culture is how modest, down to Earth and spontaneous the "big minds" are there. Look at James Gosling: he's reached "idol status" and still looks like an enthusiastic college student with a funny t-shirt. Here in Italy you would NEVER, EVER see a dude of 1/100 of his caliber behaving and dressing like that.
Because he would think he has to look distinguished and important.
Take a good look at Fridman and Gosling and tell me who is really the younger one at heart...
They're also complete idiots who are fully and totally detached from the real world work that real programmers do. Good luck finding any programmer worth their salt that actually subscribes to any of the ideas coming out of these peoples' mouths.
These people have child-like understandings of the world and unsurprisingly also present themselves as unserious and childlike in their appearance. They aren't humble -- they're suffering from arrested development.
@@iswm I partly agree but mostly humbly disagree..
It is true that a lot of academics are "working" on useless things and are very "self-referential", but it is also true that once in a while something precious does come out of research. Also.. Stroustrup is a PhD, Gosling is a PhD, Van Rossum a Master... so education can't be all that bad :)
As for being "like a child" it can be a very good thing if it does not mean immaturity but keeping your enthusiasm , energy, desire to play and creativity alive... And Gosling has always been working in the real world and Java did come out as it is for very pragmatic reasons... it is far from an academic toy.
I am really humbled by how simply these monsters present themselves, to me they are the personification of the saying "substance over appearance"
It is one of the big lessons I learnt when I compared US culture to that of my own country, so full of and plagued by very well dressed "windbags".
Fun fact, it's called Java because you get lots of time to drink coffee while you wait for the runtime to boot up
That might have been true in the past, but the JVM has evolved, and that's why it is used in most companies in France.
But we all know that C++ is faster than Java because there are fewer layers between your code and the machine.
wanna learn design patterns in less than 1 sec?
Get out of Java and learn something else
Java is the reason why Minecraft takes several minutes to load, and I have to wait several more minutes for the JIT to kick in and give me a stable framerate. Oh wait, it's not that stable because of the "stop the world" garbage collection it does every few seconds.
@@camthesaxman3387 No it's not - do you understand anything going on in Minecraft? The game is slow because it's coded terribly.
@@hilligans1 Yeah, but "industry standard" Java practices like allocating tons of temporary objects stresses the GC. You have to fight the GC by using stuff like object pools to get decent game performance out of Java.
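To make the object-pool trick mentioned above concrete, here is a minimal sketch in Java (the Particle class and pool are purely hypothetical, not anything from Minecraft): instead of allocating a fresh object every frame and leaving the old one for the collector, the code hands instances back to a pool and reuses them, so the GC sees far less short-lived garbage.

import java.util.ArrayDeque;

final class ParticlePool {
    static final class Particle { double x, y, vx, vy; }

    private final ArrayDeque<Particle> free = new ArrayDeque<>();

    Particle acquire() {
        Particle p = free.poll();          // reuse a released instance if one is available
        return (p != null) ? p : new Particle();
    }

    void release(Particle p) {
        p.x = p.y = p.vx = p.vy = 0;       // reset state before returning it to the pool
        free.push(p);
    }

    public static void main(String[] args) {
        ParticlePool pool = new ParticlePool();
        Particle p = pool.acquire();       // first call allocates
        pool.release(p);
        System.out.println(pool.acquire() == p);   // true: the same instance is reused, nothing new allocated
    }
}

The trade-off is exactly the complaint above: you are effectively doing manual memory management again, just one level higher.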
Very thankful Java was the first language I really learned.
Been waiting for this episode!
Same and one of my fav programming language.
I started with two human languages.
It's too wordy compared to C#.
@TGV Don't feel bad, I learnt Python first and then Java. After that I learnt C on my own, and it's not what people make it out to be. One can learn any language in any order; it's important to keep your concepts strong, but that's about it.
Although I only created a sudoku solver in C and not much more, so I did program in C but don't have any real experience.
@@VadimBolshakov System.out.println() // every one of us knows that it's a bad language design to have to write all of that in order to simply print something... Console.WriteLine() isn't a vast improvement, however.
Respect, How humble this guy is.
Gave birth to a language which has ruled the entire world for two decades straight and brought OOP to the mainstream, and still no other language has been able to beat it completely. Pretty sure no other language will ever rule this long again.
Simula introduced OOP in 1967. For completely object oriented solutions, people tend to choose C# over Java, so in that sense C# has beaten java lol.
@@twentyeightO1 No they don't. I started with C# and moved over to Java. Any time C# is "preferred" over Java, you can be damn sure MS had some artificial limitation or marketing in place.
I also think that Java become popular because C libraries were different in different operating systems and Java kind of normalized it and added more functionality and capability
Right - once the class file was created it could run on Linux, Windows and Mac, which was not the case with C/C++; that's why it was called platform independent.
@@frankfernandes718 Java is not cross-platform; it's meant for only one platform: the JVM. Don't believe me? Try taking a class file compiled on the desktop and running it on Android. The only cross-platform thing is the JVM, not compiled Java executables (class files).
@L Not really, in my experience. You still have to write a lot of native code to make a platform-independent application behave correctly on Windows, Linux and Mac, especially when it comes to UI behaviour.
@@normusdoar Exactly.
@L I think so. But I don't think it was invented because assembly was not portable, going by what Gosling said. Java is only as portable as a VM implementation supports, and only where a VM exists for the specific platform. C and C++ are also portable, as long as you use OS-dependent APIs conditionally and a compiler exists for the target.
I have worked with Java since 2014. And contrary to the guys who think it's heavy and verbose, I don't. I love how structured it is.
Yeah, it's the structure that keeps me from moving over to something disgusting like Python, or JavaScript's wannabe-Java variant called TypeScript.
If the only tool you have is an OOP hammer, everything looks like an object.
huh?
My gut feeling says you haven't worked with any other language if you believe that. Which is completely fine, but everyone I've talked to (that worked with languages beyond Java) say it's verbose. I also agree with it being structured, but I don't like it being enforced. At least not for personal projects.
@@PutsOnSneakers > python
> disgusting
Pick one, lol. Python may be stupid SLOW, but the syntax is actually really nice compared to a bunch of other languages.
One of the best aspects of C# / Java is not having to worry about managing the memory. The programmer can focus on the task to accomplish as opposed to worrying about the side issues from his code.
@@anon8510 which it ... absolutely never is. Managing some sort of allocation system in a cluster of dozens or hundreds of microservices has got nothing in common with "managing memory".
At the expense of performance. Try processing an image in JAVA and then in C++ and see what the speed difference is. JAVA is unacceptable for this type of operation, unless it is calling C++ image processing operations.
@@dionysus2006 then that's a use case for C++ and not Java for that project. Not everything has to focus 100% on performance. No memory management, easier debugging, easier maintainability, and faster development time (thus cheaper in cost) are all attractive reasons to choose Java, and that's coming from someone who's primary work is writing C++.
@@dionysus2006 The speed difference is just due to lack of vectorized operations in Java, which will be fixed soon. Then Java will be just as fast as C++.
@@cmxpotato Java and maintainable? Good one
Rapid cross-platform compatibility ("write once, run anywhere") was a big selling point for Java in the '90s. It still works great for this, so long as your app uses standard hardware.
Never really understood the point of this. Compilers exist for all the platforms JVMs exist for.
@@losthighway4840 Standardizing the ByteCode layer gives cohesion to all the Java tool developers working above this layer....i.e. they can design Java source-code libraries that will work in any Java Standard Edition hardware environment, no matter what platform developed on. It gives the JRE developer a smaller, well-constrained job. And it gives the Java compiler developer a smaller, well-constrained job. It is an early modularization strategy supporting complex software architecture.
@@dataflowgeometry how is it different than language standardization? I see no difference than a c++ developer who follows the language specification. You can target different hardware without issue if you follow the spec. Same with tool developers. The compilation to the machine is performed by the developer instead of JIT from the byte code. Ironically if you're doing anything that is hardware-specific in the JVM, you need to use JNI. I see no practical advantage to the architecture.
@@losthighway4840 c++ has the issue that doing anything with a gui, multithreading, and more will force use of an operating system specific library. You then need a different version of the program for each os
@@_kiyazz multithreading was added with c++11. Nobody used native guis in Java, so they’re in the same boat.
I enjoyed the discussion. It is exactly why I fell in love with Objective-C. They formalized the memory management so that you don't get memory leaks, all without needing a garbage collector. I think Obj-C was a hidden gem of history that would have been lost if not for Steve Jobs.
Very interesting interview which gives us another look at the fundamental reasons for the creation of Java: the industrial security of software.
java.lang.NullPointerException
@@christianitis Exactly, the concept is "if there is something wrong, we need to know, and to know quickly". In Java development there is a methodology to anticipate and manage the "Exceptions".
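As a rough illustration of that "fail quickly, then manage it" idea - just a generic sketch, not anyone's production code, assuming Java 11+ and a made-up config file name - Java's checked exceptions put the possible failure into the method's signature, so the caller is forced to anticipate it at compile time:

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class FailFastExample {
    // The 'throws' clause makes the possible failure part of the method's contract.
    static String readConfig(Path path) throws IOException {
        return Files.readString(path);
    }

    public static void main(String[] args) {
        try {
            System.out.println(readConfig(Path.of("app.conf")));
        } catch (IOException e) {
            // The failure surfaces immediately, with a stack trace pointing at the cause.
            System.err.println("Could not read config: " + e.getMessage());
        }
    }
}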
"Industrial security of software" Absolutely false and log4j proved that.
@@alexandre8869 With that logic C++ did just fine.
the same methodology exists in every language.
also, Java is one of the most "insecure" languages, and has been since day 1.
Comment section is full of great computer scientists who have invented 10 programming languages
Probably the stack overflow guys, only ones who would watch an episode this boring.
Bjarne Stroustrup and Gosling both invented a programming language. And both are utter garbage. So what are we talking about? Inventing programming languages doesn't make you a good programmer; doing real programming/shipping quality products does.
@@uipo1122 surprised you didn't mention brendan eich, javascript guy
@@uipo1122 What are YOU talking about?!
@@exnihilonihilfit6316 Did you forget how to read? It's in the message.
I love James Gosling, but I am not that fond of Java - I equally love Anders Hejlsberg - you should have him on as well! Because he has created not only one of the world's most popular languages.... and it would be cool if you brought Brendan Eich on also....
@Melon Husk melon husk did not approve 😂
I think it sounds great Espen
Brendan Eich has recently been coming out with opinions as problematic as his language. That may detract from his no doubt fantastic war stories about the early days of JS.
Anders is amazing: from Borland Pascal to Delphi to C# to TypeScript, I am always impressed with his work. I know C# started as a Java rip-off, by an evil embrace-and-extend company, so it didn't get the same love as Java. But the company has long since come around, and C# the language has grown far beyond Java and continues to evolve toward simpler ways to express things, keeping its type safety but losing its verbosity. C# is terrific. When I think of 'null', though, and where stuff like Rust is headed, it's clear that newer paradigms will emerge. But C# is trying, with non-nullable objects (obj vs obj?), to undo the billion dollar mistake in ways Java and C++ never will.
Until Optional came around, Java had a null pointer dereferencing problem. C++ has smart pointers now, and optional as well.
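For readers who haven't seen it, a minimal sketch of the Optional style that comment refers to (the lookup method and data here are invented for illustration): the possible absence of a value is spelled out in the return type instead of surfacing later as a NullPointerException.

import java.util.Map;
import java.util.Optional;

public class OptionalExample {
    private static final Map<String, String> USERS = Map.of("alice", "Alice Smith");

    // Returning Optional makes "might not exist" part of the signature.
    static Optional<String> findDisplayName(String login) {
        return Optional.ofNullable(USERS.get(login));
    }

    public static void main(String[] args) {
        String name = findDisplayName("bob").orElse("<unknown user>");
        System.out.println(name);   // prints <unknown user> instead of blowing up somewhere else later
    }
}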
No pointer bugs, but instead you get NullPointerExceptions. You don't get memory leaks, but instead you have to worry about GC pressure and GC spikes. The problems don't go away, you still have them, you just have less control over them. And with modern C++ those problems take up a very small amount of development time anyways.
It's true that Java comes with a different set of problems, but they are not nearly as painful to debug as memory bugs in C++. I don't like Java, but I'd certainly trade debugging NullPointerException for debugging use-after-free if I could. And this mantra about modern C++ being free of those problems is sort of an illusion. I work in C++ full time and we have to deal with these sorts of bugs basically all the time. It's on someone's plate literally every week and takes days to find and fix. I really don't understand where this belief comes from, because memory safety is really not about having a destructor run at the end of the scope, and that's about how much C++ helps you with memory.
@jeff ronald gdb knows. And ASan too.
@@panstromek C++ is flawed, especially in terms of memory management. I wish they could change the language and offer better, guaranteed solutions for memory leaks in C++. But it also lets you do whatever you like. Freedom is all over the place in CPP, and that's why I like it the most.
In the latest Java versions the NPE gives you the information needed to debug it.
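That refers to the "helpful NullPointerExceptions" added around JDK 14, where the message spells out which part of the expression was null. A made-up example of the kind of output you get (the exact wording varies by JDK version):

public class HelpfulNpe {
    static class Address { String city; }
    static class User { Address address; }

    public static void main(String[] args) {
        User u = new User();                 // u.address was never assigned
        System.out.println(u.address.city);  // throws NullPointerException here
        // Recent JDKs print something along the lines of:
        //   Cannot read field "city" because "u.address" is null
        // instead of a bare NullPointerException with no message.
    }
}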
Debugging NPEs is so much easier than pointer related bugs in c lmao
Java is an interesting language for me. Most of my work is in C/C++ because it's required, but sometimes you want something that hides system complexity so you can add problem complexity. For my use case that's Python, MATLAB or Maple. But I see why some want to use Java or JavaScript.
I may hate Java, but I have one word for what he says: inspiring.
I spent a couple decades supporting both C-based and Java-based production applications, and at that time I wouldn't have said Java is any more "reliable". I spent just as much time investigating how to rewrite / reconfigure Java apps so they don't exhaust the heap or become completely unresponsive during stop-the-world events as I did investigating segfaults.
signal 11 received - core dumped
Exactly. Gosling might rant about C++ pointer bugs in Chrome but thank god it wasn't written in Java.
@@JerehmiaBoaz Misconception. There are no pointer bugs in C. The programmers just don't know how to use pointers properly, or are careless.
@@user-mr3mf8lo7y Pointers mean reference semantics, and modern languages are shifting towards value semantics instead because it's less error-prone and an easier abstraction for programmers to get their heads around. Then there is the fact that C offers no method to free resources automatically once they're out of use, like Python's garbage collection or C++ RAII destructors, so C is rife with memory leaks on the one hand and access to prematurely freed memory on the other. Yes, a good programmer can hold all these ownership relations in his head instead of communicating them to the compiler so it can do the work, but it's a hard, error-prone, and slow process.
@@JerehmiaBoaz I am afraid that's not entirely true. What pointer reference means is another topic; it doesn't change the reality that there are "no pointer bugs in C, there is only programmers' lack of knowledge, like anything else". C offers the freedom of accessing the low level as much as possible (close to assembly); obviously, that requires attention. In addition, garbage collection is still externally available. Please refer to this source: ruclips.net/video/Htwj4jJQeRc/видео.html
13:10 "If something fails, it fails immediately and visibly..." Now that's good engineering!
To all ye who venture into the depths of the comment section, turn back, for there is nothing but darkness and embedded systems programmers.
Edit:
Before I start a flame war, I should mention I’m an embedded systems guy with a love of C and C++ lol. It’s okay to make jokes :-)
:) can confirm. Am embedded software engineer and I look down on your petty Java!
To be serious though, Java has its place and I don't hate it, but I personally don't want to use it. I like pointers; I think they're very useful. Memory allocation is very handy as well.
But if you don't want to deal with all that Java is perfectly fine.
I doubt all of them are actually embedded systems programmers. Most of them are probably cs undergrads trying to sound cool.
There are things in C++ that are pretty bad, and however you may feel about this, C++17 did "fix" a lot of issues. I'm very happy that smart pointers are actually useful now. I'm glad I don't have to use the Boost libraries anymore. Hilariously, because of all the overhead used to make smart pointers work, if you're allocating an array you're only really saving about 17 bytes of memory compared to just using a vector - though it does depend on the data type and your situation.
If you're really pressed for memory you can still use new and delete in your constructors and destructors. It might sound like overkill, but when you only have 720k on a device that needs to sip power, it adds up.
Also C and C++ can utilize functions that are defined in assembly which is very helpful. Admittedly I'm terrible at assembly, at least for now, but calling assembly defined functions in C or C++ really helps with human readability.
Wasn't a big reason because of the Java virtual machine? You could run Java on any computer as long as it had the JVM and runtime. In C and C++ you have to compile for your specific processor.
Certainly was a big point for me. We needed to run on Windows and Linux back in the 90s.
I learned java when it first came out and really enjoyed programming in that language. But it soon became a very bloated language that took away its creativity in exchange for efficiency. I worked in a biotech company's building across the parking lot of Sun when Sun occupied a single building in Mountain View. Those were the good old days, lol.
Funny thing is there's NOTHING efficient about OOP's forced pointer-chasing. It works against the machine you're programming in every conceivable way. Memory layout should be your first and primary concern when programming, everything else is just algorithms to process and transform that data. You cannot write an efficient program without explicit memory management. Period.
People always complain about the verbosity and bloat that Java has. But when they program large systems in non-OO languages, they realise the power of structure. Would I write an AWS Lambda in Java? Not really. Would I write a semi-complex microservice that has a decent amount of code in it in Java/Spring? Yes I would.
I find the reasons to use plain Java nowadays are disappearing with the rise of Rust/Go, even JS. More often than not, if you want to use Java, you probably don't - it's probably Spring you want to use. I've not used plain Java in years. But when I need to write a service that has a bunch of integration with other systems and needs a lot of the fluff around it (audit, logging, tracing, circuit breaking, etc.), I'd use Spring/Java.
People often turn this into a war of "Java/Spring is bad compared to (insert some uncommon but 4Chan /g/ approved low level language that is complex but they love it because it makes them feel smart that they can program in it)", but often, they forget the last crucial bit which is support. Spring is far more maintained than the Rust framework they are shilling over. And when it comes down to it, hiring Spring Java devs is a tonne easier than Rust/Go.
I come from a C, C++ background who learned the effectiveness Java/Spring has, who also loves writing things in Rust and Go, and front-ends in JS. But I understand the scenarios for each. People need to avoid the "only tool is a hammer, all problems look like nails" attitude.
On this, Kotlin offers Java interop. You also gain null safety. Not the true safety of Rust, since null still exists in Kotlin. But Kotlin is really a modern Java, if Java wasn’t tied to so much legacy.
Virtual threads are also stable in Kotlin (coroutines) that are similar to Golang goroutines. So if you choose Spring over Go and other alternatives, it is worth considering Kotlin JVM
@@RogueTravel I like Kotlin, but again, it comes down to support. Hiring a Kotlin dev isn't easy if you're doing non-Android stuff. We had a guy who wrote a bot internally in Kotlin and did a bunch of Spring stuff with it. It worked fine - I can read it, use it and maintain it - but nobody else could unless they wanted to learn Kotlin, which they didn't, because their jobs were maintaining a bunch of Java/Spring services.
So again, for me it comes back to the question: sure, there are great languages out there better than Java in many ways, but is it worth looking into them when your workforce is getting on fine with Java/Spring? Unless it's true legacy like COBOL or Perl, there's no need to rewrite a Java/Spring service if it doesn't give you any problems and people can still upgrade and maintain it easily.
I've only been learning programming for about 2 yrs now but the language debate feels like one big insecurity-fueled circle jerk
@@masonmunkey6136 It is most of the time. I'm still baffled when people take a completely conclusive stance on one language and recommend adopting their preferred language which is obscure and not well known.
For people complaining about NullPointerExceptions in Java and saying it's not an improvement over C++'s pointer bugs, let me explain. The NPE and its relatives are exactly that: something to manage, debug, control and eventually use to eliminate invalid pointers. It's not a pointer bug, it's a pointer feature, where the concept of a null pointer - or in general a null value - becomes a useful tool in programming and debugging itself. Rather than silently going on and allowing unpredictable behavior, the JVM will "fail quickly" and manageably if an invalid pointer is used in the wrong way.
Nice mental gymnastics. 9/10.
@@iswm Mental gymnastics? I just gave examples of use cases for exceptions; every well-designed algorithm needs to have them...
He developed Java in Calgary as far as I remember. He worked for Sun Microsystems and they were located in the Sun Life Plaza - I remember the SE tower. They were a customer of mine. I remember going into Sun and all their computers were running Solaris. My understanding was that their goal was more to do with a programming platform that would run on any machine: Motorola, Intel, MOS, Alpha, etc. At least that's how I remember it. My neighbour was a Java programmer. He made a lot of money and eventually moved away from Calgary over to Boston.
I love java. It was the first language I learned in which I could feel anything is possible.
Cringe
I love Java. I learned 6502 assembler, BASIC, Pascal, 68000 assembler, C, C++ and then Java - starting with 1.0 or possibly 1.1. Like you're saying, Java was the first time you could just get on with what you were planning and didn't have to burden yourself with problems that other languages created. James Gosling's contribution to the computing industry will not be fully appreciated for some time to come - the fact that Microsoft just copied the idea and called it C# is the ultimate compliment. I never understood why the JVM wasn't baked into silicon - it's been around long enough to be implemented in hardware. I'm sure there's a good explanation.
@Exzavier im in the 4th semester java is utter crap
@Exzavier As usual "it depends". If you want to get closer to the machine and hardware, you should learn C/C++. If that's not a big interest and portability is important then Java is best. I have never liked multiple inheritance that C++ provides. To me, single inheritance that Java provides is a better model (cleaner) and Interfaces provide any other necessary class characteristics. Java GC is a bit of a revelation if you go from C to Java - malloc no more 😃
@Exzavier hi, in my opinion, first learn c++ just because it's harder. Once you are comfortable with c++, java would feel easier.
makes a lot of sense, i would prefer using c or c++ to make a video game or some other single user application but for business things java or a jvm language like kotlin seems better
Why even bother with Java on the JVM, when Kotlin is available?
Just fixed a bug in C that was using a byte for indexing a 256 elements sized array. The idea being that 7-10=253, so the index would always be inside the array of 256 elements. E.G: A=Array[ByteSizedVariable-10]; Turns out the C compiler was type casting the byte to an int before indexing. So 7-10 was actually -3 for the index value.
The C compiler did what it's supposed to do: you have an operation between a byte-sized value and an int, so it converts the byte (C doesn't have a byte type, it has char) to int and then does the operation, as clearly stated in the standard. By the way, char doesn't necessarily hold values in the range 0 to 255 (char is special that way): it can hold that range, or -128 to 127, and it is left to the implementation whether plain char is signed or unsigned (the standard doesn't define it, which is why signed char and unsigned char exist as separate standard integer types; all other integer types are signed by default).
The same promotion happens even with an operation on two chars: both operands are promoted to int before the arithmetic is done, and the int result is then converted to whatever type you assign it to.
All of that is well-defined behaviour in the standard; anyone who makes bugs like that shows he doesn't know how to write C (and probably any other programming language). It's the same as someone deciding to write a book without knowing all the letters.
@@mrlazda A byte is simply an unsigned char. The fix was simply to do the math operation first. ByteSizedVariable-=10; A=Array[ByteSizedVariable]; works just fine to index the 256 element sized array. I imagine that A=Array[((byte)(ByteSizedVariable-10))]; would force the compiler to do it properly too (might need more parenthesis though). There is nothing in the K&R book about type casting it to a signed int, doing the subtraction, then using it to index. As for other programming languages, I wrote assembly code for over twenty years and knew all the opcodes for several processors by heart. There is absolutely no context to an opcode, unlike C code. I had a really hard time learning C because it is so context sensitive. Although, I do like the fact that I am over ten times as productive with it.
@@JaimeWarlock there is in K&R but maybe not exact but it implying from it. There is definition that in any math C expand to biggest data type, so if you have char and int both before operation will be expand to int, and there is other part to it array index is an integer number and its size should correspond to the maximum size of a pointer variable on your platform (usually long int) so after operation on int and int it will be expand ro long int and long int would be used for index. By the way it would work by just casting 10 to char ( just do array[bytevariable - (char)10], if you mix signed and unsigned variables of same rank then they are promoted to unsigned)
Whole problem is that C compiler se that 10 as int and not as for us logicly as char (that is documented as I remember).
I can't tell you exactly pages where is that written in K&R last time I read it was like 30 years ago (I switched mostly to C++ just for simple reason that in that time c++ allowed to define variables anywhere not just on beginning on block, which is after C99 allowed in plain C too but in early version was not, it worked in most compiles but was not according to standard) and now will be not easy me to find where I puted it (it is in some box in basement), K&R I have is I think first edition (maybe second I am not sure but it was published 1984 (translation to my language)
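For what it's worth, Java applies the same rule - byte arithmetic is done in int, so 7 - 10 is -3, not 253 - but it ties back to the fail-fast theme of the interview: the bad index throws on the spot instead of silently reading whatever sits three slots before the array. A small sketch (not the original C program, just the analogous situation in Java):

public class BytePromotion {
    public static void main(String[] args) {
        int[] array = new int[256];
        byte b = 7;
        int index = b - 10;               // promoted to int arithmetic: no wraparound to 253
        System.out.println(index);        // prints -3
        System.out.println(array[index]); // throws ArrayIndexOutOfBoundsException right here
    }
}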
"So like we were interested in stuff and like we took epic roadtrips and stuff..."
This is too good: Just like Java itself the creators story has way too much vague and unspecific, unnecessary boilerplate no one needs.
This all makes so much sense now.
LOL😆 Well said
now get back to your mcDonald's work.
I still trust the original wise balance of the Bell Labs guys, for some reason. The fundamental flaws of software programming originate from the hardware architectures we still use - and, of course, from the fact that good programmers don't grow on trees. No language is perfect, and Java for us proved to be a very useful language, but of course it has many shortcomings.
Reminds me of 1996, my first C/C++ job. Inherited a large program to clean up and debug in W95. Most fun issue: totally random and very infrequent bsod.
Long story short. (I eventually found this)… char msg[3]="ON"; then later, buried somewhere in a function sprintf(msg, "OFF"); then ( tick tick tick tick) 🌋🐵
Regular program cannot cause BSOD on W95. It had memory protection. Only kernel level driver could
“Literal road trips. We’d get on a plane and fly to Japan.”🤦♂️😂
Ha-ha! xD
Thank you James for coming up with Java. Pointer bugs and memory leaks with C are a pain to deal with
Should have just made C better like: add features to confine and track pointer use, add built-in memory tools for diagnosing and detecting memory corruption, stack overruns, pointer misuse, and memory leaks. Developing Java was the "blackjack and h00ker's" option.
The reason we had to rely on external debugging libraries, tools (Valgrind), and homegrown solutions for memory debugging is that the language lacked the right features.
Pointers have never been the problem in C; memory management has always been the problem. For most of us, having to keep track of memory usage was always overkill; most of the time your pointers were easily managed, and it's only when you got into address manipulation that things got weird and dangerous. Java solved both problems, but it has taken far too long for the JVM to get good, and the Java language needs a revamp.
Wow. You must know more than James Gosling. We've got a genius here, guys. psyche
@@thecodeninjaeu Hahahaha, I'm always amused that people think a certain person must be the authority on all things relating to X. Instead of trying (poorly) to insult me, why not tackle what I said? I've used pointers and I've done memory management; pointers were never a problem for me, but trying to keep memory managed and released at the right time was. I happily jumped to Java because I was sick of issues with memory management, but I missed the simplicity of pointers; references only brought a different set of problems - for example, even now there is still debate about whether we are passing by value.
I remember the first iterations of Java that had horrible garbage collection, until the language developers copied how the Lisp language implemented the two kinds of garbage collection (static vs. dynamic). It's a shame Java didn't also copy multiple class inheritance like other languages putting the burden on developers to literally build up their own multi-class hierarchy. Soapbox statement: Lisp language has the best multi-class order dependent inheritance model of any language (even better than C++ inheritance model that isn't inheritance order dependent).
lol, OO programmers are hilarious. pointer-chasing clowns.
Multiple inheritance was and remains a bad idea... Inheritance should be used sparingly to start with, let alone using it to glue random unrelated classes together.
@@davidlloyd1526 Totally disagree, from my own experience of developing a graphical planning and scheduling system co-written in Lisp/CLIM back in the 80s: it took only 32k lines on a Lisp machine, was ported within a few months to Sun workstations, and was easy to maintain and adapt for different regimes of activity planning with resource and temporal constraints (spacecraft/DSN scheduling, rovers - it was used at the Jet Propulsion Laboratory for over a decade on a variety of projects). When they converted it to a Java/C combination, the result was over 300k lines of code, and it took a lot of maintenance to nearly replicate the functionality.
but still JVM is mainly written in C++ with pointers aboard
Java made a good point of write once, run anywhere - whether the operating system is Windows, Linux, Mac OS, Solaris, etc. Java was easy to learn and master, and it became so popular that it became a phenomenon!!!👍👍👍👍👍👍
Title: "was created because of"
Gosling: "that was one of them (reasons)" 10:48
pointers in c feels like the spiderman meme
That’s why C++ introduced smart pointers. They fix most if not all memory related problems with pointers. The problem is the developer not using them.
Still java is easier than c++ and has great libraries and frameworks for web.
No they don't. C++ doesn't guarantee that smart pointers are what they promise to be, i.e. you can have many pointers to something that's supposed to be a unique_ptr, you can have dangling smart pointers etc. Smart pointers cannot substitute for garbage collection. Not to mention that they are much slower since Java's GC has a faster allocator and doesn't have to spend time churning the reference counts.
And the great part about smart pointers is that, unlike in Java, memory is freed up deterministically.
@@egorsozonov7425 if you can't love cpp only means you're not a true programmer.
I've written a lot of C++ code, and Delphi before that. As long as you have policies in mind, as long as you have a clear definition of ownership, you do not need a GC to write stable code. This, however, does not hold when you work on a big team with varying experience levels. Java is that: a language designed for the needs of enterprise, with a lot of employees.
Id recommend putting bread in a toaster instead of toast.
Hard to catch bugs are not a fault of the language. They're caused by "creative" programmers that didn't bother to learn the basics
Underrated comment here. The language is never wrong. Every language (and most stack technologies) provides a means of shooting yourself in the foot, but it never forces you to aim there.
If the language leaves room for errors by devs, it is a design flaw, not just the devs messing up.
C++ has been around for some four decades and these pointer bugs are still there.
You have to be blind to still assume that it's just the devs.
@@metallicafan5447 Blaming a language for bad code is like blaming a gun for murder. You might just be a shit programmer. At least consider the possibility.
@@VoteForBukele Well, the gun has a lot of spikes on it, and so you might hurt yourself unless you are an expert at C++. And as far as I am concerned, there are about 10 C++ experts in the world.
Rust, for example, gives much more rigorous feedback on potential bugs than typical C++.
@@Krasbin 10?? 😂 As far as I’m concerned, you need to meet more people.
I only now realize that Java was invented by Santa Claus!
As a current Python programmer and former Java programmer, I do kinda miss the access modifiers. I know you can mangle names and such, and I don't have a technical reason; I just don't like looking at the underscores. Purely preferential. I'm also not a very experienced **anything** programmer, so there's probably something I'm missing.
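For anyone weighing the two styles, a tiny sketch of what Java's access modifiers buy you (the Account class is made up): the compiler, rather than an underscore naming convention, stops outside code from touching the field.

public class Account {
    private long balanceCents;   // invisible outside this class; no leading-underscore convention needed

    public void deposit(long cents) {
        if (cents < 0) throw new IllegalArgumentException("negative deposit");
        balanceCents += cents;
    }

    public long balance() {
        return balanceCents;
    }
}

// Elsewhere:  new Account().balanceCents = -1;   // does not compile - the field is private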
If pointers are a problem in C++, why not use the high-performance STL containers?
No programming language will correct a bad design.
As a CIO for decades, it's amazing to speak to a programmer about a problem and watch them run to their desk and begin to code. Then they come to me with their work and I toss it right in the garbage. I ask them "where is your design?" and get a blank stare. I would ask, "Does a carpenter build a house without a design? Why should you code something without a design? Go back to your desk and bring back a flowchart, then I'll take a look."
I will NEVER hire a programmer that can't design. Good documentation can't be beat!
1.5 speed recommended
It actually sounds normal at that speed
Or 2x
He loses 20 years at 1.5 😂
I seen Lex's face and it looked like "oh fuck, this is gonna be another fucking boring ass interview with a mofo'er on the autistic spectrum"
"Literal roadtrips: It's like, get on an airplane, go to Japan..." I don't know if he's having difficulty with the word "literal" or with the word "roadtrip", but that's not a literal roadtrip.
Many folks learned the concept of an interface in school, but they typically don't apply it in their code.
When trying to explain an interface, I like to use a tv remote analogy. For a user, the tv remote is the 'interface' to the tv. It has a fixed set of functionality that needs to be tested. If I am programming the tv side of things and know users are only ever going to use the tv remote (hey, it's my analogy so I get to make the rules), then I have a controlled way for the user to interact, which drastically reduces what I have to test and maintain.
If however, the user 'reaches around' and starts messing with the tv without the remote, now I have to test and maintain all that additional functionality, plus have to try to proof against unintended usage.
Using interfaces has another huge benefit - that is, the complete implementation behind the interface can be changed, and as long as the interface signature remains unchanged, the interface user has no idea anything has changed. If programmers allow the reach around into the implementation then it's much harder to change backend code without breakage.
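A small sketch of that remote-control idea in code (all names hypothetical): callers are written against the interface only, so the implementation behind it can be swapped without anything on the calling side changing.

interface Remote {
    void power();
    void volumeUp();
}

// One implementation today...
class InfraredRemote implements Remote {
    public void power()    { System.out.println("IR: toggle power"); }
    public void volumeUp() { System.out.println("IR: volume +1"); }
}

// ...a different one tomorrow; the caller never notices.
class BluetoothRemote implements Remote {
    public void power()    { System.out.println("BT: toggle power"); }
    public void volumeUp() { System.out.println("BT: volume +1"); }
}

class Viewer {
    static void watchTv(Remote remote) {   // depends only on the interface
        remote.power();
        remote.volumeUp();
    }

    public static void main(String[] args) {
        watchTv(new InfraredRemote());
        watchTv(new BluetoothRemote());
    }
}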
And there's a limitation to your example. Nobody should be writing an interface for one TV remote! It's called an interface; in common parlance, people expect an interface to be used by 10, 100, 1000 consumers. There's nothing more pointless than seeing interface code for a single TV remote.
@@AirborneTrojan I am unaware of any restrictions that require a minimum number of callers before it's ok to use an interface. Perhaps this interpretation is why interfaces aren't used as much as they should be?
@cncsphere, my point is not about technical feasibility, but rather the craft of coding. Ultimately, developers read code written by other developers. It’s a bit over-wrought to see an interface created for one method for one implementation. If you know you’re only making one “tv remote”, is it really worth the effort? Maybe after some time you come back to this code and it becomes more clear that additional interface items and implementation warrant this.
The Java version of HelloWorld made me write my first and last Java program; it prints "Good Bye Java".
I think, the most comfortable in this regard today is Python ;-)
Great - except ATMs in 1988 are FASTER than now. All the examples I've seen with JAVA are 'stupid and superficial ' if you use JAVA for a REAL elevator - it will get stuck. To this day ppl hand me a laptop and say the RUNTIME is crashing or super slow.
A superbly flexible, robust and extensive language which can pretty much do everything.
Anything except letting you manage memory which is literally the most important part of computer programming.
@@iswm Yes, perhaps that's so. I have used C/C++ and Java (currently) for many years, so I have had to do all the memory management stuff related to C/C++; whilst it's a necessity in C/C++, it's nonetheless an inconvenience at best and a real pain at worst to track down the memory-related bugs. Having used Java for around 15 years, I would honestly say I don't miss or need the extra memory management requirements of C/C++. Java did start off as a slow and slightly limited language subset, but it has steadily grown into a stable and mature general-purpose language that I have used in many different software projects. Yes, the compiled languages will have the edge in speed and performance, but I think that advantage has steadily become less of an issue.
8:47 12:11 13:06 17:30
Today I learned that playback speed maxes out at 2x
I like it when he paused for a second to say "one of most popular languages" 🤣
I never understood James Gosling interviews. I know his background: MIT AI lab, Lisp machines, implementing x86 lisp machine emulator to run Lisp text editing macros, that would later become Emacs, then getting hired by Sun as a virtual machine expert. Yet in the interview he rarely if ever mentions this and always pushes a dumbed-down narrative that would appeal to beginner programmers. Maybe it's good from the marketing perspective, but it feels very dishonest.
He's an academic. They don't understand what real-world programmers do. He made Java as a result of his own incompetence and thinks everyone else needs the same hand-holding, completely oblivious to the wants and needs of actual real-world programmers doing actual serious work.
It's wild that we have a video clip of a guy that invented the programming language that I'm currently studying
Why are you telling us that you are wasting your time on the new Cobol? ;-)
"No system of energy can produce sum useful energy in excess of the total energy put into constructing it.
This universal truth applies to all systems.
Energy, like time, flows from past to future".
The problem is not the pointers, but the "programmers" not using them correctly 🙄
Even having smartpointers is of no use if they are the only "smart ones" around...
Mr. James Gosling made a great contribution to IT. I am a Java developer and I feel very grateful. Please visit Bangalore, India.
One mistake cannot justify another one.
You are on 2x, the only video on that speed
what kind of back doors or shortcuts is he referring to in C?
I think he means things like having a pointer x to a struct with some variable y (and nothing else). Dereferencing *x as a plain value will give you y on every sane compiler. Now add another variable, say a, and what *x gives you - a or y - depends on how the compiler laid out the struct.
Well in c you can have a pointer to memory. Then just start incrementing that pointer and overwrite anything you want.
@@davy360 I have never tried to dereference a pointer to a structure in the way you mention (only because it is illogical). Does GCC allow that? I hope it warns, at least. Otherwise it would be another example of C ambiguity.
@@lorenzogcapra Try out this source code in different compilers:
#include <stdio.h>

typedef struct something {
    int x;
} teststruct;

typedef struct somethingMore {
    int x;
    int y;
} teststruct2;

typedef struct somethingEvenMore {
    char a;
    int b;
    char c;
} teststruct3;

int main(void)
{
    teststruct ts;
    ts.x = 5;
    teststruct* tp = &ts;
    printf("ts.x %d\n", ts.x);
    printf("*tp %d\n", *tp); /* passes a whole struct where %d expects an int - technically undefined, most compilers warn */

    teststruct2 ts2;
    ts2.x = 1;
    ts2.y = 10;
    int* roguePointer = (int*)&ts2; /* roguePointer aliases ts2; what you read at +0 and +1 depends on the layout the compiler chose */
    printf("roguePointer     %d\n", *roguePointer);
    printf("roguePointer + 1 %d\n", *(roguePointer + 1));

    teststruct3 ts3;
    ts3.a = 11;
    ts3.b = 13;
    ts3.c = 200; /* implementation-defined if plain char is signed */
    unsigned char* justStrange = (unsigned char*)&ts3;
    for (int i = 0; i < 12; i++)
        printf("js+%d %d\n", i, *(justStrange + i)); /* walks byte by byte across the struct, padding included */

    printf("loc of ts3  : %p\n", (void*)&ts3);
    printf("loc of ts3.a: %p\n", (void*)&ts3.a);
    printf("loc of ts3.b: %p\n", (void*)&ts3.b);
    printf("loc of ts3.c: %p\n", (void*)&ts3.c);
    return 0;
}
He is a legend in the field of Computer Science. Java is still the best programming language there is. It's so amazing to hear the perspectives on how they went about developing Java as a language. In any large-scale organisation Java works a lot better than Python, even if you have to write a few extra lines of code, because you have a really organised and structured way of doing things. Java beats Python any day.
Yes, you need roughly 1000 lines of Java for every line of Python. Other than that it's a great language. ;-)
@@schmetterling4477 Disagree. Most of the IDEs come with auto complete and suggestions features. It's not too much typing at the end of the day. 99% of all developers code on IDE and not on a Text editor.
@@suryac850 You still need 1000 lines of Java for every line of Python. :-)
@@schmetterling4477 No you don't. I think you havent coded in Java
@@suryac850 I have coded in Java exactly once. It was the integration of a complex simulation tool into Eclipse. I even got it to work halfway in like three weeks. That was one time too many, though. ;-)
Java is ok. Not a silver bullet and not my first choice. It's having to use Maven that made me sour completely on it. Yes, I know that's optional but only if you have control of the project which I didn't
Java must be really free and Open Source.
I moved to Java from C++ because pointers were annoying and I always used to get confused.
The problem isn't pointers. The problem is that you are an incompetent, confused programmer. And now your code just sucks in a different language. :)
Oak may have been created because pointers are hard... but Java was created to program TV remotes.
Pointers aren't hard. Academics are just incompetent.
@@iswm Why are you telling me? Go back to 1991 and tell Gosling.
Java:
Fred: "Alright gang, let's see who this ghost really is!
FORTH:
A real jaw-dropping 😮 moment when he gives, at 16:45, the example of a piece of banking software called an account reconciliation system - because that's exactly what I face and work on every day, and exactly the issue I see; hence we are trying to move to Java or another language.
I have no idea where he was going with his road trip to random companies stories and the problems in computing and how that lead to Java. The interviewer cut him off and directed him to talk about pointers in C.
Been programming in C for 3 decades and am happy for it; I manage memory and prevent dumb sh!t with pointers by being a diligent and disciplined programmer. I think that mindset is what's missing today.
That's absolutely true! I want to be a programmer but don't want to touch the hard stuff, memory management. It's like working in a car repair shop but not wanting to get your hands dirty…
the most tolerable c programmer
@@jolterolidude2115 Honestly? I don't blame you one bit. Some days, like yesterday, in fact, dealing with the bits and bytes of data packing and extraction with addresses (sometimes) unknown can veer one toward madness. Fortunately, my wife is a willing sounding board, otherwise this comment would read more like skark who and something balls do I w3ll 0r mmm... wiggle-WIGGLE, I SAID!!! 🤪
@@JoJo2251221 Didn't mean to offend. But the reality is that whatever interpreted language you're using, you're relying on the work of people like me. *Somebody* has to wrangle those pointers. You just hope that they get it right, because, if they don't, you're screwed.
@@JoJo2251221 Oh, I guess I should provide some basis for my original comment. Largely, I'm looking at webdev, more so than just the heavy reliance on interpreted languages. How many frameworks does one need? It's insane that what's good today is considered "crap" tomorrow. Much of the development anxiety that some programmers feel can be attributed to the pressure to keep up with the latest buzzwords... but it's all just a bunch of Band-Aids slapped over a core wound.
I hope Ryan Gosling will come back soon.
What does he mean by freeing memory but continuing to use it? Can someone give me a couple of examples and how it’s applied?
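Roughly: in C you can hand a block of memory back with free() and then keep reading or writing through the old pointer; nothing stops you, and the program may stagger on with corrupted data long after the real mistake. Java never lets you free memory yourself, so the closest everyday analogue is using a resource after it has been closed - and there the failure is immediate and visible, which is the behaviour Gosling is praising. A rough Java-flavoured sketch (the file name is made up and assumed to exist):

import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;

public class UseAfterClose {
    public static void main(String[] args) throws IOException {
        BufferedReader reader = new BufferedReader(new FileReader("data.txt"));
        System.out.println(reader.readLine());
        reader.close();            // roughly analogous to free(): the resource is given back
        reader.readLine();         // "use after close": fails right here with IOException: Stream closed
    }
}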
Gosling: Thanks for giving credit to Simula, real class act from you.
Listened for 8 minutes and he didn't say anything. I give up.
I felt Gosling spent the first 7 minutes describing things that Java never has and never will excel in.
Looking at what Java is today; I can't shake the impression that the goal was (and always has been); lowering the level of difficulty for OOP by abstracting it from the underlying system (including memory management). Essentially reducing software engineering to be purely about logic and not about electronics. Not sure what all that other talk was about.
Speaking of programmers making the same mistakes that have already been solved, it still goes on today. The industry as a whole doesn't do a great job of passing down knowledge to the newer folks.
If you can't handle the power of C++ but want OOP with an old language, go with Java.
And hate c# with a passion
@@gitgudnga Never used it, is it like C++ with a garbage collector?
Or use new languages like Rust or Julia, which offer close to C++ speeds. Oh wait, Julia isn't even object-oriented!
@@wyqtor I will not betray my baby
@@kwirny its like java but more polished
OK, the pointers leave the scene, and the infinite .class files arrive....
Working as an engineer for over 20 years I can not stand arrogant people who think they know better without digging into the details. There is always a good reason things have been implemented in a certain way, the devil is in the detail. We have moved from c++ to Java and I can tell you Java sucks big time when it comes to performance critical tasks. We had to use a lot of JNI tricks to bypass those issues.
I like the code-reusability aspect of Java that comes from its object-oriented nature. Inheritance combined with composition helps you reuse code through extension. For instance, with the Decorator pattern you can extend the functionality of already-written code through the use of inheritance and composition. Imperative or purely functional languages do not give you that advantage. All the cloud applications and big data applications use Java a lot. In addition, memory handling is much safer. Of course, every language has its own utility.
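A compact sketch of the Decorator idea described above (class names invented for illustration): each wrapper adds behaviour around an existing implementation, via composition plus a shared interface, without editing the original class.

interface Notifier {
    void send(String message);
}

class EmailNotifier implements Notifier {
    public void send(String message) { System.out.println("email: " + message); }
}

// Decorator base: holds the wrapped Notifier it delegates to.
abstract class NotifierDecorator implements Notifier {
    protected final Notifier wrapped;
    protected NotifierDecorator(Notifier wrapped) { this.wrapped = wrapped; }
}

class SmsDecorator extends NotifierDecorator {
    SmsDecorator(Notifier wrapped) { super(wrapped); }
    public void send(String message) {
        wrapped.send(message);                    // keep the original behaviour...
        System.out.println("sms: " + message);    // ...and extend it without touching EmailNotifier
    }
}

class Demo {
    public static void main(String[] args) {
        Notifier notifier = new SmsDecorator(new EmailNotifier());
        notifier.send("build finished");
    }
}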
He should have made a safe version of C++, no need to reinvent the wheel
Java is my all time favourite language. Javascript, my all time least favourite language. Java with IntelliJ was pure heaven for me.
For all the hate toward Java's ugly verbosity, it's a remarkable language that was in the right place at the right time and earned every bit of its success.
Programmers place too much value on how fast they can write buggy code. Slow down a bit, be a little more verbose for clarity, and do it right the first time. It saves time in the long run.
And it still has pointers under the hood because it’s a fundamental thing to the hardware to have them.
Btw. C and C++ can be run cross platform with a library like Cosmopolitan..
And oh gee, it still didn't fix the issue of null pointer dereferences. OOP has been a 40 year waste of absolute futility and has irreparably harmed the entire software industry beyond repair.
@@iswm The problem is though not nullptr’s it’s the constraints which aren’t dealt with correctly even today. There isn’t a single language that was able to fix this without throwing away freedom of design or iteration times.
And yes OOP was and is always a mess where people invented things that no one did before like managers ..
I can’t believe he didn’t mention smalltalk.
Surely it's to have platform independent code.
Yet another talk that convincing me to really start learning Rust :)
you will throw that idea when you will look at rust syntax
No more pointer bugs, now a billion dollar bug, the NullPointerException
If you are writing pointer bugs in modern c++ you don't know what you are doing. Together with static analyzers, general programming expertise and smart pointers, pointer bugs should never occur. On top of that, you can write your own allocation tracers to catch memory leaks in production.
I just hate the argument that C, and in particular C++, are these unsafe languages that nobody can work with. It honestly sounds like language propaganda.
Programmers who are scared of memory shouldn't be programmers. Memory management is just as important as the cpu instructions, yet everyone wants to sweep it under the rug and pretend it's some impossibly dangerous undertaking to reason about how your data is stored. So instead they chase pointers around endlessly thrashing the cache and absolutely killing the performance of anything they try to write. Then they have the gall to argue with anyone who actually knows what a cache line is about how burning hundreds of cycles every time they do a load is somehow a good thing.
I feel like the title of this video looks weird from a non-programmers pov
great questions lex