CppCast had Dave Abrahams as a guest after he had worked on creating Swift and moved to Adobe's tech lab, and he talked a bit about that stuff, getting defaults right etc. He pretty much started the conversation about the importance of const things in the C++ world, I suppose. It was pretty fascinating.
In case anyone wants to know the details of Swift's CoW, here's what I know:

Generally speaking, for any value type (any struct or enum), if you make an immutable copy of it (like if you pass it as a param to a function, or do `let arr2 = arr1`), the compiler passes a reference to that instance. However, if you make a mutable copy (`var arr3 = arr1`), the compiler makes a copy of it and sends that. You can neither modify the value of an immutable value-type property nor mutate any of its members.

Swift's collections are value types, so they're passed by value, but they have copy-on-write semantics, and that's handled by the implementation. So if you make an immutable copy of an array, you're sharing the reference to the instance itself. If you make a mutable copy, then a copy is made, which is, in reality, just the reference to the underlying storage. When you actually mutate the values within the array, a copy is made of the underlying storage, and the copy gets the changes.
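The observable behavior of that, as a minimal sketch (plain standard library arrays, nothing custom):

```swift
var a = [1, 2, 3]
var b = a            // no element copy yet: both variables share one buffer
b.append(4)          // first mutation: b copies the buffer, then appends
print(a)             // [1, 2, 3]
print(b)             // [1, 2, 3, 4]
```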
When Prime learns Mojo's language features... Imagine: this same dude learned all of the lessons from building LLVM, Swift, and the TensorFlow XLA "generalization" (which became MLIR, as far as I understand), and then went and built a programming language that mixes Rust, C++, and Python concepts with a pragmatic take, letting the programmer decide how to use the features.
Swift 1.0 came out on September 9, 2014 for macOS, and Swift 2.2 released with Linux support on March 21, 2016. Native Windows support (as opposed to Cygwin or WSL) came along in Swift 5.3 on September 16, 2020. So (unless you're talking about native Windows support), 1 1/2 years from initial release for a language now 9 years old isn't very long.
@@CapnSlipp it's not very well known that it's cross platform. no big (non-mac) projects really use it or anything. also, like it or not, native windows support matters a lot
There are zero good cross-platform languages in software engineering. You are either producing for Windows, Apple, Android, or some other niche system. Apple is never going to be Windows or Android.
@@ivanjermakov A VW Beetle can actually drive around a track, but that does not mean it can compete in Formula One. Just because a language is capable of some type of systems usage does not mean it is suitable. Python is a fantastic language but it rarely gets used to build application software. I cannot see Swift getting used on Windows or Android for development.
A tensor is a multilinear map from the product of the dual of a vector space V with itself p times, and the vector space with itself q times, to the underlying field of V. By the way.
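In symbols, that's the usual definition of a type-(p, q) tensor (sketched here in standard notation, with F the field underlying V):

```latex
T \colon \underbrace{V^{*} \times \cdots \times V^{*}}_{p} \times \underbrace{V \times \cdots \times V}_{q} \longrightarrow F, \quad T \text{ multilinear.}
```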
That might be the case in some physics or other field. In data science, a tensor is just an n-dimensional array, usually n > 2. For n = 1 it's normally called a vector, and for n = 2 it's normally called a matrix.
@@bbajr Well, I don't buy your philosophy. This is the definition of a tensor; someone can think of them as "fancy n-dimensional tables", but then they don't get the whole picture. Of course it's completely fine if a programmer doesn't know the general definition. But in my opinion, if you know it, you can't say they are a different thing when viewed by different people. In fact, why call them tensors when they could be called "n-dimensional arrays"? Because they are an instance of a bigger structure. Anyway, mine was just a "by the way" joke; I don't see how so many of you didn't get it. It's a comment under a Primeagen video, like the "I work at Netflix btw" guy, hahah.
Did Apple finally fix Xcode or something? Because back in 2020, when I was trying to get started with iOS development, working in Xcode didn't feel like easy mode _at all._ The developer experience was closer to playing a Soulslike in Early Access.
I've been working in Xcode for over a decade, so I couldn't comment on whether it's been "fixed". I've adapted to the way it works. Someone who's not using it all the time might feel differently. But Xcode & Swift aren't the same thing. @@oleg4966
@@oleg4966 Mobile dev since 2016, iOS only since 2019 here: Xcode has been nicer since 14.3.1, and I've heard there have been some big improvements in Xcode 15 as well. Although I found Xcode intimidating with sharp edges at first, once you learn how to navigate those (knowing when to clean your project vs. delete derivedData, manually fixing pbxproj merge conflicts, learning to NEVER use playgrounds or Storyboards) it's actually a really nice & thoroughly integrated IDE experience.
You pass by value in the case of objects as well; the value is a pointer, so you copy the pointer and pass a copy of it. But as both the original and the new pointer are pointing to the same memory address, you'll edit the exact same object inside the function, which is in effect a call by reference.
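In Swift terms (the language of the video; `Home` is a made-up type for illustration), that behavior looks like this:

```swift
final class Home {            // class = reference type
    var rooms = 3
}

func renovate(_ home: Home) {
    home.rooms = 5            // mutates the single shared instance
}

let myHome = Home()
renovate(myHome)              // the reference is copied, the object is not
print(myHome.rooms)           // 5: the caller sees the mutation
```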
I love Swift too. While Apple actively encourages developers to use Swift for everything, Apple's main focus is adding new features that will benefit Apple platforms. They are always adding new features every year, but I would love for them to just take a break on adding new stuff and focus on bringing Swift on Linux and Windows up to parity with Mac. They are obviously pushing the C++ interop thing hard, but they do need to improve the tooling cross-platform, especially on Linux.
@@mistymu8154 totally. Although they needed many iterations to get the language to mature and to iron out the kinks. Now is a great time to make it cross platform properly.
@@Art-is-craft Swift is open source. Also, Apple is a consumer hardware company; they compete with neither commercial software companies nor open source operating system collectives. They make their own devices and they make the things that run on those devices. That's it. They don't have to compete with anyone who makes software, and to think they do is such a fundamental misunderstanding of technology that I have to wonder what you're doing here at all.
4:30 A friend and I argued about argmax. I kept telling him it's the mode of a continuous distribution; he kept saying it's the index of the highest number in an array. I was confused.
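Both senses are right in their own domains: over an array, argmax is the index of the largest element; over a probability density, it's the point where the density peaks, i.e. the mode. The array sense, as a quick Swift sketch:

```swift
let xs = [3.0, 9.0, 4.0]

// The index whose element is largest (nil only for an empty array).
let argmax = xs.indices.max { xs[$0] < xs[$1] }
print(argmax as Any)   // Optional(1)
```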
I'm calling BS. I've met plenty of CS grads that don't understand the difference. I'm self-taught and I learned all of this stuff on my own back in the 90's. It's not about a BS degree, it's about the person. Some of the best programmers I know do not have a CS degree and didn't need spoon-feeding. Swift is a nice language. With the most recent version, it has nice support for serializing/deserializing JSON data. Back in Swift 1-2, only having dictionary support for JSON data kinda sucked. Swift definitely fixes a lot of ugly bits of Objective-C.
All CS degrees nowadays cover both low-level and high-level languages and use C as a way to teach the concept of reference vs. value. I've seen the programs of 4 different CS degrees here in Italy, from 4 different unis, and all of them teach that.
Python pass by reference and mutability gave me trust issues in early University, particularly when copy and deepcopy often wouldn’t actually allocate a new list/array 😂 I still haven’t encountered a better solution for force allocating a new collection than doing copy_of_foo = foo * 1
So in Swift a data type is either pass-by-value or pass-by-reference, and never both. This is quite silly, as it means I have to encode the pass semantics of a datatype in the datatype itself. I much prefer encoding the lifetimes...
Totally doesn't matter, but I really like being able to write a list of lets or vars in Swift without re-writing the let/var. It's silly, but I miss it when I switch languages. I think the only other compiled language I could do this in was Nim?
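For anyone who hasn't seen it, it looks like this (a tiny sketch):

```swift
let width = 320, height = 240, depth = 16   // one `let`, three constants
var x = 0.0, y = 0.0, z = 0.0               // same trick with `var`
```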
Haskell slow? When did that happen? I thought it was marginally slower than C. It's a compiled language, not an interpreted one. I think in tests it is slightly slower than C and like 30 to 40 times faster than the SLOW languages.
"I know all those words, but I don't know what Swift is trying to do" - Ah, the eternal problem of programming, we have so much jargon but it's so context dependent, that ofthen it becomes useless.
I guess this is why I never commented any (JavaScript) code: I had the nagging feeling that there was something like ownership out there which was so important, and so impossible to infer from code in a typical JavaScript project, that as somebody looking at a codebase for the first time you'd want most of the comments to be about that. Just kidding; tomorrow "code that has to be fast because while it runs it has locked this or that resource" will be the next big thing to comment, and I'll throw out all my ownership comments. Better to stick to not commenting at all and have Copilot/ChatGPT explain it (?) Btw, I kinda like that array.sort seems to hint that sort will change the array, whereas array2 = mysorter(array) should probably not change the array, or otherwise it deserves a comment.
"Pass by copy" that ISN'T a synonym for "pass by value" is quite possibly the most useless CS term I have ever heard. A CPU literally ONLY copies, the "mov" instruction *_is_* a copy.
I thought in this day and age CS basics would, to a certain degree, still be common knowledge even if you didn't study at traditional post-secondary institutions. I wonder if the bootcamps are weeding out a lot of the important stuff to keep the attention span of the new learners. It then comes back later like a slapshot to the shins.
You are assuming that it is taught in traditional post-secondary institutions. It is not. Not reliably, anyway, and certainly not in the 100- or 200-level classes. I was in my discrete structures class (probably the most useful class, if taught well) with some comp-sci students. They were in their 4th semester of comp-sci and struggling to understand why a C++ function took out-parameters as pointers. I had noticed the students finishing 2 semesters of comp-sci didn't have a clue how to actually program, so I'd actually signed up for CS101 at the same time as discrete structures. At this point, if you are going to university to learn to program, my general recommendation is to get a math degree. You can learn everything the CS half would teach you in a long weekend once you know the math, but the inverse is not true.
Legit, my first programming class in my freshman year was intro to programming with C, and one of the first things we did was create functions that used pass by value and pass by reference, so there are definitely institutions that teach this stuff. Perhaps they could be doing more, but this is extremely basic and common knowledge.
This is great and all, but are we not going to mention that PHP is great? Like guys, you can write PHP and it will instantly work on your web server!! OH SHUCKS. It gets me excited just talking about it
I will admit to not understanding Swift after using it for a week, but I still hate it. I hate Xcode; it's beyond slow and has no features. Java IDEs, either Eclipse or IntelliJ, destroy Xcode. Wrapping and unwrapping as the core of everything? Not knowing the type of a variable as a feature? Not passing by reference or value? The syntax is beyond ugly, but I am still stuck on "let's first unwrap this and null-check it all at the same time". Error handling? You think ? ? ? ? ? is error handling? Let vs. var, why? Class vs. struct, why? Renaming everything: it's not an interface, it's a protocol? It's not an attribute, it's a property wrapper? What is this obsession with WRAPPING things? Generally, WRAPPING objects is an ANTI-PATTERN, but in Swift it's a core feature?? The F? The entire thing feels juvenile and wrong. Java sucks because you have to type "public static void main String args" ONE TIME for every application???? ONE TIME?? That's the problem with Java? Really? Then proceed to create the worst syntax of all time? "var body: some View { Text("Hello, SwiftUI!") }" Are you kidding me? "some" View? SOME as a type? GTFOH Swift. Apple, just use Java for Christ's sake, or Dart, or JS, but not C#, that shit is for bitches.
As a Swift and Python programmer, I slowly tend to use conventions that are more common in Swift, one of them being that I try to pass value types to functions (even using `copy.deepcopy`). I imagine this would help a lot if I needed to work with Python without the GIL.
There's been Swift for TensorFlow, where they implemented differentiable programming right into Swift. The team later split off into two companies: PassiveLogic and Modular (where Chris Lattner, the guy in the video, currently is). Watch PassiveLogic's launch event, especially the part about Differentiable Swift: ruclips.net/video/xr8sV3GdhVk/видео.html
More Swift videos are needed! To answer your question in the video: yes, you can write Swift anywhere using whatever tool you like. Contrary to popular belief, it's not an Apple-only language, and you can use it anywhere from Windows to Linux to the web.
You can write in Vim and also use the LSP for some really nice language features. (:
Isn't the available standard library much smaller on other platforms? I think I heard that this is the case.
@@zadintuvas1 Yeah. Tool support is low.
I learnt Swift because I am an Apple fanboy, but I still find it hard to find a use case outside of SwiftUI. I am pushing myself to use it after college to get a job. I really enjoy the idea of server-side Swift!
And I feel that learning Java really helps me understand Swift better
@@zadintuvas1 You don't get the portions of the "standard library" that are part of the Foundation module: the stuff that's Swift wrappers for Objective-C stuff. But IMHO that's a good thing, since those Obj-C-bridged types & functions can cause unexpected performance, memory-semantics, & multithreading issues (everything but primitive value types is an object in Obj-C, like Ruby and Smalltalk). I *wish* for iOS development we could set a flag that disallows pulling in all the old Foundation stuff by default, because of how much code reviewing and fixing I have to deal with at my job when code is written by devs who used to do Obj-C and don't know the difference between Obj-C-bridged and pure-Swift.
@@CapnSlipp I believe there's also a roadmap for a new Foundation for Swift, and I think that one isn't reliant on Objective-C.
8:29 Swift collections are pass-by-copy, _but also_ Swift has good copy-on-write analysis by the compiler. So a pass-by-copy when the caller never uses that value again (only the callee uses it) is often a fast pass of ownership without the programmer having to annotate or do anything extra.
Bingo!
So Swift CoW and ARC work like Delphi records (structs), dynamic arrays, and strings?
@@TremereTT i'm not terribly familiar with Delphi or Object Pascal, but yes, looks similar in some ways (it's notable that Object Pascal originated at Apple with Larry Tesler leading the team). So it seems some of the same reference-counting ideas also made their way into Objective-C, but around 2009 Apple went hard with ARC in Objective-C, updating and converting the language to use ARC exclusively, eventually all-but-deprecating Manual Retain Release. Then in 2014 Swift 1.0 was released and inherited Obj-C's ARC memory model for all object types (and COW for value types).
@@CapnSlipp Well, at least in Free Pascal and in Delphi it's COW + ARC, yet you can also ALLOC all those data structures yourself; they will then need to be freed by you, and they will no longer show COW behaviour. This is useful if you want to use records and record references to access protocol headers or file headers in byte streams... just like in C.
In Swift all collections are basically classes that are wrapped in a struct! When you create a copy of a collection, Swift just gives you a reference back to the original object. As long as you don't try to mutate that object, it will remain the same. The second you mutate your copy, it will create a fresh, stand-alone copy of your collection and let you mutate that one instead.
Swift collection types (array, dictionary, set, string, etc) are copy on write, but this is not built into the language itself; they are implemented in the standard library that way. In essence, they are reference types under the hood, but before any mutation, the reference count is checked, and if there is more than one reference, it makes a new heap allocated copy before performing the mutation.
Does that mean that if I were to create my own type, let's say a Player model, it won't be copy on write by default?
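As far as I know, right: a plain struct is simply copied on assignment, and CoW only happens if you build it in. The standard pattern for a custom type (a minimal sketch; `Player` and its `Storage` class are made up for illustration) uses `isKnownUniquelyReferenced`:

```swift
final class Storage {                    // shared heap buffer
    var name: String
    init(name: String) { self.name = name }
}

struct Player {                          // value type with CoW semantics
    private var storage: Storage
    init(name: String) { storage = Storage(name: name) }

    var name: String {
        get { storage.name }
        set {
            // Copy the buffer only if another Player also holds it.
            if !isKnownUniquelyReferenced(&storage) {
                storage = Storage(name: storage.name)
            }
            storage.name = newValue
        }
    }
}
```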
A typeclass in Haskell is not the same as a class. It's closer to Rust traits; a family of types for which certain functions are defined. The closest thing to a class in Haskell would be a Record (essentially a struct with godawful syntax).
Almost got it 😄 Traits are type classes, but somewhat limited. IMO it goes something like this: Rust's trait < Type class < C++ concept
@@m4tt_314 i haven't really played with c++ concepts. i used it once when making my own shitty std::map, but that was just using a built in partial ordering concept i think. i think i'll look into them now.
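Since the video is about Swift: protocols with constrained extensions play roughly the same typeclass/trait role there. A minimal sketch (`Semigroup` and `combineAll` are made-up names):

```swift
// A family of types for which `combine` is defined, per type.
protocol Semigroup {
    static func combine(_ lhs: Self, _ rhs: Self) -> Self
}

extension Int: Semigroup {
    static func combine(_ lhs: Int, _ rhs: Int) -> Int { lhs + rhs }
}

extension String: Semigroup {
    static func combine(_ lhs: String, _ rhs: String) -> String { lhs + rhs }
}

// Generic code constrained by the "typeclass":
func combineAll<S: Semigroup>(_ values: [S], start: S) -> S {
    values.reduce(start, S.combine)
}

print(combineAll([1, 2, 3], start: 0))     // 6
print(combineAll(["a", "b"], start: ""))   // "ab"
```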
Swift is the most ergonomic safe language in existance
Swift is for gays, apple lovers
10:53 _"I'm curious how you do like shared references, ones where 2 functions can mutate a value without copy on write."_ A couple of options:
1. Pass by reference (&-prefix) to a function with an inout arg (like C# and many other languages).
2. Much more commonly, wrap your value type instance in an object type (a class, as opposed to a value primitive type or a struct or enum, like C++ and many other languages).
3. Use closures to get/set the original value. A bit esoteric, but sometimes this is the mechanism under custom type-erasing types or custom property wrappers attributes (again, drawing some similarities to C#).
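A minimal sketch of options 1 and 2 (`Shared` is a made-up helper type):

```swift
// Option 1: an inout parameter mutates the caller's variable in place.
func addInterest(to balance: inout Double, rate: Double) {
    balance *= 1 + rate
}

var balance = 100.0
addInterest(to: &balance, rate: 0.05)   // caller opts in with &
print(balance)                          // 105.0

// Option 2: wrap the value in a class; copies of the reference all
// point at the same instance, so there is no copy-on-write.
final class Shared<Value> {
    var value: Value
    init(_ value: Value) { self.value = value }
}

let counter = Shared(0)
let alias = counter       // same instance, not a copy
alias.value += 1
print(counter.value)      // 1
```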
8:29 Swift makes a distinction between value types (struct) and reference types (class). Basically you can do both heap and that other thing. Borrowing is arriving in recent releases of the language.
One thing I like about Swift is how the language can constantly offer the features you need without flooding you with all of them on day 3 of learning. On the other hand, if you suddenly need some advanced feature, it is difficult to know what the prerequisites are, because it just suddenly feels like a foreign language.
Nailed it
i remember starting out with playgrounds many years ago, when I had first started coding... I thought I got it! and then recently I started using SwiftUI 😵💫. Suffice to say i did not get it
It is a productive language and has been built from the ground up for that purpose.
so true
ResultBuilder DSLs and macros lmao
skill issue
When you start searching for Swift, both the content creators and the tooling seem to be focused on publishing to the App Store. There is movement on the server side, but it doesn't bubble up to where people can feel it.
You will find the most productive software engineering languages are platform-specific, be it C#, Kotlin, or Swift. Try using a non-platform-specific language for software development and you will find it is not the best choice for software engineering.
3:00, in low-level code you can choose any way you want to pass. The only barrier is that, when passing by copy and wanting the edits to be valid for the original, a 2nd step must be done later: copying the copy back to the original.
12:05, just like C++: full control of who changes who and who can be changed.
To complicate things further, Python does neither pass by value nor pass by reference, but pass by assignment. That entails behavior like this:
a called function can mutate mutable objects (like lists) of the caller by using object methods (like .append) or item assignment, but not by making a new assignment to the parameter itself:
```python
def alter_list(mylist: list) -> None:
    print(f"got list: {mylist}")
    mylist.append("d")            # mutates the caller's list
    mylist[0] = 42                # mutates the caller's list
    mylist = ["x", "y", "z"]      # rebinds only the local name
    print(f"mutated list: {mylist}")

mylist = ["a", "b", "c"]
print(f"list before passing: {mylist}")
alter_list(mylist)
print(f"list after passing: {mylist}")

'''output:
list before passing: ['a', 'b', 'c']
got list: ['a', 'b', 'c']
mutated list: ['x', 'y', 'z']
list after passing: [42, 'b', 'c', 'd']
'''
```
but immutable objects like strings will not change for the caller:
```python
def alter_string(string: str) -> None:
    print(f"got string: {string}")
    string += " was concatenated"   # strings are immutable: this rebinds the local name
    print(f"mutated string: {string}")

string = "this string"
print(f"string before passing: {string}")
alter_string(string)
print(f"string after passing: {string}")

'''output:
string before passing: this string
got string: this string
mutated string: this string was concatenated
string after passing: this string
'''
```
In a deep learning context, tensors are n-dimensional arrays, BUT what separates them from, let's say, vanilla NumPy arrays (and what the degen Twitch chat did not mention), if we are in Python land, is that tensors have the additional machinery to be part of a computational graph, with gradient computations in the graph using the automatic differentiation that all deep learning libraries usually implement. For example, in PyTorch all tensors have a requires_grad attribute to indicate whether gradients should be calculated for said tensor.
Not necessarily true; e.g. the xtensor library in C++ is essentially trying to emulate numpy. AFAIK it has no autograd.
The _real_ difference between n-dimensional arrays and tensors is analogous to the difference between an array with 2 elements and a vector in the 2d plane. An array with 2 elements is just a pair of arbitrary numbers, but a vector must transform in a very specific way when you do coordinate transformations. For example, if you rotate the coordinate system counter-clockwise by 45 degrees, the vector at (sqrt(2), sqrt(2)) will be transformed into (2, 0).
Neural net practitioners decided to call arbitrary n-dimensional arrays 'tensors' because they don't understand the above. I don't really care that much, call it what you like, but if you look for an intentional difference between 'tensor' and 'nd-array' in the machine learning world you're likely to end up confused.
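Spelling the rotation example out (the standard 2D component-transformation formula, not something from the comment itself):

```latex
\begin{pmatrix} v'_x \\ v'_y \end{pmatrix} =
\begin{pmatrix} \cos\theta & \sin\theta \\ -\sin\theta & \cos\theta \end{pmatrix}
\begin{pmatrix} v_x \\ v_y \end{pmatrix},
\qquad \theta = 45^\circ,\ (v_x, v_y) = (\sqrt{2}, \sqrt{2}) \;\Rightarrow\; (v'_x, v'_y) = (2, 0).
```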
@@isodoubIet Ok, sorry to you good sir for being too absolute in my wordings. I tried to give a practical explanation that relates specifically to the topic of this video and deep learning frameworks in general and not how they are viewed in differential geometry etc. since the connection is pretty loose I would say. The _real_ difference between n-dimensional arrays and tensors is obviously a skill issue.
@@burarum1 That's what I'm trying to say, there is no difference in the machine learning world. People just give these names and there isn't much rhyme or reason to it.
Primeagen: ... right now Swift has everything that I want ...
me: hmmm, where did I hear this before?
Granted I last took intro to CS 19 years ago... but do JS devs really not know when a value vs a reference is passed? I have a hard time believing it's even possible to program without keeping track of your function calls.
Honest question: does the language even allow values to be passed? Can the value be a pointer?
For your question: no, you cannot explicitly pass by value; that can be done by optimization, but not by you. As for pointers, they are implicit for any non-primitive data, not really something you control. Maybe if you count the typed arrays and buffers JS offers?
Javascript is pass-by-value. Objects are weird in that the value passed is a reference: you can reassign an object argument within a function and modify that variable without altering the original object. But you can also modify properties on the argument object directly, and that will affect the original one.
@@fenndev that’s not how pass by value works lmao. Everything is passed by reference. The variable re-assignment you are talking about is reassigning the variable reference
Swift has UTF-8 checks for identical characters, I like that
swift imo has god tier string/character handling
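If the point is that Swift compares strings by canonical equivalence rather than by raw bytes (my reading of the comment above), here's a small sketch of it:

```swift
let precomposed = "\u{E9}"      // "é" as a single scalar
let decomposed  = "e\u{301}"    // "e" + combining acute accent

print(precomposed == decomposed)  // true: canonical equivalence, not bytes
print(precomposed.utf8.count)     // 2
print(decomposed.utf8.count)      // 3
print(precomposed.count)          // 1 (a Character is a grapheme cluster)
print(decomposed.count)           // 1
```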
Technically (and this is where the confusion comes from), "pass by copy" is most often called pass by value; JS, for example, is a pass-by-value language. The difference is in what that value is. In JS, that value is what the language treats its data types as: primitive types are copied when passed by value, while non-primitive types are treated as pointers that are automatically dereferenced when used, i.e. you don't need to write * before an object when accessing values inside of it, like you do in C++. So what gets copied is that pointer, and both outside the function and inside of it you have two different variables that point to the same object.
Languages like C# are also like that, but C# also provides modifiers that can change how things are passed into functions, like the ref keyword, which ends up creating a ref (an explicit pointer) to the variable passed in. That variable can be a primitive type or not, and in the case of a non-primitive type you get a pointer to technically another pointer, instead of a pointer to the same object, meaning it's something like temporarily moving the variable into the function while it's executing.
It’s always real fun when data scientists create cool bugs when dealing with pandas dataframes, passing them around like everything is a copy. I was one of those data scientists once. Luckily I am naturally curious and also once dabbled a bit in C after which everything got a bit easier. But the minimum is to once read a blog entry on shallow and deep copies in python.
I recently wrote some numerical code and I tried to understand why it didn't converge. So I added some statements to compute some helpful intermediate values and print them. Doing so crashed the code.
That's when I remembered Python is a garbage (collected) language and that basically every variable is a reference passed by value. Yeah, wasting a few hours on that wasn't fun.
Knowing C++ definitely helped debug that issue, but it was still a pain. I'll never understand how the people who decided that pointers were too complicated also decided that the best way to get rid of them was to make nearly all variables pointers in disguise (with even the ability to be set to a null pointer).
3:40 Wait, what? As a computer scientist I'm confused by that statement. Why is CS dying? In Italy, more and more people start a CS degree each year; my courses didn't even have enough space for all the new students, and they are hired so much faster than those who don't have a CS degree. I strongly disagree with that, and if someone is a programmer and doesn't know what passing by value vs. passing by reference means, he's making the wrong life choice.
And btw, yes, in CS courses they teach what passing by reference and by value means, because we study both the LOW level and the HIGH level of programming languages, the full stack, and we can all say whether a language passes by reference or by value and in which cases it does that.
Python also uses a thing called caching; Java uses that too. Basically, whenever you say variable = 5, it keeps a space in memory containing the number 5; whenever a new variable variable2 = 5 is declared, Python reuses the previously allocated space in memory to avoid instantiating a new one, and keeps track of which variables reference this space and how many do, yeah, just like Rust pointers. Java does that for numbers lower than 256 (don't remember the exact number).
Whenever we say that Java always passes by value, it's true, but you basically "pass by value" the address that's inside the reference; people are using pointers without knowing it. I'll try to explain better: let's say I have a class Home, and an object Home home = new Home();. Whenever I pass the variable (reference) home to a function, I pass the address of home as the value. If people started studying C and what a pointer is, they would actually understand all of that pretty easily; a CS student has a hard time understanding that concept. Simply put, there are cases in which you can do both things, value or reference, and cases in which the language does all of that under the hood for you. The language that allows you to fully understand the whole passing-by-reference vs. passing-by-value thing, in my opinion, is C.
Also, we need to explain something about Python: everything in Python is an object. That does not happen with Java; primitives in Java are not objects, and primitives in Java are passed by value in the pure sense of the term, while references to objects in Java are passed by value, not the value of the object itself. So Java also has the real pass-by-value thing, with primitives, and I still think that a programmer *should* know that.
@@luca4479 In Italy we have 2 different degrees, Computer Science and Computer Engineering. The first one teaches code, with just ONE exam about hardware, and 5-6 exams about high-level programming (ALL of Java, the whole thing) and low-level programming (UNIX: POSIX vs. System V, C, etc.). The second one, engineering, is basically the opposite: little coding, a lot of engineering. Saying you can understand how it works just because you know how it works behind the scenes doesn't make sense to me, because you can understand how to use the language without having to know how it works behind the scenes; I'm still confused. An engineer would know so much more about the low levels (plural) of the hardware. I know what I have to know to understand how the language works, but I don't think either CS students or programmers should be confused by the whole passing-by-reference vs. value thing.
How long should I wait? And why do you feel the need to tell people to wait while you work something out? Main character syndrome.
@@s3rit661 I think his more general, perhaps unstated, point is that CS is no longer THE way to get into coding as a career (at least not in the US). So a lot of people are getting into it without ever having gotten that fundamental knowledge about low-level details. So they're like "pointer? huh?" and their understanding of reference vs. value semantics is largely limited to "oh, I must remember that if I update an element in this array, it will change it everywhere." They don't necessarily know why that behavior happens, or any sort of general rule about it, but maybe just think it's a special case.
@@kb_dev What does that even mean? When did I ask you to wait for me to work on something? Saying "Wait, what" is a manner of speaking; it's like saying "wait a minute", like saying "I don't agree with that". I'm not a native English speaker, so I always thought that was the meaning of it, at least from where I heard those words.
I've interacted with lots of CS students both domestic (US) and international in my current MS CS program. There are lots who had departments which taught them everything in a single language and they never learned anything from outside the paradigms of that language. Often Java or Python depending on the department. So they're generally very good at that one lang they have experience with and OK with theory, but have absolutely no experience translating their theory understanding to a different language or paradigm.
IME these people often come out of large schools whose faculty aren't teaching-focused and want to enforce a very specific knowledge-path through their department.
I can definitely imagine many of this type of student having no idea how anything low-level works beyond possibly a vague definition. You then also have people developing software out of bootcamps who likewise only know how to do one specific thing; the "React Andy"s that Prime often mentions.
CS as a field definitely isn't dying, but there's often a significant knowledge gap between inexperienced and more experienced developers and especially between theorists and software engineers, with many institutions confused about where they want to fall along those axes.
Java's new Vector API will now create SIMD (AVX2, AVX-512) instructions under the covers (if available), with huge performance gains for graphics and large-scale computations. Only a few compilers have support for this kind of thing (GCC and Visual Studio do, though).
Java will reuse strings via the String Pool. The JIT will only reference them once.
Groan (new Java):
```java
void main() {
    System.out.println("Hello World");
}
```
These people need to keep up!
Go is pass by value, although the "value" when you are handling a slice contains a pointer to the underlying array, so you pass a copy of that pointer when you pass a slice in Go.
Swift has the nicest closure syntax of any language I've used. Especially how it gives you implicitly defined parameters like $0, and $1, etc. Makes it really compact to just map to some property of the objects in the list.
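For example (`User` is a made-up type; the key-path form needs Swift 5.2+):

```swift
struct User {
    let name: String
    let age: Int
}

let users = [User(name: "Ana", age: 34), User(name: "Bo", age: 27)]

// $0, $1, ... are the implicit closure parameters:
let names = users.map { $0.name }               // ["Ana", "Bo"]
let byAge = users.sorted { $0.age < $1.age }    // Bo first, then Ana

// Key paths shorten the property-mapping case even further:
let names2 = users.map(\.name)
```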
Unfortunately, it also has the worst syntax for passing a closure to a function.
@@isodoubIet why tho?
You either assign it to a variable and pass it like a regular argument, or you can use a shorthand syntax, which is far better than in any other language.
@@mamai_eth The shorthand syntax is precisely what I'm complaining about. It's a function, not a command, let's not mix the two.
@@isodoubIet Swift has literally built a UI framework and a regex framework, and I have used a parser framework and HTML/template frameworks based on function builders, which are based on shorthand syntax.
Like, if you don't wanna use it, nobody forces you to.
Btw, what the heck is a command in programming? Never heard of such a concept.
Is there a language which has commands?
@@mamai_eth That makes zero sense. A shortened syntax is not responsible (and not even important) for any of these things.
"Like if you don’t wanna use it, nobody forces you to."
Exactly, I will never use swift partly for this reason.
If you meant "don't use the construct" instead of "don't use the language altogether", you should know better.
"Btw what a heck is a command in programming? Never heard such a concept.
Is there a language which has commands?"
It's the same thing as a statement. Don't be silly.
Swift is nice, the unfortunate part is all the legacy Objective-C code you need to interface with when developing for iOS. At least that's how it was when I tried it years ago.
Still happens, but not as much anymore. A lot of the APIs have been rewritten in Swift (Foundation, networking, parts of UIKit, etc.) while others are still Objective-C (Cocoa, CoreGraphics, parts of AppKit, etc.). On top of that, Apple also has some open-source projects on GitHub that provide further rich Swift APIs.
I didn't realize that the danger of passing an object as an argument is that you're actually passing the reference value (address/pointer), so any changes mutate the root object. I only learned this 2 weeks ago, when my college covered it in detail during a Java lecture.
Shamefully enough I was working as a JS dev full time and I didn't know that until now
It's not even your fault; it's the fault of the tutorials that didn't teach it, or maybe yours for picking the wrong one, but then again, how could you have known which one was the right one?
It's called a shallow copy (references still shared) vs. a deep copy (an actual clone).
In JS you mostly get shallow copies, unless you convert the object to JSON and then parse it into another var, or use structuredClone().
@@s3rit661 I did a bootcamp and they never went deep on that topic.
People say colleges are useless, but I'm relearning lots of fundamental knowledge in college, and it's just 3 months in! The knowledge seems useless only until you get beyond the surface.
Yeah that's one of the pitfalls of starting with a high-level language. They all try to shield you from having to think about it, but you always end up having to think about it.
I love how prime discusses with his chat what Swift does even though no one there knows what is going on, and to finish it the guy in the video doesn’t do the greatest job at explaining what *type semantics* is even supposed to mean
The guy in the video is the inventor of the Swift language, the LLVM compiler, and half of Rust, Chris Lattner.
All the people saying that it's just a copy of Rust... yeah, except Swift is older than Rust... so I guess we know which is the copycat.
Need more videos like this of prime reacting to Lex interviews 🔥🔥🙌🙌
16:22 This is in Swift: extending Int or Sequence does what you're asking for.
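A minimal sketch of that, assuming made-up member names (squared, total):

extension Int {
    var squared: Int { self * self }
}

extension Sequence where Element: Numeric {
    func total() -> Element { reduce(0, +) }
}

print(7.squared)          // 49
print([1, 2, 3].total())  // 6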
Here's a very basic, oversimplified explanation of what a tensor is, for programmers, without all the "formal mathy" stuff:
You can think of a tensor as a multi-dimensional array, and the "Degree" of the tensor is the number of indices the array has, NOT the number of items in the array. Here are a few examples:
- Tensor with a Degree of 0, is just a single number (no indices).
- Tensor with a Degree of 1 is a basic array, since it has 1 index; notice that this says nothing about how many items are in the array, only that it's a 1-dimensional array.
- Tensor with a Degree of 2 is an array with two indices, which is basically a matrix; again, it says nothing about how many rows or columns, just that it has 2 indices.
- Tensor with a Degree of 3, is a 3d array, so 3 indices.
And so on....
And there are rules on how to perform mathematical operations between tensors with the same/different degrees.
So it allows you for example to "multiply" a 3d array with a 2d one, or a 1d with 2d, and so on, in a way that makes mathematical sense and is consistent.
Again, there is A LOT more to Tensors, and this is super simplifying the topic, but this is pretty much all you need to know for computer science.
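If it helps, here's the same idea written as nested Swift arrays (purely illustrative; real tensor libraries use flat storage under the hood):

let degree0: Double = 3.14                        // a single number, no indices
let degree1: [Double] = [1, 2, 3]                 // one index: t[i]
let degree2: [[Double]] = [[1, 2], [3, 4]]        // two indices: t[i][j]
let degree3: [[[Double]]] = [[[1, 2], [3, 4]], [[5, 6], [7, 8]]]  // three indices: t[i][j][k]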
P.S. Tensors were mostly just a "pure math" thing until Einstein used tensor analysis for EVERYTHING in General Relativity, which popularized them and made them a vital tool for every physicist since.
CppCast had Dave Abrahams as a guest after he had worked on creating Swift, when he moved to Adobe's tech lab, and he talked a bit about that stuff: getting defaults right, etc. He pretty much started the conversation about the importance of const things in the C++ world, I suppose. It was pretty fascinating.
Would love to see more Swift content. It’s one of my favorite languages ❤
In case anyone wants to know the details of Swift's CoW, here's what I know:
Generally speaking, for any value type (any struct or enum), if you make an immutable copy of it (like passing it as a param to a function, or doing `let arr2 = arr1`), the compiler can just pass a reference to that instance. However, if you make a mutable copy (`var arr3 = arr1`), the compiler makes a copy of it and sends that. You can neither reassign an immutable value-type property nor mutate any of its members.
Swift's collections are value types, so they're passed by value, but they have copy-on-write semantics, and that's handled by the implementation.
So, if you make an immutable copy of an array, you're sharing the reference to the instance itself. If you make a mutable copy, a copy is made, which in reality is just the reference to the underlying storage. When you actually mutate the values within the array, a copy of the underlying storage is made, and the changes go to that copy.
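A sketch of how the same trick can be reproduced in user code (CoWBox and Ref are made-up names; isKnownUniquelyReferenced is the real standard-library hook the check is built on):

final class Ref<T> {
    var value: T
    init(_ value: T) { self.value = value }
}

struct CoWBox<T> {
    private var ref: Ref<T>
    init(_ value: T) { ref = Ref(value) }
    var value: T {
        get { ref.value }
        set {
            // If the storage is shared, clone it before mutating.
            if !isKnownUniquelyReferenced(&ref) {
                ref = Ref(ref.value)
            }
            ref.value = newValue
        }
    }
}

And the user-visible effect with a plain Array:

let original = [1, 2, 3]
var copy = original   // no new buffer yet; storage is shared
copy.append(4)        // first mutation copies the storage
print(original)       // [1, 2, 3]
print(copy)           // [1, 2, 3, 4]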
3:57 Yeah, to be fair, I have a bachelor's in software engineering and we didn't cover "under the hood" stuff.
When Prime learns Mojo's language features... Imagine: this same dude learned all of the lessons from building LLVM, Swift, and TensorFlow XLA's "generalization" (which became MLIR, as far as I understand), and then went and built a programming language that mixes Rust, C++, and Python concepts with a pragmatic take, letting the programmer decide how to use the features.
Eh, mojo doesn't look that impressive
@@arjix8738
How would you know at this stage?
@@Art-is-craft it has been as disappointing as bun for me
Both incomplete and a pile of promises
Swift distinguishes between `struct`, which is pass-by-value, and `class`, which is pass-by-reference.
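A quick sketch of the difference, with made-up types:

struct Point { var x = 0 }         // value type: assignment copies
final class Counter { var n = 0 }  // reference type: assignment shares

let a = Point()
var b = a
b.x = 99               // a.x is still 0

let c = Counter()
let d = c
d.n = 99               // c.n is now 99 too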
I think Swift would've been a lot more popular if it had not been limited to macOS for so long.
Swift 1.0 came out on September 9, 2014 for macOS, and Swift 2.2 released with Linux support on March 21, 2016. Native Windows support (as opposed to Cygwin or WSL) came along in Swift 5.3 on September 16, 2020.
So (unless you're talking about native Windows support), 1 1/2 years from initial release for a language now 9 years old isn't very long.
@@CapnSlipp it's not very well known that it's cross platform. no big (non-mac) projects really use it or anything. also, like it or not, native windows support matters a lot
There are zero good cross platform languages in software engineering. You are either producing for windows, Apple, android or some other niche system. Apple is never going to be Windows or Android.
@@Art-is-craft what do you mean? Every major language except low-level languages and Swift are cross platform.
@@ivanjermakov
A VW Beetle can actually drive around a track, but that does not mean it can compete in Formula One. Just because a language is capable of some type of systems usage does not mean it is suitable. Python is a fantastic language, but it rarely gets used to build application software. I cannot see Swift getting used on Windows or Android for development.
A tensor of type (p, q) is a multilinear map from the product of the dual of a vector space V with itself p times, and V itself q times, to the underlying field of V. By the way
what
@@RawFish2DChannel Just some guy that feels the need to show everybody how smart he is. Don't worry about it.
@@kb_dev insecure people always think others are just showing off their intelligence
That might be the case in physics or some other field. In data science, a tensor is just an n-dimensional array, usually with n > 2. For n = 1 it's normally called a vector, and for n = 2 it's normally called a matrix.
Well, I don't buy your philosophy. This is the definition of a tensor; someone can think of them as "fancy n-dimensional tables", but then they don't get the whole picture.
Of course it's completely fine if a programmer doesn't know the general definition. But in my opinion, if you know it, you can't say they're a different thing just because different people view them differently.
In fact, why call them tensors when they could be called "n-dimensional arrays"? Because they are an instance of a bigger structure.
Anyway, mine was just a "by the way" joke; I don't see how so many of you didn't get it. It's a comment under a Primeagen video, like the "I work at Netflix btw" guy, hahah @@bbajr
Swift is easy mode programming, and that’s not a bad thing.
Couldn't be more wrong on that one.
Did Apple finally fix Xcode or something?
Because back in 2020, when I was trying to get started with iOS development, working in Xcode didn't feel like easy mode _at all._
The developer experience was closer to playing a Soulslike in Early Access.
I've been working in Xcode for over a decade, so I couldn't say whether it's been "fixed"; I've adapted to the way it works. Someone who's not using it all the time might feel differently.
But Xcode & Swift aren't the same thing. @@oleg4966
@@oleg4966 Mobile dev since 2016, iOS only since 2019 here: Xcode has been nicer since 14.3.1, I’ve heard there have been some big improvements in Xcode 15 as well.
Although I found Xcode to be intimidating with sharp edges at first, once you learn how to navigate those (knowing when to clean your project vs delete derivedData, manually fixing pbxproj merge conflicts, learning to NEVER use playgrounds or Storyboards) it’s actually a really nice & thoroughly integrated IDE experience.
@@oleg4966 You don't need Xcode for Swift. It's cross platform and can be written in anything with an LSP.
You pass by value in the case of objects as well: the value is a pointer, so you copy the pointer and pass that copy. But since both the original and the new pointer point to the same memory address, you'll edit the exact same object inside the function, which is in effect call by reference.
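In Swift terms, a sketch of what that looks like (Box and bump are hypothetical names):

final class Box { var value = 0 }

func bump(_ box: Box) {   // box is a copy of the reference, not of the object
    box.value += 1        // mutates the single shared instance
}

let b = Box()
bump(b)
print(b.value)            // prints 1; the caller sees the mutation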
Swift is awesome! I wish you could use it for more things.
Yeah! It's the language I didn't know I needed but still kinda can't have. "Easy mode Rust"
I love Swift too. While Apple actively encourages developers to use Swift for everything, Apple's main focus is adding new features that will benefit Apple platforms. They are always adding new features every year, but I would love for them to just take a break on adding new stuff and focus on bringing Swift on Linux and Windows up to parity with Mac. They are obviously pushing the C++ interop thing hard, but they do need to improve the tooling cross-platform, especially on Linux.
@@mistymu8154 totally. Although they needed many iterations to get the language to mature and to iron out the kinks. Now is a great time to make it cross platform properly.
@@mistymu8154
Why on earth would Apple want to help Linux or Windows. It would be like asking Mercedes to help GM build a better engine.
@@Art-is-craft Swift is open source. Also, Apple is a consumer hardware company; they compete with neither commercial software companies nor open-source operating system collectives. They make their own devices and they make the things that run on those devices. That's it. They don't have to compete with anyone who makes software, and to think they do is such a fundamental misunderstanding of technology that I have to wonder what you're doing here at all.
4:30 Me and a friend argued about argmax. I kept telling him it's the mode of a continuous distribution; he kept saying it's the index of the highest number in an array. I was confused.
Y'all need to hear 4:00 - 5:55. You're 100% right, Prime.
Lattner is great. His three interviews with Lex are worth a listen.
In languages like JavaScript, primitive datatypes (numbers, bools, strings, etc.) are passed by value, while complex datatypes (arrays, objects, etc.) are passed by reference.
What do you think of using R for plots?
Start, or end, position and length?
Can swift apps run on Windows 11 reliably yet?
I'm calling BS. I've met plenty of CS grads that don't understand the difference. I'm self-taught and I learned all of this stuff on my own back in the 90's. It's not about a BS degree, it's about the person. Some of the best programmers I know do not have a CS degree and didn't need spoon-feeding.
Swift is a nice language. With more recent versions, it has nice support for serializing/deserializing JSON data; back in Swift 1-2, only having dictionary support for JSON data kinda sucked. Swift definitely fixes a lot of the ugly bits of Objective-C.
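For reference, a minimal sketch of the Codable-based JSON support being described (the User type here is made up):

import Foundation

struct User: Codable {
    let name: String
    let age: Int
}

do {
    let json = #"{"name": "Ada", "age": 36}"#.data(using: .utf8)!
    let user = try JSONDecoder().decode(User.self, from: json)
    let encoded = try JSONEncoder().encode(user)
    print(user.name, String(data: encoded, encoding: .utf8) ?? "")
} catch {
    print("JSON round-trip failed:", error)
}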
All CS degrees nowadays cover both low-level and high-level languages and use C as a way to teach the concepts of reference and value. I've seen the programs of 4 different CS degrees here in Italy, from 4 different unis, and all of them teach this.
bro thinks we were spoon fed
Python's pass-by-reference behavior and mutability gave me trust issues early in university, particularly when copy and deepcopy often wouldn't actually allocate a new list/array 😂
I still haven’t encountered a better solution for force allocating a new collection than doing copy_of_foo = foo * 1
copy_of_foo = foo.copy() haha
That sounds about right thank you haha
Swift also shipped noncopyable types recently, and you can borrow variables.
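If I understand the feature right (Swift 5.9's noncopyable types plus the borrowing/consuming parameter modifiers), a sketch looks like this; FileDescriptor is a made-up type:

struct FileDescriptor: ~Copyable {
    let fd: Int32
    consuming func close() {
        // release the underlying resource exactly once
    }
}

func readAll(from descriptor: borrowing FileDescriptor) {
    // you can read through the borrow here, but not copy or consume it
}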
So in Swift, a data type is either pass-by-value or pass-by-reference, and never both. This is quite silly, as it means I have to encode the pass semantics of a datatype in the datatype itself. I much prefer encoding the lifetimes...
as a newbie this was super helpful! TY!
scalar = 0d array, vector = 1d array, matrix = 2d array, tensor = nd array (n ≥ 3)
Totally doesn't matter, but I really like being able to write a list of lets or vars in Swift without re-writing the let/var. It's silly, but I miss it when I switch languages. I think the only other compiled language I could do this in was Nim?
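For anyone unfamiliar, this is the style being described (the names are made up):

let width = 640, height = 480, title = "Demo"
var x = 0.0, y = 0.0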
I've enjoyed tooling around with Swift, but I'd rather code in vanilla vi or even nano than Xcode.
tensors are WHAT NOW???
3:01 - Lex Fridman interviewing The Grinch
I mean isn’t all ML basically linear algebra computation (under the hood) to compute data der stats.
We called it LAAG: linear algebra and analytical geometry.
Haskell slow? When did that happen? I thought it was marginally slower than C. It's a compiled language, not an interpreted one; in the tests I've seen it's slightly slower than C and like 30 to 40 times faster than the slow languages.
Lex pretending to know once again, or is it just the way he talks. 😂😂😂
Python is the same way for me, just use it when I need to process some data like once every 4 months.
When it comes to data, Scheme is superior.
"I know all those words, but I don't know what Swift is trying to do" - Ah, the eternal problem of programming, we have so much jargon but it's so context dependent, that ofthen it becomes useless.
So what was the original video about? I didn't catch it among all the pausing. Seemed boring.
Full time dev on Swift for iOS. It is fine, nothing seems annoying or great about it.
It is designed to be productive.
Lattner nowadays is working on Mojo; that's what I would learn next.
Well done.
So much of Rust is present in Haskell. Almost everything I hear people praise at least.
I guess this is why I never commented any (JavaScript) code: I had the nagging feeling that there was something like ownership out there, so important and yet so impossible to infer from the code in a typical JavaScript project, that somebody looking at a codebase for the first time would want most of the comments to be about it.
Just kidding; tomorrow "code that has to be fast because while it runs it has locked this or that resource" will be the next big thing to comment, and I'll throw out all my ownership comments. Better to stick to not commenting at all and have Copilot/ChatGPT explain it (?)
Btw, I kinda like that with array.sort the name hints that sort will change the array, whereas array2 = mysorter(array) should probably not change the array, or otherwise it deserves a comment.
Lol, "this guy" like Chris Lattner isn't a modern day legend.
6:30 This is Elementary Linear Algebra; You can put a matrix inside a tensor.
Remember, Swift is not by Apple; Swift is by Chris Lattner. Most of the frameworks for the Swift language are by Apple, though...
Should do the next AOC in Swift.
Pass by reference does not copy. It's a pointer under the hood.
"Work or whitepapers." Is this the first time Prime uses this?
Swift does seem VERY appealing. I'm interested to learn it better too
ARC and CoW and structs are value semantics
So basically, he's describing part of the reasons Haskell is great
Isn't Swift a programming language for Apple apps?
yes but it can be used for everything
"Pass by copy" that ISN'T a synonym for "pass by value" is quite possibly the most useless CS term I have ever heard. A CPU literally ONLY copies, the "mov" instruction *_is_* a copy.
I thought in this day and age CS basics to a certain degree would still be common knowledge even if you didn't study in traditional post-secondary institutions. I wonder if the bootcamps are weeding out the a lot of the important stuff to keep the attention span of the new learners. It then comes back later like a slapshot to the shins.
You are assuming that it is taught in traditional post-secondary institutions. It is not. Not reliably, anyway, and certainly not in the 100- or 200-level classes. I was in my discrete structures class (probably the most useful class, if taught well) with some comp-sci students. They were in their 4th semester of comp-sci and struggling to understand why a C++ function took out-parameters as pointers. I had noticed that students finishing 2 semesters of comp-sci didn't have a clue how to actually program, so I'd actually signed up for CS101 at the same time as discrete structures.
At this point, if you are going to university to learn to program, my general recommendation is to get a math degree. You can learn everything the CS-half would teach you in a long weekend once you know the math, but the inverse is not true.
Legit, my first programming class in my freshman year was intro to programming with C, and one of the first things we did was write functions that used pass by value and pass by reference, so there are definitely institutions that teach this stuff. Perhaps they could be doing more, but this is extremely basic and common knowledge.
I like my pass by references
I'm trying to think when i last needed to copy.copy() or copy.deepcopy() in python 😂 what a non-issue.
Feel the same way about python.
"In Swift, Array, String, and Dictionary are all value types" .... da faq did I just read
This is great and all, but are we not going to mention that PHP is great? Like guys, you can write PHP and it will instantly work on your web server!! OH SHUCKS. It gets me excited just talking about it
Thought - i saw you that one time, yeah - in that Miura, right?
I will admit to not understanding Swift after using it for a week, but I still hate it. I hate Xcode; it's beyond slow and has no features. Java IDEs, either Eclipse or IntelliJ, destroy Xcode. Wrapping and unwrapping as the core of everything? Not knowing the type of a variable as a feature? Not passing by reference or value? The syntax is beyond ugly, but I am still stuck on "let's first unwrap this and null-check it all at the same time". Error handling? You think ? ? ? ? ? is error handling? let vs. var, why? Class vs. struct, why? Renaming everything: it's not an interface, it's a protocol; it's not an attribute, it's a property wrapper. What is this obsession with WRAPPING things? Generally, wrapping objects is an ANTI-PATTERN, but in Swift it's a core feature?? The F? The entire thing feels juvenile and wrong. Java sucks because you have to type "public static void main String args" ONE TIME for every application???? ONE TIME?? That's the problem with Java? Really? Then proceed to create the worst syntax of all time? "var body: some View { Text("Hello, SwiftUI!") }" Are you kidding me? "some" View? SOME as a type? GTFOH Swift. Apple, just use Java for Christ's sake, or Dart, or JS, but not C#, that shit is for bitches.
Pinescript baby
As a Swift and Python programmer, I slowly tend to use conventions that are more common in Swift, one of them being that I try to pass value types to functions (even using `copy.deepcopy`). I imagine this would help a lot if I ever need to work with Python without the GIL.
@ThePrimeTime you should definitely learn swift!
.... so basically Swift works as C# ;)
Swift really is a great language. I wish I could bring it to my system, but the environment 😭
Everything points me to Go, the dumbest, middest, but best designed language ever.
Swift has structs with methods!
C# rules them all
So this is a 3-year-old video on how awesome Swift is for AI, yet most of its AI tooling to date is Apple-only. Yuck, or am I wrong?
There was Swift for TensorFlow, where they implemented differentiable programming right into Swift. The team later split off into two companies, PassiveLogic and Modular (where Chris Lattner, the guy in the video, currently works). Watch PassiveLogic's launch event, especially the part about Differentiable Swift: ruclips.net/video/xr8sV3GdhVk/видео.html
@@DjStanislav Thanks I'll check it thoroughly later
No one knows Rust; why would they reference it?
Swift is Apple sheep language.
Absolute braindead take.
You say your audience is JS devs, then ask them a Go question, they answer, and you accept it as the right answer. lol
Easy Breezy.
I think we need some F# or Clojure in your life bud.