- Video 16
- 14,589 views
MPAGS - High Performance Computing in Julia
Added 18 Jan 2023
06. Assignment 1 (Tutorial) [HPC in Julia]
In this video, I will show you how to follow along with the assignments for this course.
For those of you on the MPAGS course, please use the GitHub classroom links sent via email.
This module was designed as an MPAGS (Midlands Physics Alliance Graduate School) module and aimed at postgraduates and early career researchers.
Assignment Links:
1) Getting Started in Julia - github.com/MPAGS-HPC-in-Julia/assignment-1-getting-started
Timestamps:
00:00 Introduction
00:09 GitHub Classroom (MPAGS only)
00:40 Non-MPAGS
01:30 Cloning your repository
02:32 Initial setup
05:26 Adding Revise
06:19 Implementation
07:42 Running tests (VS Code)
09:01 Submitting your implementation
11:08 Running tests (REPL)
Useful link...
Videos
05. Julia Overview [HPC in Julia]
167 views · 19 hours ago
In this video, I will show you how to install the software required to follow along with this course, as well as give a casual introduction to the language. This video is not designed to be a programming tutorial; instead, we will focus on the specific syntax and paradigms that Julia uses. By the end of the video, you should be more comfortable reading Julia code and understanding how Julia oper...
04. Compilers and Interpreters [HPC in Julia]
890 views · 1 day ago
In this video we introduce the concept of compilers and interpreters and explore how these differing approaches affect runtime performance. In particular, we explore the Julia compilation pipeline. Timestamps: 00:00 Introduction 01:27 Compilers (C) 04:29 Intel Assembly 08:30 Interpreters (Python) 12:26 "Vectorised" code 13:28 JIT Compilation 14:11 Julia "Just Ahead of Time" 16:21 Julia Compilat...
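As a hands-on companion to the pipeline described here, Julia's `InteractiveUtils` standard library exposes each compilation stage via introspection macros (the function `add` below is just an illustration, not from the video):

```julia
# Inspect each stage of Julia's compilation pipeline for a tiny function.
# InteractiveUtils is part of the standard library (preloaded in the REPL).
using InteractiveUtils

add(x, y) = x + y

@code_lowered add(1, 2)      # lowered IR, after parsing and desugaring
@code_typed add(1, 2)        # IR after type inference
@code_llvm add(1.0, 2.0)     # LLVM IR handed to the JIT compiler
@code_native add(1.0, 2.0)   # final machine code for the host CPU
```

Note how the same source produces different machine code for `Int` and `Float64` arguments - each call signature is compiled separately.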
03. von Neumann Architecture and Data Types [HPC in Julia]
316 views · 14 days ago
In this video, we begin discussing some fundamental ideas in computing - starting with the von Neumann architecture and finishing with a discussion of data types. Timestamps: 00:00 Introduction 01:10 Online notes 01:18 von Neumann architecture 09:03 Integers 16:33 Booleans 18:25 Chars 19:56 Floats 25:10 Outro This module was designed as an MPAGS (Midlands Physics Alliance Graduate School) modul...
02. Assessment Information [HPC in Julia]
95 views · 14 days ago
In this video I will describe how this module is assessed. We take a look at the first assignment for this course, along with some useful software for interacting with GitHub. Timestamps: 00:00 GitHub Classroom 01:08 Repository 01:54 Unit tests 02:40 Assessment templates 03:42 Getting code on your machine 06:10 Visual Studio Code 06:30 Syncing changes with GitHub 07:18 Assessment criteria This ...
01. Introduction [HPC in Julia]
268 views · 14 days ago
This is the first video in the High Performance Computing in Julia series. In this video we will introduce the syllabus for this module, along with giving some motivation behind why you should be interested in taking part. Timestamps: 00:00 Introduction 01:03 Schedule MPAGS 01:43 Syllabus 03:20 Motivation 06:02 Julia 07:50 End This module was designed as an MPAGS (Midlands Physics Alliance Grad...
CUDA.jl Kernel Programming (HPC in Julia 10/10)
1.1K views · 1 year ago
MPAGS: High Performance Computing in Julia In this lecture we take a deeper dive into GPU programming via custom kernels. We focus on the CUDA programming paradigm, but lessons learnt should transfer to other GPU kernel programming paradigms. We discuss the partitioning method in CUDA, by splitting into grids, blocks and threads. We write some basic kernels in CUDA.jl, along with a more advance...
Introduction to GPU Programming & CUDA (HPC in Julia 9/10)
2.3K views · 1 year ago
MPAGS: High Performance Computing in Julia In this lecture, we talk about the concept of GPU programming, including the differences between GPU and CPU hardware. We discuss some models of how to compute on a GPU, with particular focus on CUDA and the CUDA.jl library. We cover some examples of the high-level array based programming mechanism provided by CUDA.jl to avoid the need to write one's o...
Research Software Engineering (HPC in Julia 8/10)
412 views · 1 year ago
MPAGS: High Performance Computing in Julia In this lecture we briefly discuss some helpful professional skills when writing code for research. We touch on the basics of version control, good documentation & design practices, and making your code reproducible. We also talk about Open Source development and ways to contribute to existing open source projects. This module was designed for the Midlan...
Multiprocessing & Cluster Computing (HPC in Julia 7/10)
1.1K views · 1 year ago
MPAGS: High Performance Computing in Julia The multiprocessing parallel programming paradigm allows us to utilise multiple computers to work on a problem at once, scaling beyond the capabilities of multithreading, but at the cost of higher abstraction and latency. In this lecture, we talk about the traditional MPI approach, but focus most of our energy talking about Julia's Distributed.jl frame...
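A minimal taste of the Distributed.jl style mentioned in the description (a sketch only; in a real session you would first launch workers with `addprocs` and define functions on them with `@everywhere`):

```julia
using Distributed

# In a real multi-process session you would do:
#   addprocs(4)                   # launch 4 local worker processes
#   @everywhere heavy(x) = x^2    # define the function on every worker

# pmap farms the work out across the available workers; with no workers
# added it simply runs on the master process, which keeps this runnable.
squares = pmap(x -> x^2, 1:10)
```

The same `pmap` call scales from a single machine to a cluster, which is the abstraction-for-latency trade-off the description mentions.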
Multithreading (HPC in Julia 6/10)
1.1K views · 1 year ago
MPAGS: High Performance Computing in Julia This lecture covers a few topics important for correct and efficient multithreaded parallel implementations. This module was designed for the Midlands Physics Alliance Graduate School (MPAGS). More information can be found on the website.
Introduction to Parallel Programming (HPC in Julia 5/10)
715 views · 2 years ago
MPAGS: High Performance Computing in Julia In this lecture we talk about trends in microprocessors and why we actually need to think about parallel programming. We give some theoretical understanding of what can and cannot be easily parallelised and the limits thereof. We touch on a simple Monte Carlo map-reduce algorithm for computing pi. This example will be finished in session 6. This module...
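The Monte Carlo map-reduce estimate of pi mentioned in the description can be sketched serially in a few lines (`estimate_pi` is an illustrative name; the parallel version is what session 6 builds):

```julia
# Sample random points in the unit square and count those that land
# inside the quarter circle (the "map"), then total the hits (the
# "reduce"). The hit fraction approximates pi/4.
function estimate_pi(n)
    hits = 0
    for _ in 1:n
        x, y = rand(), rand()
        hits += (x^2 + y^2 <= 1)   # 1 if inside the quarter circle
    end
    return 4 * hits / n
end

estimate_pi(1_000_000)   # approaches pi as n grows
```

Because each sample is independent, the loop parallelises trivially - each worker computes its own hit count and the totals are summed at the end.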
Optimisation & Type Stability (HPC in Julia 4/10)
986 views · 2 years ago
MPAGS: High Performance Computing in Julia This lecture continues the trend of talking about optimisation. We walk through some code examples directly in the REPL. We also cover some basics of type safety and type stability. This module was designed for the Midlands Physics Alliance Graduate School (MPAGS). More information can be found on the website.
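To make the type-stability idea concrete, here is a classic toy example (function names invented for this sketch):

```julia
# Type-unstable: for x::Float64 the two branches return Float64 and Int,
# so the inferred return type is Union{Float64, Int64} and the compiler
# cannot emit tight code for callers.
unstable(x) = x > 0 ? x : 0

# Type-stable: zero(x) matches the type of x, so the return type is
# a single concrete type and can be fully optimised.
stable(x) = x > 0 ? x : zero(x)

# In the REPL, `@code_warntype unstable(1.0)` flags the Union in red,
# while `@code_warntype stable(1.0)` shows a concrete Float64.
```

The fix is typical of stable code: derive values from the argument's type (`zero`, `one`, `eltype`) rather than hard-coding literals of another type.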
Measuring Performance & Optimisation (HPC in Julia 3/10)
1.5K views · 2 years ago
MPAGS: High Performance Computing in Julia This lecture covers the basics of measuring the performance of your code via benchmarking and profiling. We also cover some basic optimisation techniques, such as reducing heap allocations and using cache-friendly implementations. This module was designed for the Midlands Physics Alliance Graduate School (MPAGS). More information can be found on the website.
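A rough illustration of the allocation-reduction idea, using only Base timing so it stands alone (BenchmarkTools.jl's `@btime`/`@benchmark` are the standard tools, since they take many samples and exclude compilation; function names here are invented):

```julia
# Compare an allocating implementation with an in-place one.
double_alloc(xs) = [2x for x in xs]            # allocates a fresh array per call
double_inplace!(out, xs) = (out .= 2 .* xs)    # writes into a preallocated buffer

xs  = rand(10_000)
out = similar(xs)

# Warm up first so JIT compilation isn't included in the timing.
double_alloc(xs); double_inplace!(out, xs)

t_alloc   = @elapsed double_alloc(xs)
t_inplace = @elapsed double_inplace!(out, xs)
```

In a hot loop the in-place version avoids one heap allocation (and the resulting GC pressure) per call, which is usually where the measurable win comes from.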
SIMD & The Stack and the Heap (HPC in Julia 2/10)
1.2K views · 2 years ago
MPAGS: High Performance Computing in Julia This mini-lecture introduces the idea of hardware-level SIMD on the CPU. Additionally, we discuss the concept of stack and heap memory for a process and dive into an example of how a stack can be used to efficiently evaluate an expression. This module was designed for the Midlands Physics Alliance Graduate School (MPAGS). More information can be fou...
Hardware & Software Basics (HPC in Julia 1/10)
2.5K views · 2 years ago
Thanks for this introduction!
Very nice introduction! Loving it!
Thanks a lot, and looking forward to the next video! The speedup used to depend a lot on code style and the language constructs used. Some were very fast, others not at all, and unfortunately the more elegant solutions were slow :( So slow, in fact, that adding JIT on top of it made it not really fun to use. Actual on-disk caching of these JIT outputs would make a great difference. How long does it take to load some plotting libraries today? This used to take like 50 seconds. So imagine writing your code in vim and re-running the scripts all the time with 50+ seconds of startup JIT, plus your actual code runtime. Tough times :D So I hope this got faster, and that some day I'll learn what I did wrong when trying to write some nice, high-performance code. Is it a year of Julia already? Let's see!
Thanks for your comment! As for Julia, a lot of progress has been made on "TTFX" (Time to first X - i.e. first plot takes a while due to JIT). I haven't measured it recently but it feels like it's less than 10-20 seconds now and it's only paid once. This is better due to better package precompilation. Adding new packages still causes a lot of precompilation, but this usually happens rarely. If you are using a REPL based workflow (instead of running scripts) then you usually pay this only once/twice per day so it doesn't feel that bad at all. In the next few videos, I'll talk about an optimal workflow to try and minimise the precompilation cost so TTFX doesn't feel like too much of an issue.
It's a slippery slope to compare performance between different languages on a specific test, because it's very easy to draw a false conclusion. Doubly so if you consult ChatGPT to explain assembly, when it has a decent chance of hallucinating something up. And I would expect that if you take the time to rigorously make sure the languages are optimizing to the best of their ability AND doing an EQUAL amount of work, the performance would be broadly similar, as both languages drive towards the set of assembly instructions that is most efficient. Also... I've been PRIMED to be suspicious of language comparisons, and I keep this in my noggin just in CASE I come across a fancy graph: (I'm sorry) ruclips.net/video/RrHGX1wwSYM/видео.html ruclips.net/video/uVVhwALd0o4/видео.html
Definitely agreed that these "micro benchmarks" are limited in what they can teach us. Correctly (and fairly) benchmarking is difficult, and next week I'll be releasing a video on that topic. In this example, the main takeaway is the difference between interpreted Python and compiled code, which are orders of magnitude apart. You're right to be skeptical about performance comparisons - statements like "language X is faster than language Y" aren't very useful without looking at a specific example and trying to explain why the differences arise.
Commenting before watching the video: I suspect all of this "speedup" is bullshit. I've dealt with the language and it was *very far* from what was "advertised" - it was very slow and borderline unusable. I'll watch the video and comment afterwards on whether I'm convinced otherwise or not. I truly hope it got un-hyped, demystified, and gained actual performance. Let's go!
Many people get a bad impression of Julia due to the JIT compilation - but many of those issues have improved (and are still improving) as the latency from JIT comes down. A lot of the issue can also be avoided, or at least mitigated, by using a good workflow.
Great Video: Keep up the good work!
Here's something to consider: At around 24:40 into the video when you stated that those who have a background in mathematics that they wouldn't like the idea of 1/0 being infinity. This all comes from the years of people being taught that division by 0 is undefined. I have a very good background in Math, Physics, and other fields of the sciences. Yet, I will argue that division by 0 is NOT undefined. I would claim that may be ambiguous, but not undefined. We are taught that division by 0 is an error, that it cannot be done. I beg to differ. I have experience in C/C++, a little bit of C#, Python and some Assembly, I have no experience within Julia, yet I think the way that Julia is treating 1/0 as infinity is quite accurate. Why? Division is defined as the inverse of multiplication. Multiplication is defined as repeated addition. This implicitly implies that Division isn't just the inverse of multiplication but is also the same as repeated Subtraction. This is one of the key points towards understanding this. Subtraction is defined as either the inverse of Addition or it's simply Addition with the multiplication of the additive inverse. For example: A - B = A + (-B) which is the same as: A + (-1 * B). Here we have an implicit multiplication of (-1). And again, multiplication is repeated addition. Thus, Subtraction is also Repeated Addition in some sense. The only difference (pun intended here) is that the repeated addition in the case of subtraction only occurs Once. This is because multiplication by 1 or (-1) is a single operation or transformation. The +/- determines the direction or orientation. Consider these two expressions or equations. (1 * -1) AND (1-2). Here we have an expression based on the multiplication operator and we have one that is based on the subtraction operator. Yet these two expressions are Equivalent classes as they yield the same result. The transformations being applied might be different, but the destination point is the same. 
How does this constitute division by 0 not being undefined? We need to also consider some of the basic properties of Arithmetic such as the Additive Identity and the Commutative properties. Additive Identity: 1 + 0 = 1 0 + 1 = 1 1 - 0 = 1 These three equations satisfy the additive identity. The only exception is the expression 0-1 = -1. This does not directly satisfy the Additive Identity because 1 != (-1). Yet it does satisfy the Additive Inverse Property: A * (-1) = (-A) OR (-A) * 1 = (-A). We'll come back to this in a moment as we need to show how this relates to and preserves the Commutative property of A+B = B+A For Addition: 1 + 0 == 0 + 1 is True For Subtraction: 1 - 0 == 0 - 1 is False What's going on here? When evaluating simple arithmetic expressions, we are typically taught to only be concerned with the Output or Result of the operation. We are blindly or ignorantly taught to ignore what happens or takes place when we attempt or try to apply a given operation. This is one of the points within mathematics that is overlooked. With these four base cases I will show a table to demonstrate the importance of what we tend to overlook or bypass. This table will show the basic Additive Identities of both Addition and Subtraction, but what we need to ask here is if the operation being applied is Generative or Not. What does it mean to be Generative? Within the binary operators of both addition and subtraction, we have two operands, a LHS (Left Hand Side) and a RHS (Right Hand Side) respectively. We typically read or evaluate expression from Left to Right by generally accepted conventions. Within the context of this accepted convention and framework, we can easily ask: When applying said operator between LHS and RHS, the question is Does RHS transform or change LHS? If it does, this expression is Generative, if not, it is Non-Generative analogous to a No Op within computer science meaning that a transformation or translation did not occur. 
Here's the basic table: (1+0) = 1 Non Generative : 0 does not change 1 via addition (0+1) = 1 Generative : 1 does change 0 via addition (1-0) = 1 Non Generative : 0 does not change 1 via subtraction (0-1) = -1 Generative : 1 or -1 does change 0 via subtraction or addition by -1. What does this have to do with division by 0? Here we need to look at the values of 0 and 1 as points on a plane or a vector within a field. When we do this, we can easily see a bunch of patterns that emerge that are all connected through basic Algebra from Linear Equations and their Slopes to the actual Trigonometric Functions, the Dot Product, and more. We can consider the value of 1 as being the vector (1,0) The line segment from the origin (0,0) to this point (1,0) has a slope of 0. From the slope-intercept form of the line: y = mx+b we know that the y-intercept is b and that the slope defined as rise/run is m and can be calculated by any two points (x1,y1) and (x2,y2) on the line from the formula (y2-y1)/(x2-x1) = deltaY/deltaX. This will be important to keep in mind. We can take the two points (0,0) and (1,0) and plug it into the slope formula and we end up with a slope of 0. This is because this line is horizontal, or it is parallel with the horizontal. To keep things simple and throughout the rest of this we are going to fix the y-intercept to be 0. This will simplify the slope-intercept form of the line y =mx+b to simply y =mx. Where does the trigonometric functions come in? How does this show that division by 0 isn't undefined? If we take the point (1,0) which is also the value of 1. We know that it's slope m from the origin and its magnitude points is 0/1 which evaluates to 0. Let's either multiply this by -1 or subtract it by 2. What point do we end up at? In both cases, we end up at the point (-1,0). This also has a slope of 0. 
This subtraction of 1 by 2 to get to -1 is a horizontal translation where the multiplication by -1 is also a horizontal translation but is also a rotation. Multiplying by -1 is the same as rotating this vector about the origin (0,0) by 180 degrees or PI radians. Where does division by 0 come into play? What about the Trig functions? Okay what happens when we take this vector (1,0) and we rotate it by say 90 degrees or PI/2 radians; 1/2 the rotation of multiplying by -1? This has the same exact effect by multiplying by i where the head of the vector that extends from the origin (0,0) and points at (1,0) is now rotated with its tail fixed at (0,0) and it now points at (0,1). The original slope-intercept form of the equation with b equal to 0 for the line segment defined by (0,0) and (1,0) is simply y = 0 since y = 0*x + 0 = 0. The slope m is 0 as it has the fraction 0/1. The new slope of this line when it is rotated to (0,1) then has the y-intercept form of y = infinity simply because the slope has the form 1/0. This is Vertical Slope. It is tangent, perpendicular, orthogonal, normal to the x-axis. This is a 90 Degree or PI/2 rotation. It's the same thing as multiplying it by i. (continued...)
(...continued) Within mathematics we are typically taught about sets of numbers: counting, whole, integers, rationals, irrationals, reals, imaginaries, the complex numbers, etc. Here I'm going to stick with Integer Arithmetic for simplicity and for the base cases, but this can be extended to floating point as well. Instead of referring to the 2D Cartesian Real plane or to the Complex plane, I propose to look at the x-axis as being the Horizontals (similar to the Real numbers within the complex plane), and the y-axis or i-axis as being the Verticals, or Perpendiculars, Orthogonals or Normals - and the entire field, instead of being Complex, simply referred to as the Set of Rotational Numbers. Thinking of it in these terms ends up making things more intuitive as well as resolving many common issues or misconceptions within mathematics. I wouldn't change the common notation that is used within complex arithmetic, such as 3 + 5i. That notation is still fine. However, instead of calling them complex or calling the i-values imaginary, I'd prefer to call them Rotational and Perpendicular respectively. What does this have to do with division by 0, and what about the Trig functions? If we take this rotation of 90 degrees or PI/2 radians and divide it by 2, we end up with a rotation by 45 degrees or PI/4 radians. This arbitrary unit vector will now be at the point (sqrt(2)/2, sqrt(2)/2) and it has a slope of 1. This is simply the line y = x. Here we can easily see that there is a direct correlation and relationship between the slope or gradient of a line and the tangent function: within the simplified slope equation, dy/dx is simply sin(t)/cos(t) = tan(t). The reason I state that the numbers are rotational is because they are not scalar as we are taught. They are actually multidimensional because they are rotational.
Having this type of perspective and understanding becomes clearer when we understand this primarily due to Modulus Arithmetic: (Integer - Remainder Division). This is where we can see that Division by 0 is not undefined. It may be the case that it can be ambiguous, and it is purely ambiguous when we are only concerned with the Quotient and completely ignore the Remainder. Most operators within mathematics specifically binary operators typically have 2 operands and yields a single result, however when it comes to division, it doesn't yield one result, it in fact yields two results. We have the Quotient and the Remainder. So, what is division by 0? It is repeated subtraction, or repeated addition with the additive inverse or the Multiplicative Inverse Property implicitly applied. When we evaluate and or perform division to find the Quotient, we are asking how many times we can successfully subtract the Divisor or Denominator from the Dividend or the Numerator. Then depending on the temporary results, we have conditional checks that have to be applied. We check to see if the temporary result is less than the divisor and still greater than 0. If this is the case, then we can count up how many times we subtracted, and this becomes our Quotient, and the last temporary becomes our Remainder. In the case that the subtraction results to or is equal to 0, then we terminate, count up the number of times we subtracted where this is our Quotient, and our Remainder is 0 because we have an even or perfect division. There are some other cases to where the subtraction performed becomes negative, but that is beyond the scope of this. For the simple case of division by 0 we can look at 1/0 and try to perform long division: 1 -0 --- 1 -0 --- 1 ... Hmm we can successfully subtract 0 from 1 forever. This becomes an infinite loop where the counter of the loop, the Quotient Ramps Up to Infinity. It is also a No-Operation. Remember above: 1+0 and 1-0 Are Non-Generative? 
When we see 1/0 we can treat this as simply being either of the following: 1 + 0 + 0 + 0 + 0 ... where we continuously add 0 never changing 1. 1 - 0 - 0 - 0 - 0 - 0 ... where we continuously subtract 0 never changing 1. This is well behaved and well defined. What makes it ambiguous? When only considering the Quotient and completely ignore the Remainder we can see the following: 1/0 = Q = Infinity 2/0 = Q = Infinity 3/0 = Q = Infinity ... N/0 = Q = Infinity This is where the ambiguity comes in. We cannot undo this operation without more information. There's an infinite family of expressions that all tend to infinity. How can we resolve this ambiguity? It's quite simple! We Can Not Ignore the Remainder! Okay but how does division by 0 produce a remainder? Consider 1/0 from the long division. Our Quotient is ramping up to Infinity. However, for every single iteration of the subtraction loop, the Numerator or Dividend is left Unchanged since there is no transformation being applied. So, in the event of division by 0 we can see that the remainder is the Numerator. We can observe this new table where we do not ignore the Remainders: 1/0 = Q = Inf, R = 1 2/0 = Q = Inf, R = 2 3/0 = Q = Inf, R = 3 ... N/0 = Q = Inf, R = N Within Division by 0, other than the only exception being when the numerator is 0 as this does give us the indeterminate form 0/0 where this is simply the Origin (0,0) and that it is the Zero Vector a 0D Point where it is only relative to itself. This form of 0/0 is a point of rotation, a point of reflection a point of symmetry and so on. With this special case being considered, I then propose the convention that within division by 0 when the numerator or dividend is NOT zero that the Quotient itself will always be Infinity. The Remainder will always be the Absolute Value or Magnitude of the Numerator or Dividend and that the Quotient will take on the (Sign) of the Numerator or Dividend. 
For example: -42/0 = Q = -Inf, R = 42 With this type of convention, we can now easily distinguish division by 0 based on the remainder. Here the Quotient of being +/- Infinity strictly implies that this is Vertical Slope which is Perpendicular to the Horizontal. The Sign of the Infinity implies the direction we are heading Up or Down. The Remainder itself the absolute value of the Numerator or Dividend is simply where we are at on the vertical. How many steps or rungs up the ladder we are. This is division by 0. It is Not Undefined! Ambiguous sure it can be, but not always. Undefined? No! It's well defined and well structured. One of the misleading things to fully understanding this is to better understand what 0 really is. Zero itself is both even and an integer. Zero is Not Positive Nor is it Negative. And zero is also Not a Number. It is the Opposite of a Number. It is the Antithesis of a Number. It is Empty, Void, Null. In some sense Zero is a Type of Infinity but it's the Opposite of Infinity. Consider the Tangent Function at 90 Degrees. There's a vertical asymptote that tends off to both + and - infinity. This section of the tangent function (within this particular period) is a smooth and continuous curve that goes on forever. And this pattern repeats itself horizontally. So here, when we have values or functions that tend towards +/-infinity away from zero these are Divergent. When we have things that tend towards 0 or some other discrete value which is pointing to that value which happens to be a 0D Arbitrary Point are Convergent. We just have to consider the way we've been looking at perceiving and treating 0, and how it associates with all other non-zero values based on some operation that may or may not result in a transformation. Every non-zero value is unequivocally without a doubt, orthogonal, perpendicular, normal, tangent to zero, they are 90 degrees separated. 
This is why we see a state change of the vector (1,0) which has a slope of 0/1 to becoming (0,1) with a slope of 1/0 when we rotate it by 90 degrees or PI/2 radians. This is the exact difference in horizontal translation between the Sine and Cosine functions in which the Tangent function is defined as Sine/Cosine. And there you have it! Division by 0 is NOT UNDEFINED! I Just Now Defined It and Demonstrated It! All Just Food For Thought! Challenge Everything! Question Everything, Put It To The Test!
This series is really helpful and clear. Thank you !!!
Note: Floats are not associative (a + (b + c) != (a + b) + c), but they are commutative (a + b == b + a). Reason: the representation of floats in memory is a discrete representation of a continuous value. Anyone in the comments mind elaborating on that?
I think what he meant is that, due to the inevitable rounding errors when representing real numbers with a finite number of bits, associating numbers in different ways may lead to different results. I can't come up with an example off the top of my head, though.
Here is a classic example of the challenges with floating-point rounding:
a = 1e15   # large number
b = 1e-15  # small number
(a + b) - a == b   # evaluates to false: (a + b) is rounded to a, and subtracting a gives 0.0, which does not equal b
(a + b) == a       # evaluates to true, since a + b is rounded to a
rounding error of floating point numbers
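To make the non-associativity discussed in this thread concrete, a demonstration runnable in any Julia REPL:

```julia
# Floating-point addition is commutative but not associative, because
# each intermediate sum is rounded to the nearest representable value.
a, b, c = 0.1, 0.2, 0.3

a + b == b + a               # true: commutativity holds exactly
(a + b) + c == a + (b + c)   # false: (a+b)+c gives 0.6000000000000001, a+(b+c) gives 0.6
```

The grouping decides which intermediate value gets rounded, so different groupings can land on different final results.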
Thanks man!
Using threadid() is not necessarily thread-safe. Please refer to the official Julia blog post titled "PSA: Thread-local state is no longer recommended".
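The pattern that blog post recommends instead of `threadid()`-indexed buffers - one spawned task per chunk, each with its own local accumulator, reduced at the end - looks roughly like this (a sketch; `tsum` is an illustrative name, not an API):

```julia
using Base.Threads

# Split the work into chunks, give each spawned task its own partial
# sum, and combine the partial results at the end. No shared mutable
# state, so no reliance on tasks staying pinned to one thread.
function tsum(xs; ntasks = nthreads())
    chunks = Iterators.partition(xs, cld(length(xs), ntasks))
    tasks = map(chunks) do chunk
        @spawn sum(chunk)   # each task owns its own accumulator
    end
    return sum(fetch, tasks)
end
```

Because each task only touches its own chunk and its own local sum, this stays correct even when the scheduler migrates tasks between threads.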
👏 Thanks for the series, and greetings from University of Michigan!
👏
function insertintosorted!(numbers, num)
    # Binary search for the insertion point that keeps `numbers` sorted.
    # Invariant: numbers[1:startpoint] <= num < numbers[endpoint:end].
    startpoint = 0                  # position before the first element
    endpoint = length(numbers) + 1
    while endpoint - startpoint > 1
        midpoint = (startpoint + endpoint) ÷ 2
        if num < numbers[midpoint]
            endpoint = midpoint
        else                        # num >= numbers[midpoint]
            startpoint = midpoint
        end
    end
    insert!(numbers, endpoint, num)
    nothing
end
This is a fantastic introductory course, many thanks for sharing it 👌