Mainframe Cobol built to last decades - M170

  • Published: 15 Sep 2024
  • Science

Comments • 81

  • @LandNfan
    @LandNfan 1 year ago +5

    I’m a self-taught programmer who started in COBOL in 1975 on an IBM System/3 Model 10. I spent the next 20 years writing COBOL business applications on the System/3, a Honeywell Level-62, and a Wang VS300 (the best kept secret in the IT business). Then in 1995 I was thrown in to sink or swim in the world of Unix (AIX), Pick D3 database, and the SB+ 4GL and never looked back. Retired in 2009 and haven’t written a single line of code since. I was even fortunate enough to meet Grace Hopper and hear her speak on two occasions. Somewhere in the back of a desk drawer I still have an official Grace Hopper “nanosecond”, an 11” piece of wire she used to illustrate her talk.

  • @133m4n
    @133m4n 4 years ago +11

    For more than 30 years I have been programming COBOL 74 on a Unisys mainframe. Many programs written 30 years ago are still running today.

  • @estpst
    @estpst 4 years ago +8

    My favorite programming language

    • @MBrieger
      @MBrieger 4 years ago +1

      The problem is that you should use the language appropriate for the task.
      On the AS/400 (iSeries) you can combine any language into one program:
      data entry in COBOL, processing in C, reporting in RPG.

  • @mightwilder
    @mightwilder 4 years ago +9

    It would be great if you managed to write a basic create/read/update/delete example in COBOL and exposed it in modern form as a REST web service, to show that the old machine could service thousands of requests compared to a modern CLR or JVM solution.

  • @jeffm3045
    @jeffm3045 4 years ago +6

    To me what is most amazing is that, as Moshix has demonstrated here, we can still take those 40+ year-old digital artifacts TODAY - ancient compilers, linkage editors, operating systems, etc. - and, with the magic of emulators, STILL build and innovate on them. We have a Y2K-compliant version of MVS 3.8, and more and more modern tooling is being put on top of it with each iteration of the TK4. I look forward to the day when we have a working database for MVS 3.8!! It's really too bad that IBM doesn't see the light and help encourage these efforts rather than stifle them. These efforts, more than any advertising, DEMONSTRATE the SOLID FOUNDATIONAL QUALITY of the S/360 architecture and its descendants. They really need to release XA to the community and, again, just see what can be done with it - I think such a move would only help generate more interest and good publicity for their mainframes today.

    • @moshixmainframechannel
      @moshixmainframechannel  4 years ago +1

      Hear hear. Agree with every word

    • @timothykeith1367
      @timothykeith1367 3 years ago +1

      @@moshixmainframechannel I did COBOL years ago. Is Hercules good enough to become familiar again with IBM mainframes? I don't see many COBOL jobs locally, but the language never goes away. At my age I'd actually be okay with a "dead end" job in IT. I mostly work with Linux. It seems like the jobs are going to H-1B visa holders.

    • @moshixmainframechannel
      @moshixmainframechannel  3 years ago

      Yes, Hercules with MVS 3.8 is very good for COBOL.

  • @robertmaclean7070
    @robertmaclean7070 3 years ago +2

    Thank you. So many people don't know, and they don't know that they don't know.

  • @misium
    @misium 4 years ago +6

    Could you point us to some introduction to working (and programming) with/on mainframes? Workflow options, environments.

    • @moshixmainframechannel
      @moshixmainframechannel  4 years ago +2

      Yes. Watch this channel.

    • @boxedowl
      @boxedowl 4 years ago +1

      You can sign up for a free learning account to follow along.
      github.com/openmainframeproject/cobol-programming-course
      developer.ibm.com/videos/cobol-programming-with-vscode-course-launch-webinar/

  • @lcarliner
    @lcarliner 3 years ago +1

    There is one exception to full backward compatibility: CICS applications. Later COBOL compilers introduced the “ADDRESS OF” construct, which essentially replaced the address-cell workaround mechanism. As CICS was upgraded, older COBOL sources that were not updated to use the “ADDRESS OF” construct experienced intermittent failures!

  • @durrcodurr
    @durrcodurr 8 months ago +2

    Linux programs can be pretty binary-compatible too; for example, you can run 32-bit programs on 64-bit OSes. The easiest approach is to use static linkage, so the libraries you use are linked into the program executable. With dynamic linkage, chances are the names of libraries and/or functions, or the data structures they use, have changed over time. On FreeBSD it's possible to run Linux binaries (I did that myself many years back). There's also a version of Debian that is based on FreeBSD (Debian/kFreeBSD).
    One thing to watch out for when trying to write upwards-compatible software is the generated machine code. CPU instruction sets on Intel/AMD CPUs are expanded and modified all the time, and once you compile for a particular CPU, you're tied to that CPU and compatible successors. By default, compilers usually choose a 'generic' build that should be upwards or sideways compatible (but doesn't make use of more recent features of the CPU). The converse is a 'native' build optimized for the machine the compiler is running on: you get maximum performance at the cost of portability of the binary.
    C programs in source-code form can be pretty portable too, if you stick to calling only upwards-compatible functions. GCC can still compile old K&R programs, and it can also compile programs written for any of the ANSI standards that have been released (see the info pages on the -std compiler option). If you want to, you can even write upwards-compatible assembly code that runs on older CPUs as well as modern ones (if you're careful with system and library calls). Intel/AMD CPUs have their own way of preserving upwards compatibility in the instruction set.

  • @AnonEMuss-gw8fm
    @AnonEMuss-gw8fm 4 years ago +6

    As I commented on your last video, I do believe there are workloads where a mainframe is the best -- or only -- solution. I don't believe a state unemployment system is one of these. It used to be, but isn't now. A "scale out" system running on AWS, Azure, Google Cloud, etc. would be sufficiently reliable, and able to handle demand surges.

    • @bartas9693
      @bartas9693 4 years ago +1

      That's a point of disagreement between us. It's the only sufficiently low level system with that kind of track record for reliability. There's nothing wrong with plugging in AWS and other layers on top, but you can't replace the low level and keep the reliability.

    • @VictorMartinez-zf6dt
      @VictorMartinez-zf6dt 4 years ago +2

      Bartas Yes, you can. Hundreds of businesses have no mainframe whatsoever, especially new ones or those that have transitioned, and they have as much reliability as the mainframes, or better. It's about the architecture that's built.

  • @b43xoit
    @b43xoit 1 year ago +1

    Some of us had reflexive animosity toward everything IBM since freshman year, but we owe credit where credit is due.

  • @erickcoelho408
    @erickcoelho408 1 year ago +1

    Great video man, thanks for sharing it with us!

  • @Centar1964
    @Centar1964 4 years ago +3

    I have a COBOL compiler for a Commodore 64 from 1984 that still runs under an emulator (VICE on a C64 Mini).
    Going back further I have a COBOL compiler for an Altair 8800 that still runs under an emulator (Altair-Duino) and that's from the mid 70's.
    Another of COBOL's strengths is its cross-platform ability...

  • @martinfj
    @martinfj 4 years ago +1

    Interesting video! I have heard that there are some updates that need to be done at regular intervals on larger systems, and that IBM gives companies long time spans to upgrade their code. I have created a programming language and system that I believe will have even more longevity and durability than COBOL and the mainframe (which have very high longevity and durability!). The system I have created is called progsbase. This has been achieved by having few, useful, and widely known building blocks in the programming language.

  • @carlosloaizasanchez8998
    @carlosloaizasanchez8998 4 years ago +2

    I programmed COBOL on an IBM AS/400 at BUAP in 1992, quite a while ago, and I programmed in it again in 1995 on a Banner ERP: COBOL with embedded SQL, Oracle, Unix. A good language that people dismissed as obsolete, but very good and high-performance.

  • @billchatfield3064
    @billchatfield3064 4 years ago +3

    I love your videos. This is not a criticism, just a comparative observation that might be helpful. :-) I think the batch workflow makes the mainframe seem silly and ancient. Because every time you want to run a program you have to change the JCL, submit a batch job that recompiles it every time, and then you have to go find the output in a list of jobs, select it and then scroll past all the steps you don't care about. On Linux/Windows you just type the name of a program to run it and see the output. This makes the mainframe seem very inefficient because it requires a bunch of unnecessary steps just to run a program. We know those steps are unnecessary because the computers we use every day don't require them. So, if you want to make the mainframe look good, you should show how it supports a workflow that is even up to par with Linux/Windows. For example, how can someone run a custom program efficiently, meaning without using batch mode and without having to recompile it every time? Also, how can you use the command line interface like you do in Linux or Windows (cmd)? Menu-driven interfaces are for pointy-haired managers and secretaries. ;-) There should be a way to do the same thing from the command line where you can automate repetitive steps with a script.

    • @moshixmainframechannel
      @moshixmainframechannel  4 years ago +4

      You can do interactive compiling on the mainframe also. I just like to do it this way.

    • @arnabbera2562
      @arnabbera2562 4 years ago

      You probably have never heard of Online Transaction Processing (OLTP). CICS is used everywhere including ATMs for interactive processing

    • @brucehewson5773
      @brucehewson5773 4 years ago +1

      Imagine an enterprise that has data centers in multiple separate regions of the world. Then imagine that each data center has an IBM mainframe complex of multiple z15 processor boxes, connected to many thousands of disk devices, running multiple z/OS partitions, all running both batch and OLTP tasks concurrently. The OLTP tasks could be the "ancient" CICS or IMS products, or the modern web servers, or just the web APIs accessing OLTP or batch tasks or databases or disk devices. The disk data is transparently transferred to other data centers in different countries for disaster recovery. Tape storage can also be used, with tape library grids mirroring tape-based data across those data centers at the same time. This is the world of enterprise computing.
      When your industry is based on discrete time boundaries for its processing, like daily cutoffs for bank branches, the concept of batch processing still works exceptionally well. Using scheduling software that monitors the success of each batch process before releasing the next, any problem found can be quickly traced to the specific task that had an issue, recovery and corrective tasks can be completed, and the whole shebang can be continued, usually without exceeding the required Service Level Agreement completion time. These batch job streams can contain many individual jobs, and there can be many parallel job streams, each designed to handle the processing for an individual business line, an individual application, or maybe just some housekeeping. This is not about one individual programmer coding up a program and running one job to compile/link/run it, but about an organization consistently, day after day, week after week, repeating the same daily, weekly, monthly, quarterly, and yearly job streams as required by the enterprise's business entities, and also by the various regulators that monitor the business.
      In regards to a command-line interface (in z/OS): for developers, you could go back to the 1970s and just use the TSO READY prompt. But the menu-driven ISPF interface does provide a command line on every panel, so you can issue a TSO command anywhere. For some things, being able to SUBMIT a batch job to go do work without disturbing my current online session makes marvelous sense. And you could also just use SSH to access the UNIX "side" of z/OS and do all your command-line wizardry there.
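
      The compile-and-run batch cycle discussed in this thread is normally packaged as a single JCL job built on an IBM-supplied compile-link-go cataloged procedure (IGYWCLG for Enterprise COBOL). A minimal sketch; the dataset name MY.COBOL.SRC and member PRIMES are hypothetical:

```jcl
//COMPRUN  JOB (ACCT),'COMPILE AND GO',CLASS=A,MSGCLASS=X
//* Compile, link-edit, and execute a COBOL program in one job
//CLG      EXEC IGYWCLG
//COBOL.SYSIN  DD DSN=MY.COBOL.SRC(PRIMES),DISP=SHR
//GO.SYSOUT    DD SYSOUT=*
```

      Once saved as a member, SUBMIT runs the job and the output lands in the held output queue, which is exactly the find-the-job-output step described above.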

  • @donobrien1841
    @donobrien1841 4 years ago +2

    The COBOL language is just one aspect of a mainframe application. I was once on a project that converted a batch COBOL application to Java to run on Linux. Converting COBOL to a procedural-looking Java program was mechanical; however, along the way there were performance issues due to the way Java uses memory, and that was somewhat fixable. But the biggest and still problematic area was storage: the mainframe has DFSMSdfp and datasets, which are very structured storage systems. There is nothing like that on Linux. I could not find any sort of structured file facility on Linux; sure, there are emulated sorts of things using directories for GDGs (Generation Data Groups), but the real killer is all the utilities for maintaining, viewing, and editing datasets. The utilities on the mainframe let you look at and edit datasets with binary data. How many editors on Linux will let you efficiently view and edit binary data in a 12 GB file? This was not a complicated system, just batch COBOL that processed datasets, no fancy UI or web interface. I do not know if they completely moved the application to Linux, but in my opinion it was not worth the investment. Maybe there are storage management systems for Linux that rival the mainframe. If so, please reply with URL links to these options.

  • @NotMarkKnopfler
    @NotMarkKnopfler 4 years ago +3

    Hi Moshix, I have been watching your videos on COBOL and Mainframes and enjoying them very much. I'm interested in learning COBOL. I'm about to install gnuCOBOL but I was just wondering if it's possible to run the mainframe environment on a modern PC via emulation, and if so, what software choices there are? Assuming it is possible, it might make a good video. Many thanks.

    • @bbuggediffy
      @bbuggediffy 4 years ago +2

      Yes. See the Hercules project.

    • @grappydingus
      @grappydingus 4 years ago +2

      Installing a mainframe emulator on Linux: ruclips.net/video/N6sK_BhVD8g/видео.html On Windows: ruclips.net/video/QTxe8ASdxE0/видео.html

  • @b43xoit
    @b43xoit 1 year ago +1

    If I remember correctly, PL/I doesn't have any reserved words. I believe its punctuation separates the language words from the programmer words, so there is no ambiguity even without reserving names that could otherwise be used as identifiers.
    JavaScript is in a middle ground in that regard. For example, `class` is a reserved word in the context of variable names, but is allowed as the key of a member of an object:
    const class; /* illegal */
    const t = {class: "consciousness"}; /* legal */

  • @n2185x
    @n2185x 4 years ago +1

    Since the output on the modern mainframe looked like it was lined up correctly and didn't show the formatting problems that you saw under TK4, I have to wonder if you're running into the problem that RFE sometimes seems to have with output.
    There are a lot of things that people these days seem to believe were invented later than they were. The virtualization you mentioned is one of the big ones. Many think it didn't exist until products like VMware came onto the scene, but IBM was doing virtual machines back in the early 70s. Virtual memory, memory protection, etc., were all in existence in the mainframe world long before they arrived on the microcomputing scene. It's not that much of a surprise, really, since microcomputing hardware has essentially followed in the footsteps of the mainframe.
    These days, though, the thing that sets mainframes apart is their fault-tolerant construction (yielding bulletproof reliability) and the sheer amount of I/O they're able to handle. People in the server world try to fake the reliability angle by limiting the architecture of the software stacks that are in use (in particular, by adopting specific architectures that allow requests to be served up from multiple independent running instances) and using a "cloud computing" paradigm, but mainframes do it for real with seamless failover and hot-swappable everything (even CPUs are hot-swappable if I'm not mistaken).
    You do pay a pretty penny for it all, though...

  • @richardbennett4365
    @richardbennett4365 8 months ago +1

    I am not surprised the source code from 50 years ago can run on the z machine.
    What about copying over the object files or an executable compiled on the 360 machine and demonstrating that they can be read and/or run on the z machine?
    It would be more impressive.

  • @jms019
    @jms019 4 years ago +1

    This is the only thing IBM really has: the ability to maintain compatibility over decades. Everything they've done recently, apart from some of their hardware research, is almost worthless. But their specialty is not really about the language, since any modern compiler can compile something in an old language that only reads and writes files. It's the environment and how it connects to the outside, whether through databases, messaging, or something else.

  • @obsidian9998
    @obsidian9998 4 years ago +3

    How does COBOL deal with the difference in clock speeds of newer hardware? Or does the emulator accommodate such conversion issues?

  • @QuasarRedshift
    @QuasarRedshift 4 years ago +1

    I would really like to know if you ever had any experience with Unisys mainframes - and can you do a few videos on their machines and operating systems . . .

    • @moshixmainframechannel
      @moshixmainframechannel  4 years ago +2

      I have played with Unisys mainframes, but it's like they speak Urdu and I answer in Danish.

    • @QuasarRedshift
      @QuasarRedshift 4 years ago +1

      @@moshixmainframechannel LOL

  • @jamesfehlinger9731
    @jamesfehlinger9731 4 years ago +3

    Ha! The COBOL prime-number generator program that you bring up on MVS at 5:45 (with Juergen Winkelmann's name on it) is the exact same program that I grabbed from your Web site and modified back in 2017 to run as a demo program on Rob Storey's IBM 7094 emulator, which you can see at 15:36 in: ruclips.net/video/4xaBS6pWrG0/видео.html
    The 7094 version still has Juergen's name on it, too. I didn't presume to add my own name, but it **did** need a bit of modification to run under IBSYS on the 7094.
    So the backward compatibility may well extend all the way back to the beginning of the IBM 360 line, but it doesn't **quite** go all the way back to the beginning of the language.
    ;->

  • @lcarliner
    @lcarliner 3 years ago

    I thought that the virtual memory system that is the basis for virtualization originated with the Burroughs B5000 and B5500 family and up. MIT's Multics, based on the GE 600 series hardware, has many derivatives, such as UNIX, and influenced superminicomputers such as Digital's DEC-10 and VAX systems. It is true that hardware virtualization was pioneered by IBM, first realized in the IBM System/360 Model 67.

    • @moshixmainframechannel
      @moshixmainframechannel  3 years ago

      More or less. MULTICS and UNIX are only related by lineage. The Atlas system was the first hardware-supported virtual memory system, not the IBM machine.

  • @MarquisDeSang
    @MarquisDeSang 4 years ago +1

    I want to know more about FLOW-MATIC

  • @bernardo5758
    @bernardo5758 1 year ago

    I know it's a bit unrelated, but how did you set up z/OS and your terminal emulator to show so many lines such as in the timestamp 11:47 ?

  • @obsidian9998
    @obsidian9998 4 years ago

    Are there compatibility issues with any modern OSes like Linux, Apple, FreeBSD, etc.?

    • @obsidian9998
      @obsidian9998 4 years ago +1

      I see that it is running fine with Windows from the footage.

  • @jasonsdodd
    @jasonsdodd 4 years ago

    I agree with the spirit of this video but it's a bit misleading. Practically speaking if you're going to compile a cobol program from the 70s or 80s, there's a good chance it will need some minor code changes to compile with version 4 or newer COBOL compilers. Some programs will almost certainly need some changes if you're talking about compiling a whole system with today's compiler.

    • @n2185x
      @n2185x 4 years ago

      Are the new, modern compilers not 100% backwards compatible with the older compilers? At the very least, I'd think there would be some parameters that can tell the new compiler to compile based on the older standard.

    • @jasonsdodd
      @jasonsdodd 4 years ago

      @@n2185x No. Things deprecate. And there are reasons you would want to compile with the new version.

    • @n2185x
      @n2185x 4 years ago

      @@jasonsdodd - I realize that usually some things will be deprecated over time, but mainframes seem to be something of an exception to that at least to some degree. Could you name some specific capabilities or features of the old COBOL compilers that are no longer available in the current ones on IBM mainframes?

    • @moshixmainframechannel
      @moshixmainframechannel  4 years ago

      Ok

    • @aniketmore1900
      @aniketmore1900 4 years ago

      Hi @Jason Dodd, greetings. Is this you from Humana ?

  • @Johan-ez5wo
    @Johan-ez5wo 4 years ago

    For those on TK4, to play with Cobol and KICKS, please see my blog:
    idiotmainframe.blogspot.com/2019/07/welcome-to-emulated-world-of-ibm.html

  • @warrenvanwyck2765
    @warrenvanwyck2765 4 years ago +3

    Will IBM ever learn from Red Hat (their acquisition)? Allow developers and hobbyists to run a license-free and open IBM OS, compiler, and other software on Hercules. IBM lost the mind-share competition so long ago with their closed everything policies -- including OCO over thirty years ago. Seeing that you're running VM/370 and MVS 3.8 is nice for a computer museum -- not the real world.

    • @moshixmainframechannel
      @moshixmainframechannel  4 years ago +1

      Okay. Sorry

    • @warrenvanwyck2765
      @warrenvanwyck2765 4 years ago +3

      @@moshixmainframechannel Hi moshix, nothing for you to be sorry about ... IBM is sorry. You and many others have been working on this for years: geronimo370.nl/blog/2019/06/18/a-sad-day-for-free-xa/ Hope springs eternal ... maybe the new CEO will see the light.

    • @djohnsto2
      @djohnsto2 4 years ago +3

      It's a great example of how corporate philosophies can be hard to change. Its human originators are long-gone, but their zeitgeists live on.

  • @AnonEMuss-gw8fm
    @AnonEMuss-gw8fm 4 years ago +2

    The ability to run 50 year old binaries on new hardware is a result of the CPU architecture and operating system. It's a property of the IBM mainframe ecosystem generally, not of COBOL specifically.
    A separate question is how important running 50 year old programs really is. IMHO, this is more of a theoretical benefit than a practical one. I'm sure there are some 50 year old binaries still running today, but I suspect it's a small number. That's not to say that the work they're doing isn't mission critical -- I'm just saying there aren't many of these situations.
    P.S. I still run a text editor on Windows 10 that was built in 2001 (19 years old). Backwards compatibility exists on X86 too.

    • @moshixmainframechannel
      @moshixmainframechannel  4 years ago

      Ok

    • @AnonEMuss-gw8fm
      @AnonEMuss-gw8fm 4 years ago

      ​@@statinskill I'm pretty sure the AS/400 isn't 50 years old. :-) But yeah, being able to run old code on new hardware is important. That ability exists in the X86 world too, just to a lesser degree. IBM takes "back compat" over long time frames much more seriously.
      I still question how many situations really exist today where it makes sense to run software that hasn't changed in 50 years. Just because you can doesn't mean you will. I know of few applications that haven't had some change in requirements over 10 years, let alone 50. Once you make a change and recompile, the clock resets on how old that application's binaries are.

    • @noname4422
      @noname4422 4 years ago +1

      @@AnonEMuss-gw8fm You can go even further than 19 years on x86. OS-wise, you can run 32-bit Windows 95 programs from 25 years ago on the latest version of Windows 10, even Windows 3.x programs from 30 years ago on 32-bit versions of Windows 10, which are still officially supported. Once you get virtualization involved, you can get a modern x86 machine running MS-DOS in a VM (just virtualized, not emulated) and run software from 1981 meant for the original IBM PC, as long as it doesn't get too picky about CPU timings. And although I can't think of any notable x86 software that predates the IBM PC, theoretically you could use virtualization to run anything from 1978 onwards, since that's when the 8086 was launched.

  • @misium
    @misium 4 years ago

    It's all fine in itself that it can run "century"-old (soon) binaries and code. It is also the reason EVERYONE hates to work with it.

    • @robertlaw8510
      @robertlaw8510 3 years ago

      Academia has always had a problem with IBM and has perpetuated the idea that EVERYONE hates to work with it. Businesses have used IBM mainframes and minicomputers for years to run their operations. If they didn't like it and there was something better, they would have moved to it by now.