Armon is the real one, already super rich post-IPO but still teaches on the whiteboard like he used to ❤
15:59 Has anybody noticed that the meaning of the word "platform" has changed in recent years? The term "platform" as defined in the book "Team Topologies" is different from what it meant a couple of years ago, and the speaker tends to conflate the two meanings. Specifically, "platform" in "platform teams" no longer means platform as in "PaaS". Rather, anything that can be provided as a service over the network counts as a platform nowadays. So not only PaaS but also IaaS (Kubernetes etc.), SaaS, DBaaS and APIaaS (web APIs etc.) are all possible outputs of platform teams.
You can find the center of gravity for any trend or new direction in tech by watching which terms get overloaded and abused the most. At some point utility will demand that the language be refined and the ambiguity will dissolve. Then you'll have reached the commoditizing threshold and the available energy will get redirected.
@@scott555 Hard to make your comment any more cryptic (notice I didn't say nonsensical) than what you've managed to do, congrats.
@@dogaarmangil Doing my best out here. Pleased to hear it doesn't go unrecognized.
Finally, a coherent explanation of the central role of platform engineering. It describes not only the tooling and components but also provides a very clear functional description of how things are glued together. Valuable and clear insights. Kudos!
Thanks for the session, it’s interesting and very applicable. The only concern is that the title is misleading: it reads as a video about platform engineering as a generic pattern, but it is in fact focused entirely on the cloud.
I really like the way that he shares knowledge, keeping it clear and simple.
One thing I didn't quite follow, at around 13:50, is when Armon talks about an "escape hatch" for where the platform is too limited and we can drop down into IaC. What might that mean in practice? Does it mean that application teams write their own little bits of Terraform code to match their specific/unusual needs, rather than using the usual platform-provided modules?
You have it about right, @jonburgess8979. The PaaS-style abstraction hides much of the detail of the underlying infrastructure from the consumer. However, use cases arise that require some fine-tuning. Giving users the ability to pull modules from Terraform's private module registry, for example, modify them, and then plan and apply the deployment provides that additional control when needed. Key to making this approach successful in regulated environments is policy as code that vets the customizations so they do not introduce risk, compliance, or cost problems.
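To make the escape hatch concrete, here's a hypothetical Terraform sketch (the registry path, module name, and extra resource are all invented for illustration, not from the video): the application team consumes the platform's vetted module as usual, but places one raw resource next to it for a need the module doesn't cover yet.

```hcl
# Standard path: the platform team's vetted module from the private registry
module "service" {
  source  = "app.terraform.io/acme-platform/web-service/aws" # hypothetical path
  version = "~> 2.0"

  name          = "payments-api"
  instance_type = "t3.medium"
}

# Escape hatch: raw Terraform alongside the module, e.g. for a queue the
# module doesn't model yet; policy-as-code would vet this at plan time
resource "aws_sqs_queue" "retry" {
  name                      = "payments-api-retry"
  message_retention_seconds = 86400
}
```

The point of the pattern is that the raw resource still flows through the same plan/apply pipeline and the same policy checks as the module-based resources, so the escape hatch doesn't become an unguarded side door.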
The icing on the cake is a platform portal as a self-service portal where everything is connected.
The leverage is in creating baselines and templates for applications and services to reduce technology variety and standardise how things are done. So a developer team, for example, requests a new Go backend service, and a fresh app based on the hello-world template is deployed. The whole setup happens automatically; the devs only need to get into the newly created git repo and start coding.
Backstage is a good example, but I expect more to come, as it has its own flaws.
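As a minimal sketch of what the portal automates when a team requests a new Go backend service (the service name, template contents, and paths are all illustrative; a real portal such as Backstage would also create the remote git repo, CI pipeline, and catalog entry):

```python
import pathlib

# Hello-world Go template the portal renders for a new service request.
# Doubled braces survive .format() as literal Go braces.
TEMPLATE = """package main

import "fmt"

func main() {{
    fmt.Println("hello from {name}")
}}
"""

def scaffold(name: str, root: pathlib.Path = pathlib.Path(".")) -> pathlib.Path:
    """Create the service directory and render the template into it."""
    repo = root / name
    repo.mkdir(parents=True, exist_ok=True)
    (repo / "main.go").write_text(TEMPLATE.format(name=name))
    # A real portal would also push this to a new git repo and
    # register the service in its catalog here.
    return repo

repo = scaffold("payments-api")
print(f"scaffolded {repo.name} from go-backend template")
# prints: scaffolded payments-api from go-backend template
```

The value isn't in these few lines but in the portal running them the same way for every team, so every new service starts from the same baseline.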
What a wonderful video: very well put together, with the information very well explained. Thank you for the shared knowledge. For anyone studying this topic, it was a real lesson.
Thanks for the information!
The funny part is that the engineers at the bottom always know that phase 1 is going to happen and is going to be a mess, but the higher-ups mostly don't get that information handed to them.
What is the Platform & Integration Engineer role that I’m seeing nowadays? Is it a combo role?
Platform Engineers are usually part of a platform team, which is in charge of creating an internal developer platform for their organization. They maintain DevOps tooling and infrastructure to make sure everything runs smoothly. Integration Engineers usually have a very similar role, and the titles can be used synonymously; however, some roles focus more on the API and application side of integrations for the organization. For both roles, knowing how to integrate third-party tools and multi-product/cloud environments is crucial.
It seems that on the road to industrializing cloud adoption, building a complete ecosystem is the core value of the platform team: as you mentioned, they have to provide IaC, PaaS, observability, and CI pipelines. Also adding to the list: applying GitOps practices, and ensuring both user access and the platform (runtime) itself comply with security best practices.
Great explanation, very well done. Thanks!
I'm an old-school IT guy, rebooting myself in the cloud.
But man, it amazes me how we have come full circle. We are just using new tools, bringing software-defined infrastructure to deprecate the old PS-scripting and console-UI way of life.
A central "platform" team delivering services to app devs is nothing new. I guess they realized having full-stack control was too burdensome after all.
One piece I would be interested to know more about is how the IaC (Terraform) layer could be standardized as a reusable block across different applications. For example, one app could run on a single EC2 instance in AWS while another runs on an Auto Scaling Group or ECS/Fargate with custom configs specific to the app. Care to elaborate?
The platform team (optimally) looks to apply a standard approach to deployment across clouds, and can certainly do so with HashiCorp’s ‘golden workflow’ approach. Packer provides a Core Image template that can be modified through different channels to land in the appropriate environments, while Terraform invokes different modules and providers to deliver to different environments. The hope of this approach is that TFC and Packer greatly simplify the application team’s approach to deploying the same/similar artifacts to various clouds or platforms. For more on the ‘golden workflow’ idea, see www.hashicorp.com/resources/build-a-golden-image-factory-with-hcp-packer-terraform
As an example, some highly regulated firms will prove their automation and deployment on private infrastructure, and show auditors how that same automation will be in place in a public cloud environment, even though they are different services or platforms. This ‘de-risks’ the move to an experimental environment as the controls are all the same. The key to this approach is consistency in workflow, compliance, cost controls, audit, and more regardless of the landing spot of the application.
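The two halves of that golden workflow can be sketched as HCL fragments (names, filters, and regions here are invented for illustration): the platform team defines the golden image once in Packer, and every application's Terraform consumes the latest build the same way, regardless of which module actually provisions the compute.

```hcl
# image.pkr.hcl — platform team: one golden image, rebuilt on a schedule
source "amazon-ebs" "golden" {
  region        = "us-east-1"
  instance_type = "t3.micro"
  ami_name      = "acme-golden-${formatdate("YYYYMMDDhhmm", timestamp())}"
  source_ami_filter {
    filters     = { name = "ubuntu/images/*22.04-amd64-server-*" }
    owners      = ["099720109477"] # Canonical
    most_recent = true
  }
  ssh_username = "ubuntu"
}

build {
  sources = ["source.amazon-ebs.golden"]
  provisioner "shell" {
    inline = ["sudo apt-get update -y"] # hardening, agents, etc. would go here
  }
}

# main.tf — application teams: always consume the latest golden image
data "aws_ami" "golden" {
  most_recent = true
  owners      = ["self"]
  filter {
    name   = "name"
    values = ["acme-golden-*"]
  }
}

resource "aws_instance" "app" {
  ami           = data.aws_ami.golden.id
  instance_type = "t3.medium"
}
```

The single-EC2 app and the ASG/ECS app can then differ in their compute modules while still inheriting the same image, pipeline, and policy checks, which is what gives auditors the consistency described above.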
Great video and explanation. Where does data live, and where does the responsibility for that data live, with regard to platform teams? I speak to these teams in their different guises, and it depends on how and where that responsibility lands. But my opinion is that a database is part of the app: as soon as the app is in production and serving data, there is a requirement to protect that data against all the failure scenarios. Sometimes this also lands on the security team. But regardless of the platform used, data protection and management need to land somewhere. I would love to see a reply on that from you.
In my opinion you still need a Service/Application/Product Owner who owns the app and its data and is accountable for all aspects of it, including data protection and long-term management. You cannot dump accountability on a central team.
@@lobofrags I think it depends on how data protection is handled. It’s much easier to define policies centrally, but on the flip side an app owner knows their app better than the platform team does. In my role I see both. I think you could almost treat data protection like the observability box in the video. But where the responsibility sits is going to come down to many factors.
Generally, what platform teams work on are multiple applications that we are collectively referring to as a ‘platform.’ Terraform, Vault, Consul, and Boundary, as an example, may represent pieces and parts of a platform that serves developers through some form of abstraction (Waypoint, perhaps). The point is that these various parts may have data concerns from telemetry, logs, and audit perspectives. The platform team is certainly responsible for protecting this data.
Now from an application data protection standpoint, the application teams generally own their data (if following 12-factor application strategies), but the platform team may be looking to push standards for effective data protection. As such, they may drive standards around data encryption and protection. For example, social security numbers or financial data should pass through these specific Vault APIs for transformation/encryption. In this model, the application team is responsible for consuming the standard, but the platform team is responsible for providing the standard service at scale (the encryption services in this example). To learn a bit about providing this particular service: www.hashicorp.com/products/vault/advanced-data-protection
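That consumption model can be sketched in a few lines of Python. The Vault call is replaced with a local stub so the example is self-contained (the function names and token format are invented for illustration; in production `platform_encrypt` would be an authenticated HTTPS call to the transit/transform endpoint the platform team operates, and the stub's base64 step would be real encryption):

```python
import base64

# Stand-in for the platform team's central encryption service.
# A real implementation would call Vault's transit API; this stub
# only models the contract: plaintext in, opaque token out.
def platform_encrypt(key_name: str, plaintext: bytes) -> str:
    encoded = base64.b64encode(plaintext).decode()
    return f"vault:v1:{key_name}:{encoded}"

def platform_decrypt(token: str) -> bytes:
    prefix, version, key_name, encoded = token.split(":", 3)
    assert prefix == "vault"
    return base64.b64decode(encoded)

# Application team: store only the token, never the raw value.
ssn = b"123-45-6789"
record = {"name": "A. Customer", "ssn": platform_encrypt("pii", ssn)}
print(record["ssn"])                           # vault:v1:pii:MTIzLTQ1LTY3ODk=
print(platform_decrypt(record["ssn"]) == ssn)  # True
```

The division of labor is the point: the app team only ever handles the opaque token, while key management, rotation, and audit live behind the platform team's API.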
Very well done and thought-out video!
Very beneficial, as always.
Very, very similar to what happened in a company I worked for - yes, including the first 6 months/phase #1 of "playing with the new toys".
Loved it
You guys spying on us? You're more or less describing how our operation has organized itself over the last 5+ years. But...we have yet to crack the 'Architecture' problem. Agile has poisoned the design phase beyond recognition. Not to argue for waterfall, or any of the other sclerotic enterprisey process monstrosities that strangle innovation, but where are these supposed standards and "templatized" platform product designs supposed to come from? (rhetorical question, natch)
All the shiny innovations of the last decade just assume that piece is getting handled by really really smart folks. News flash: it's not getting handled. It's GIGO, only now with CI/CD, IaC, and even *more* tech debt.
You can't blame agile for messing up your architecture. You can, however, blame a guy named Conway 😊
Conway's law states that an organisation builds systems that resemble its communication structure. If your organisation doesn't have established, defined, well-followed processes by which business is conducted on a day-to-day basis, any IT system built to support its ad-hoc, irregular processes will be a mess, agile or no agile.
Germany