There's a lot of hype circulating around AI, and OpenAI in particular, but I see this as a really useful addition to Portainer. I appreciate how cautious you're being on the data protection side, as well as the whole experimental approach to adopting this. Nice work!
Thanks for the feedback... expect to see some incremental changes in Portainer 2.19 too. This has the potential to be an awesome feature; we just need to be careful until the GPT-4 API is GA.
It's OK, I guess, but I have to pay for OpenAI now. If I'm paying, I can construct the prompt myself without Portainer. All it gives is a nice shortcut, so I'm not sure what we are saving here.
Great stuff. For those new to all of this, having instructions to guide you through the process without having to piece bits together from extensive documentation is a godsend. Again, switching models would be useful, as would the ability to hook in your own.
I added my key, but every response says I have hit my limits, even though the OpenAI dashboard shows I haven't. I definitely have credits and tokens left.
OK, now it seems to be thinking about my question but not responding. Will try again later. Maybe OpenAI is just having issues or something.
If you have GPT-4 API access, will this use the GPT-4 model instead?
No, as it's hard-coded, but the next version of Portainer should…
This is not in CE, so don't say so in the release message in CE...
This is what annoys me too. They only say this in the video. I believe this is the strategy to get more people on BE. More and more features being locked behind a paywall.
I think they display it there because you can get BE for free.
@@bjarnematz Yes, it was 5 nodes and now 3; maybe next year 1 node lol
What's the difference between asking ChatGPT directly and using this feature in Portainer, besides it knowing what the environment is?
Portainer sends information about the environment (type, version, scale) and forms a well-worded prompt to ensure the response provided is as accurate and instantly usable as possible. If you know how to write great prompts, then of course there's no difference from writing one yourself. There is also a "deploy with Portainer" button that auto-deploys the suggested code, but that's a convenience thing.
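For anyone curious what "pre-populating the prompt" could look like in practice, here's a minimal sketch. The function name, field names, and template wording are illustrative assumptions, not Portainer's actual code; the point is just that environment context is prepended so the model targets your setup instead of a generic one.

```python
# Hypothetical sketch of prompt pre-population with environment details.
# None of these names come from Portainer's source; they only illustrate
# the idea described above.

def build_prompt(env_type: str, version: str, node_count: int, request: str) -> str:
    """Prepend environment context so the model's answer targets the
    user's actual setup instead of a generic one."""
    context = (
        f"You are helping deploy to a {env_type} environment, "
        f"version {version}, with {node_count} node(s). "
        "Respond only with a ready-to-deploy configuration."
    )
    return f"{context}\n\nUser request: {request}"

prompt = build_prompt("Docker Swarm", "24.0", 3, "Deploy a WordPress stack")
print(prompt)
```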
wait i thought this was a rickroll
"An error occurred: error, status code: 429, message: You exceeded your current quota, please check your plan and billing details."
With 2 fresh API keys in a row, on a billed OpenAI account... doesn't work so well.
See the other comment on this video with this exact issue. It's due to your OpenAI trial expiring or you exceeding your PAYG limit.
@@neilcresswell6539 But I have a paid account; shouldn't I get more requests?
This is awesome; however, I would recommend that you add something so you can use a different AI, like Bard or a self-hosted one.
Yup, that's on our radar once we see general market feedback for this feature.
Absolutely useful; forget the naysayers below. It just takes the pain out of spinning up a quick stack. Many thanks, Portainer team :D
Completely useless. First, it takes a minute to get a response when an FAQ is instant. Deploying the stack would be possible from an FAQ too. The ONLY thing that AI added here is that you no longer have any way of knowing that the response you got is actually accurate. What you, and many like you, don't seem to realise is that AI is not intelligent; it's literally guessing based on what it has been shown. It *CANNOT* come up with a config for a version of WordPress that it has not been taught about.
AI is specifically NOT what you want in cases like this, and unfortunately it's going to take somebody's giant crash and significant financial losses for people like you to understand that.
"An error occurred: error, status code: 429, message: You exceeded your current quota, please check your plan and billing details." Too bad it doesn't work for me.
This occurs when you don't have an active subscription to the OpenAI API: either a trial that expired, or you have not subscribed to their PAYG plan. Unfortunately, it's not free from OpenAI.
@@neilcresswell6539 Thank you.
Wow! And it even works without having a Business License: all you have to do is bypass the Business License and go straight to OpenAI, and you get exactly the same answer as if you had a Business License.
It's a feature of convenience, so of course you can go direct to the source if you prefer.
HOLY WOW, THIS IS GREAT!!! NICE MOVE, GUYS!
Thanks... watch this space for ongoing enhancements.
Just improve documentation.
I would like to jump to Business, but it's not possible on my Synology NAS…
Why is that?
@@dazza2152 Because of "image portainer/portainer-ee:2.18.4 not found on registry".
@@matthiashoffmann6555 Thank you for confirming. Have you had issues storing files when creating a stack?
So all you get from this is a template with a 50% chance of being right?
Come on. Lame!
I agree however I really appreciated his honesty. And also 3:23 made me chuckle, twice!
Portainer jumping on the AI bandwagon, implementing a feature no one needs.
So you are encouraging *business* clients to start populating their servers with code that is created by an AI that is literally just guessing what to do?
How is waiting a minute to get a *STANDARD* answer any better than just having an editor prepare a template and making it available at the click of a button? How is waiting a minute for an *unreliable* answer better? And don't tell me the AI got the answer from a text that an editor prepared, because that would be insane.
The idea behind the experiment was to gauge interest in a ChatGPT assistant that helps developers spin up relatively "standard" container stacks, versus spending considerable time learning up front. No, we would never recommend this be used in production; in fact, it should never be used outside of a development environment. And yes, you could go to ChatGPT directly and write the prompt correctly yourself, but we pre-populate the prompt with your environment config (Docker/Kube, # of nodes, version) so the response has a higher likelihood of working. Got to be honest, though: we don't see ourselves continuing with the feature, not until AI has some sort of recommendations validator in it (to stop it suggesting things that simply don't work).