I have a problem: I want to use a serverless app to deploy a model on SageMaker and call endpoint inference from my Lambda, but it still doesn't work. This should happen automatically whenever that Lambda is triggered. How can I do that?
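For context, here is a minimal sketch of a Lambda handler calling a SageMaker serverless inference endpoint with boto3 whenever the Lambda is triggered. The endpoint name and the JSON payload shape are assumptions, not details from the thread, so adjust them to your setup:

```python
import json
import boto3

# Hypothetical endpoint name; replace with the name of your deployed
# SageMaker serverless endpoint.
ENDPOINT_NAME = "my-serverless-endpoint"

runtime = boto3.client("sagemaker-runtime")

def lambda_handler(event, context):
    # Assumes the triggering event carries the model input as JSON
    # under "body"; adjust to match your actual trigger payload.
    payload = event.get("body", "{}")

    response = runtime.invoke_endpoint(
        EndpointName=ENDPOINT_NAME,
        ContentType="application/json",
        Body=payload,
    )

    # The endpoint returns a streaming body; decode it before returning.
    result = json.loads(response["Body"].read().decode("utf-8"))
    return {"statusCode": 200, "body": json.dumps(result)}
```

Note that the Lambda's execution role also needs sagemaker:InvokeEndpoint permission on the endpoint for this call to succeed.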
“I want to ask why Mojo is two times slower than Python in a for loop. Python completes my for loop in 10 seconds, but Mojo takes 18 seconds. Where is your claimed 63,000 times speed advantage?”
It's likely a bug, and we'd love to fix it! :) Please create an issue here: github.com/modularml/mojo/issues. You can read our prior posts here: www.modular.com/blog/mojo-a-journey-to-68-000x-speedup-over-python-part-3
Yeah, you're not wrong. It's a bug in dictionaries that they just fixed; with the fix, it's about the same speed as Python, or slightly slower. Yes, it's odd, and I think it's good you're bringing it to Mojo's attention, but be aware that Mojo is still developing, so benchmarks aren't everything.
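For reference, a minimal Python sketch of the kind of dictionary-heavy for loop being compared (the loop body and size are assumptions); the Mojo version would need to do the same work for the comparison to be meaningful:

```python
import time

def dict_loop(n: int) -> int:
    # Build and read back a dictionary in a plain for loop,
    # the pattern reported here as slower in Mojo than in Python.
    d = {}
    for i in range(n):
        d[i] = i * 2
    total = 0
    for i in range(n):
        total += d[i]
    return total

start = time.perf_counter()
result = dict_loop(1_000_000)
elapsed = time.perf_counter() - start
print(f"result={result}, elapsed={elapsed:.3f}s")
```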
Max is interesting, but I would like to use it without paying the 40% SageMaker tax.