Mojo / Modular, has anyone used it in a real project?

Before I go on, I should say that I'm a big fan of Chris Lattner's work.

However, even though Mojo aims to be a superset of Python, I don't plan to learn it.

The biggest reason I don't want to learn another language is that I already write all of my code with LLMs… I don't think Modular can train a critical mass of Mojo developers before LLMs learn to write genuinely hard C++ or Rust.

So I don’t see why I should learn another language.

To sum up, I believe Mojo is the right answer at the wrong time.

One could object that LLMs will eventually be able to write Mojo too, and that by then Mojo might be the better language because it's built on MLIR. My point is that you need a large corpus of existing code to train good LLMs for a language, and Mojo won't have that for the next 5–10 years…

9 Likes

MAX, the AI framework Mojo relies on for matrix multiplication, is not open source (the source code isn't even available!) and ships with a very restrictive license, so hardly anyone will bother to learn it.

Mojo tries to cram a lot of different language ideas into a Python superset, which makes it awkward to use. For example, you can define a function with either fn or def, and you can define a type with either struct or class.
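To make the duplication concrete, here is a minimal sketch of the two function styles, based on Mojo's published syntax (details may have changed between releases):

```mojo
# `def` behaves like Python: dynamically typed arguments,
# flexible, permissive semantics.
def greet(name):
    return "Hello, " + name

# `fn` is the strict mode: argument and return types must be
# declared, and the compiler enforces them.
fn add(x: Int, y: Int) -> Int:
    return x + y
```

The idea is that def-style code is for prototyping and fn-style code is for performance-critical paths, but it does mean two dialects living in one language.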

I promise you that Mojo will fail miserably, just like Swift for TensorFlow did.

9 Likes

Well put. I don't see it being widely adopted given the closed source and restrictive licensing.

8 Likes

Your first point is exactly right, and the project does seem to be heading that way. But I disagree with your second point. I love the idea of being able to fly high (the Python/def direction) and dive deep (the Rust/fn direction) in the same language and ecosystem: it would let me prototype quickly and gradually harden those prototypes into faster code. I wish a language could pull this off. Switching between C++ and Python isn't hard, but it isn't seamless either.

6 Likes

The only reason I'd even consider Mojo is that I love Lattner's work. Actually writing things in Mojo doesn't seem worth the extra effort to me.

6 Likes

I disagree with the main point. I still can't get LLMs to write anything beyond very simple things like .gitignore files, though they're great for bash scripts. Maybe it's because LLM tooling always seems to be a week old, but they mix up parameters far too often in weakly typed languages.

I do agree, though, that Mojo won't survive. Ten years ago, deep learning needed its own language. But the field has never moved slowly enough for an entire language to mature. It has only been commercially viable since around 2013–2015, which is roughly as long as Go and Rust have been around, and Go's tooling was terrible and barely usable before 2019.

Hugging Face and friends can now cover all but the most exotic use cases in three lines of code. At the other extreme, custom CUDA kernels and collective operations are making the cutting edge more complicated than ever.

I'd really like to be wrong about this, because I like Lattner and the idea, but I don't think it will pan out in any meaningful way. It simply missed the window. I believe an inflection point is coming where SoCs become the next big thing, unless some breakthrough makes LLMs much smaller.

4 Likes

A while ago, I offered £1000 on LinkedIn to anyone who could come up with a programming problem that Claude couldn't solve within three to five prompts. Only one person took up the challenge, and he backed out when I asked to see his attempted and failed prompts before trying my own.

In the end, it looked like he hadn’t tried hard enough with his prompts.

Nvidia, Microsoft, Google, Tesla, and a handful of other companies use LLMs to do the heavy lifting in their software engineering. Jensen himself mentioned this on the most recent earnings call.

Don’t give up on the LLM if it doesn’t give you what you want.

3 Likes

I've tried to use it a few times. The main problem I keep hitting is that it doesn't yet work with all of my Python functions.

I could see myself using it more once they get closer to being a true, frictionless superset, though.

2 Likes

I use LLMs to help write all of my code, and I teach my teams to do the same. Beyond that, much of the software we build has agentic AI features. I'm an early adopter, and I'm genuinely excited about AI writing code.

All of that said, LLMs are at their worst at systems engineering and at inventing new approaches. I'm not holding my breath for this generation of LLMs to write Rust so well that Mojo has no reason to exist.

1 Like

I agree that LLMs still have limitations in areas like systems engineering and innovation. While they can be helpful for certain tasks, human expertise is still crucial for complex problems.

The fact that only one person took up the challenge doesn't necessarily prove much about LLM capability. It could simply be that the reward wasn't high enough to motivate more people to try.