Stephen Wolfram's new big thing
Stephen Wolfram, controversial author of A New Kind of Science, possible crank, and inventor of Wolfram Alpha (which has helped me solve many a math homework), has come up with the claim that he has invented a new computational paradigm. The initial blog post was quite vague, throwing around buzzwords such as "cloud" or "natural language" and dubious ideas such as "algorithm automation" (aren't algorithms automatic already?) and "computable documents" (I presume he means his proprietary format, CDF, but that doesn't make sense: you can use any programming language to generate, i.e. compute, any document).
Yesterday, VentureBeat published an interview with him, and the article came off as a bigger worshipper of Stephen Wolfram than Gizmodo is of Apple products.
I think that SW presents many things as being new, even if they are quite old. Or at least stretches the truth.
There’s a whole new level of automation and a completely divergent approach to building a programming language, away from the small, agile core with functionality pushed out to libraries and modules and toward a massive holistic thing which treats data and code as one.
There is a reason why programming languages are built with a small core and have much of the functionality in libraries: it's so you are not stuck with the same glove whether you have 3, 4 or 5 fingers. SW might be a genius, but he couldn't have possibly thought of every possible use case, and if a certain functionality is baked into the language, it's probably going to be painful to use something else instead.
Also, treating data and code as one? That's a feature of one of the first programming languages, LISP, which appeared in 1958. S-expressions ftw.
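To make the point concrete, here is a minimal sketch of the Lisp idea, written in Python for consistency with the rest of this post: programs are just nested lists that an evaluator walks over, so other code can build and rewrite programs as ordinary data. The evaluator and operator table are invented for illustration.

```python
# Code-as-data, Lisp-style: an S-expression is just a nested list,
# and evaluation is a plain function over that data structure.
import operator

OPS = {"+": operator.add, "-": operator.sub, "*": operator.mul}

def evaluate(expr):
    """Evaluate an S-expression encoded as nested Python lists."""
    if isinstance(expr, (int, float)):
        return expr
    op, *args = expr
    return OPS[op](*(evaluate(a) for a in args))

# The "program" is an ordinary data structure...
program = ["+", 1, ["*", 2, 3]]

# ...so other code can inspect and transform it before running it.
doubled = ["*", 2, program]

print(evaluate(program))  # 7
print(evaluate(doubled))  # 14
```

That is the whole trick: since code is data, metaprogramming comes for free, and it has since 1958.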
“The knowledge graph is a vastly less ambitious project than what we’ve been doing at Wolfram Alpha,” Wolfram says quickly when I bring it up. “It’s just Wikipedia and other data.”
Google wants to understand objects and things and their relationships so it can give answers, not just results. But Wolfram wants to make the world computable, so that our computers can answer questions like “where is the International Space Station right now.” That requires a level of machine intelligence that knows what the ISS is, that it’s in space, that it is orbiting the Earth, what its speed is, and where in its orbit it is right now.
That’s not static data; that’s a combination of computation with knowledge. WolframAlpha does that today, but that is just the beginning.
Search engines aren’t good at that, Wolfram argues, because they’re too messy. Questions in a search engine have many answers, with varying degrees of applicability and “rightness.” That’s not computable, not clean enough to program or feed into a system.
I don't really get why understanding objects and the relationships between them results in static data only. If the relationship between Earth and the ISS is given by the equation of an ellipse (or whatever its orbit is), parametrized by time, it's pretty easy to answer the query "Where is the ISS right now?". Google can already solve equations and plot 2D graphs, so solving equations between objects it already knows about is not exactly new territory for it.
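Here is a toy version of that argument: once the relationship is an equation parametrized by time, "where is it right now?" becomes a plain function call. The orbit below is an idealized circle; the altitude and period are rough real-world figures, and everything else is made up for illustration.

```python
# "Where is the ISS right now?" answered from a parametrized orbit.
# Idealized circular orbit in the equatorial plane -- a sketch, not
# real orbital mechanics.
import math
import time

EARTH_RADIUS_KM = 6371
ISS_ALTITUDE_KM = 420          # approximate
ISS_PERIOD_S = 92 * 60         # one orbit in roughly 92 minutes

def iss_position(t):
    """Position (x, y) in km on an idealized circular orbit at time t."""
    r = EARTH_RADIUS_KM + ISS_ALTITUDE_KM
    angle = 2 * math.pi * (t % ISS_PERIOD_S) / ISS_PERIOD_S
    return (r * math.cos(angle), r * math.sin(angle))

x, y = iss_position(time.time())
print(f"Toy-model ISS position: x={x:.0f} km, y={y:.0f} km")
```

The "knowledge" part is the two constants; the "computation" part is one line of trigonometry. Nothing here needs a new paradigm.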
there are 11,000 pages of documentation already.
It hurts to even think about that much documentation. And to think somebody wrote that... *shudder*
“In general, what we’re trying to do is so that as long as a person can describe what they want, our goal is to get that done. A human defines what the goal should be, and a computer does its best to figure out what that means, and does its best to do it,” Wolfram says.
Isn't that what Prolog and other logic programming languages were promising to do? You just say what you want, and the compiler/runtime figures out how to best do it. And look where they are now. Getting half a semester of study in college.
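The Prolog flavor of "describe what you want" can be reduced to a few lines. Below is a crude sketch in Python: you state facts, then state the shape of the answer you want, and a generic matcher does the searching. The facts and the `query` helper are invented for illustration.

```python
# Declarative querying, Prolog-style: state facts, then ask by pattern.
FACTS = [
    ("orbits", "iss", "earth"),
    ("orbits", "moon", "earth"),
    ("orbits", "earth", "sun"),
]

def query(pattern):
    """Return all facts matching the pattern; None is a wildcard."""
    return [
        fact for fact in FACTS
        if all(p is None or p == f for p, f in zip(pattern, fact))
    ]

# "What orbits the Earth?" -- we describe the answer, not the search.
print(query(("orbits", None, "earth")))
```

Real Prolog adds unification and backtracking on top of this, but the promise was the same: say what, not how.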
In about 30 seconds, Wolfram created a small web application that drew circles on a web page and included a user interface so a visitor could make them bigger or smaller, or change their colors. That’s doable simply because the Wolfram language — with its access to a vast reservoir of knowledge — knows what a circle is and can make it, and it automatically provides web-native user controls to manipulate it. It was a trivial example, but in another 30 seconds, Wolfram built a code snippet that defined the countries in South America and displayed their flags. Then he called up a map of Europe and highlighted Germany and France in different colors computationally, in seconds.
The first one would take about... 5 minutes using d3.js? The second one is probably doable in another 5 minutes, out of which 3 will be spent searching on Google for an API that returns countries and flags. I'm pretty sure searching Wolfram's documentation will take longer in the beginning (or we will use Google to search through it ;) ).
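For the flags demo, you don't even need an API. Here is a rough sketch in plain Python: the country list is typed in by hand (deliberately incomplete, for illustration), and each flag emoji is computed from its ISO 3166 code via Unicode regional indicator symbols.

```python
# South American countries and their flags, with no special language
# support: flag emoji are two regional indicator symbols, derived
# directly from the ISO 3166 two-letter country code.
SOUTH_AMERICA = {
    "Argentina": "AR", "Brazil": "BR", "Chile": "CL",
    "Colombia": "CO", "Peru": "PE", "Uruguay": "UY",
}

def flag(iso_code):
    """Build a flag emoji from a two-letter ISO country code."""
    return "".join(chr(0x1F1E6 + ord(c) - ord("A")) for c in iso_code)

for country, code in SOUTH_AMERICA.items():
    print(f"{flag(code)} {country}")
```

Ten lines, standard library only, and most of the time went into typing the country names.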
In other words, “South America” is not a variable to be assigned, or an object or class to be instantiated. It’s a phrase that is known and understood, with significance and meaning and connections that can be pulled into your program with very little effort, and no external data sources. And, that knowledge source is continually updating and growing to match the updating and changing world.
No external data source? But it is continually updating? How? Telepathically? It is external to the application. It will be hosted on Wolfram's servers.
“It will spawn a whole mass of new startups,” Wolfram told me. “Now it becomes realistic for someone to build out a complete algorithm and automation system in a few hours.”
It also changes who can program, because instead of programs being tens of thousands of lines of code, they’re 20 or 200. And that means kids can code or novice programmers can get started — and build significant apps.
Really? You can whip up a fancy new algorithm but you can't code? I don't think you can learn algorithms just on paper, using pseudocode; it's quite worthless if you don't get to write your own code. There are always small catches that are not obvious on paper, such as constant factors that make an algorithm less efficient than alternatives for inputs smaller than asymptotic infinity.
Oh, and what, kids can't learn to code? You don't have to teach them pointers right away, but Python is a pretty simple language to learn - and it doesn't shoehorn any one paradigm onto you.
it’s so different from a traditional neat separation of data and code and interface.
Traditional neat separation? Have you ever heard about spaghetti code? There is a reason why it's considered best practice to separate data from code (logic) and interface: usually you can do more than one thing with the same data, you can show the same thing in more than one way, or you can show different things in the same way.
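That point is easy to demonstrate: keep the data in one place and you can render it any number of ways without touching it. The records below are made up for illustration.

```python
# One data source, two presentations: separating data from interface
# means adding a view never requires changing the data.
import json

countries = [
    {"name": "France", "capital": "Paris"},
    {"name": "Germany", "capital": "Berlin"},
]

def as_table(rows):
    """Render the records as a plain-text table."""
    return "\n".join(f"{r['name']:<10} {r['capital']}" for r in rows)

def as_json(rows):
    """Render the same records as JSON, e.g. for an API."""
    return json.dumps(rows, indent=2)

print(as_table(countries))
print(as_json(countries))
```

Fuse data, code, and interface into "one massive holistic thing" and every new view means touching everything at once.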
All code, however, can simply be copied and pasted between cloud and device and desktop — it’s all the same.
Really? You're encouraging people to copy-paste code? What happened to reusability? To Don't Repeat Yourself?
“Another way to do it is to use a function call from a native language like Java,” Wolfram told me. “You’ve got variables in your Java code, and we synthesize the code you need to go from Java to the form that you need it in to send it into our engine, and then sent results back to Java. It will look as if you’re just calling Java, but it will be reaching out to our cloud.”
And that is new in what way? That's the definition of remote procedure calls, which have been around since the 1980's.
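In fact, Python has shipped RPC in its standard library for years. Here is a self-contained sketch: a server registers a function, and the caller invokes it through a proxy as if it were local, exactly the "looks like a plain call, runs elsewhere" behavior Wolfram describes. The `add` function and port choice are mine, for illustration.

```python
# Remote procedure calls with nothing but the standard library:
# the caller sees an ordinary function call, the work runs "elsewhere".
import threading
from xmlrpc.client import ServerProxy
from xmlrpc.server import SimpleXMLRPCServer

# Bind to port 0 so the OS picks a free port.
server = SimpleXMLRPCServer(("localhost", 0), logRequests=False)
server.register_function(lambda a, b: a + b, "add")
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

# To the caller, this is just a method call on an object.
client = ServerProxy(f"http://localhost:{port}")
result = client.add(2, 3)
print(result)  # 5
server.shutdown()
```

Swap XML-RPC for CORBA, Java RMI, or SOAP and the story is the same: "reaching out to our cloud" is a function call over a network, a technique older than some of its marketers.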
So, there is a lot of hype and exaggeration in what we have heard so far about Stephen Wolfram's new project. I'm curious what it will actually do - and whether anyone will actually use it.