This is a discussion about merging Internet resources with Python.
- syntax changes? are we interested in that?
- a basic URL type
- import hooks for importing from URLs (e.g., http://stdlib.python.org/2.3.4/pickle)
- NetworkedData: native data, networked across the Internet
- native support for ComponentBus architectures (see notes below)
- native inclusion of Twisted, or something like Twisted
Importing a URL is a very bad idea. First, there are security issues, as you are executing code over which you have no control. Also, you don't generally want libraries to be upgraded behind your back. The standard library gets upgraded, but a lot of work goes into keeping it backward compatible -- in other libraries, this isn't the case. You want upgrades to be explicit in that case. Languages that have the ability to import URLs -- like PHP -- almost never use it.
A URL literal doesn't seem particularly useful. It would provide an alternative syntax (e.g., <http://something.com/whatever>) to a constructor call like url('http://something.com/whatever'), but since URLs tend to be dynamic or configurable, a literal doesn't add significant value. However, a good URL class would be excellent, maybe a class that has an API similar to the [http://www.jorendorff.com/articles/python/path/ path] module. It would be great if both modules were builtins (or at least in the standard library).
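To make the idea concrete, here's a minimal sketch of what such a class might look like, built on the standard library's urllib.parse and posixpath. The class name URL and its methods are hypothetical -- this is not the path module's actual API, just an illustration of a path-like interface for URLs:

```python
from urllib.parse import urlsplit, urlunsplit
import posixpath

class URL:
    """Hypothetical URL value object with a path-module-like API."""

    def __init__(self, url):
        self._parts = urlsplit(url)

    def __str__(self):
        return urlunsplit(self._parts)

    def __repr__(self):
        return "URL(%r)" % str(self)

    def __truediv__(self, segment):
        # Join a path segment, analogous to the path module's '/' operator.
        path = posixpath.join(self._parts.path or "/", segment)
        return URL(urlunsplit(self._parts._replace(path=path)))

    @property
    def scheme(self):
        return self._parts.scheme

    @property
    def host(self):
        return self._parts.netloc

    @property
    def name(self):
        # Last path component, analogous to basename.
        return posixpath.basename(self._parts.path)

u = URL("http://something.com/whatever")
print(u / "index.html")  # http://something.com/whatever/index.html
print(u.host)            # something.com
```

Note that nothing here needs literal syntax -- the object is built from an ordinary string, which is the point about literals adding little value.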
Hmm, things seem (or could be) more complex than Ian says. First, we may not mean URLs as much as we mean URIs or URNs. That is, we may want to import the equivalent of logical constants, about which we may or may not be able to learn more by dereferencing them (in the case of HTTP URLs). The Mozart/Oz language does something roughly like this.
Second, we often "execute code over which we don't have control". CPAN lets Perl users do that every day. As does PyPI. Those are manageable issues, whether you do "python setup.py install" or "import <URL>".
Third, my original idea is to start with the stdlib, not arbitrary 3rd party code.
Fourth, I don't know what "URLs tend to be dynamic or configurable" means. Even if true, and I don't think it is, I don't see how that makes a literal of no value.
Rather than be contrary again, I'll pose a question: what are the real benefits of importing from a URL, and from a URL/URI/URN literal? A URI object is of obvious value, but it's easy to imagine it being created from a string. What value is there in a literal? What can you do at the tokenizing step given a literal? What case is there for importing from a URL, when you take versions and security into account? (And C or Pyrex extensions, multiple installations, fast start-up times, unreliable network connections, etc) Like I mentioned, you can do it in PHP, but no one ever does.
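For concreteness, the mechanism under debate can be sketched as a PEP 302-style import hook. Everything here is hypothetical (the URLFinder name, the registry, the stubbed fetch standing in for a network request), and it is insecure by design -- it executes whatever the "server" returns, which is exactly the objection raised above:

```python
import importlib.abc
import importlib.util
import sys
from urllib.request import urlopen

class URLFinder(importlib.abc.MetaPathFinder, importlib.abc.Loader):
    """Hypothetical import hook mapping module names to URLs."""

    def __init__(self, fetch=None):
        # `fetch` is pluggable so the network call can be stubbed out.
        self._fetch = fetch or (lambda url: urlopen(url).read().decode())
        self.registry = {}  # module name -> URL

    def register(self, name, url):
        self.registry[name] = url

    def find_spec(self, name, path=None, target=None):
        if name in self.registry:
            return importlib.util.spec_from_loader(name, self)
        return None

    def create_module(self, spec):
        return None  # use Python's default module creation

    def exec_module(self, module):
        url = self.registry[module.__name__]
        source = self._fetch(url)
        # The security problem in one line: remote text, executed locally.
        exec(compile(source, url, "exec"), module.__dict__)

# Demo with a stubbed fetch instead of a live server:
finder = URLFinder(fetch=lambda url: "ANSWER = 42\n")
finder.register("netmod", "http://stdlib.python.org/2.3.4/netmod")
sys.meta_path.insert(0, finder)

import netmod
print(netmod.ANSWER)  # 42
```

The sketch shows the mechanism is easy; the unresolved questions above (versioning, C extensions, unreliable networks, trust) are all policy, not mechanism.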
Several languages have support for some kind of XML literal, including XEN, XDuce, o:XML, Comega, and others. More later.
Hi, my name is LionKimbro. I wrote a small hack module for NetworkedData. Basically, this means that a native data structure can expand, indefinitely, across the Internet. So if you have a list, for example, and three of the list's items are defined elsewhere on the Internet, you can just tell those items to resolve, and they'll resolve in place. A master graph keeps records of how all the sub-graphs and data structures are connected.
It's really cool (if you ask me!), and you can read more about it on NetworkedData.
I would love to see something like this built into Python, Perl, PHP, and so on. There's no need for us to keep writing parsers over and over again, and communicating data in packages, when we can just post it to the Internet and let the network take it from there.
You can build arbitrary objects with this system, because there is support (not in the implementation yet, but conceptual support) for function call resolution. Of course, you have to allow functions to run -- you don't want just anyone creating just any object with any initializer; that's too dangerous. But you can say, "allow this function to be called..." yadda yadda yadda.
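A toy sketch of the resolve-in-place idea (all names hypothetical, with a stubbed fetch in place of a live server -- this is not the actual NetworkedData implementation):

```python
import json
from urllib.request import urlopen

class Remote:
    """Hypothetical placeholder for a value defined elsewhere on the network."""

    def __init__(self, url, fetch=None):
        self.url = url
        # Pluggable fetch so the example runs without a live server.
        self._fetch = fetch or (lambda u: json.load(urlopen(u)))

    def resolve(self):
        return self._fetch(self.url)

def resolve_in_place(data):
    """Walk a list and replace Remote placeholders with their resolved values."""
    for i, item in enumerate(data):
        if isinstance(item, Remote):
            data[i] = item.resolve()
    return data

# Demo: a dict standing in for the network.
fake_net = {"http://example.org/colors.json": ["red", "green", "blue"]}
items = [1, 2, Remote("http://example.org/colors.json", fetch=fake_net.get)]
resolve_in_place(items)
print(items)  # [1, 2, ['red', 'green', 'blue']]
```

The list stays a native Python list before and after resolution; only the placeholder items change, which is the "resolve in place" behavior described above.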
Component Bus Architecture support
What does this have to do with Webizing Python? These architectures are sort of just micro-Internets, and they naturally blend the Internet and the program's space.
These architectures are basically the same as hordes of computers connected by a bus (the Internet), but on a within-the-computer scale, and they naturally bleed into the Internet. By their very nature, you can just as well have a component inside the computer as outside of it, and the program would (ignoring pragmatics like bandwidth) run just the same.
For most programs, you'd like people across the world to be able to observe some parts of them. This implies an addressable space. Why hook up to an adaptor, when you could plug straight into the event bus (permissions permitting)?
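The location-transparency claim can be sketched with a toy publish/subscribe bus (the EventBus name and API are hypothetical). The point is that a local handler and a stand-in for a remote one share the same interface, so the program runs the same either way:

```python
class EventBus:
    """Hypothetical component bus: components subscribe to topics by name."""

    def __init__(self):
        self._subscribers = {}  # topic -> list of handler callables

    def subscribe(self, topic, handler):
        self._subscribers.setdefault(topic, []).append(handler)

    def publish(self, topic, payload):
        for handler in self._subscribers.get(topic, []):
            handler(payload)

bus = EventBus()

# A local component observing a topic...
seen = []
bus.subscribe("temperature", seen.append)

# ...and a stand-in for a remote one: same interface, but in a real
# system this callable would forward the event over the network.
remote_log = []
bus.subscribe("temperature", remote_log.append)

bus.publish("temperature", 21.5)
print(seen, remote_log)  # [21.5] [21.5]
```

Giving topics URI-style names would be one way to make the bus an addressable space in the sense described above.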
[http://www.w3.org/2002/Talks/0206-python/all.htm Tim Berners-Lee: Webizing Python]
[http://logicerror.com/webizingPython Aaron Swartz: Webizing Python] (native URIs, remote objects, URIs for objects)
[http://effbot.org/zone/idea-xml-literal.htm Fredrik Lundh: XML literals]
NetworkedData, [http://onebigsoup.wiki.taoriver.net/moin.cgi/nLSDgraphs nLSD graphs] (native data that extends across the network)