Wednesday, September 24, 2008

New web server design for a new web?

Over the last couple of months I have been trying hard to understand (grok) this changing web. There has been a lot of talk about web 2.0, and as much as JavaScript and dynamic HTML are an important part of this change, I think it is more than that. The web is far from fat clients, and in some ways that is a positive: applications don't get lost in eye candy. There is a more important aspect, I believe, to this new web. For now I will call it the hive. The desktop applications I have used or seen, even the network-aware / connected ones, are not taking advantage of the hive. The combination of intelligent data with dynamic HTML is creating really useful applications: applications that don't just process requests, but also analyze the data they are given.

These are the kind of applications I would like to build. However, I am not sure the current crop of web servers/frameworks provides what they need. While I am at it, let me get on my soapbox about issues with current web servers / frameworks / development environments.

Fast
It is 2008; I am running a dual-core 64-bit laptop with 2 gigs of RAM. Render time should be the only performance issue, yet it seems we are still struggling with web serving. In a word: App Engine is a single process, so if you have a bunch of static files it is SLOW! There has been a lot of talk about threads, and I don't really need to cover the topic other than to say that threads are more pain than they are worth, and I have yet to see them provide better performance than processes. Finally, as I have blogged many times before, asynchronous file serving is so much faster and simpler for serving static files that it is crazy not to use it. The new web seems to be a lot of little requests, which means more small requests, and it is going to require a lot more static resources. Development time matters too: every second I have to wait for a page to load adds up, and if I am not having a good user experience in development, how can I feel confident users will have one when using the application? Performance matters at the development level, and if it is fast in development it should be even easier to make fast in production.
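As a concrete illustration of the asynchronous approach, here is a bare-bones static file server built on the standard library's asyncore/asynchat modules. It is only a sketch, assuming a hypothetical ./static document root, and it skips everything a real server needs (MIME types, path sanitizing, streaming large files):

import asyncore
import asynchat
import socket
import os

DOCROOT = "./static"  # hypothetical document root

class StaticHandler(asynchat.async_chat):
    """Handles one client connection without blocking the process."""

    def __init__(self, sock):
        asynchat.async_chat.__init__(self, sock)
        self.set_terminator("\r\n\r\n")  # wait for the end of the request headers
        self.ibuffer = []

    def collect_incoming_data(self, data):
        self.ibuffer.append(data)

    def found_terminator(self):
        # Pull the path out of the request line, e.g. "GET /foo.css HTTP/1.0".
        request = "".join(self.ibuffer)
        path = request.split(" ", 2)[1].lstrip("/") or "index.html"
        full = os.path.join(DOCROOT, path)
        if os.path.isfile(full):
            body = open(full, "rb").read()
            self.push("HTTP/1.0 200 OK\r\nContent-Length: %d\r\n\r\n" % len(body))
            self.push(body)
        else:
            self.push("HTTP/1.0 404 Not Found\r\n\r\n")
        self.close_when_done()

class StaticServer(asyncore.dispatcher):
    """Accepts connections; all sockets are multiplexed in one process."""

    def __init__(self, port):
        asyncore.dispatcher.__init__(self)
        self.create_socket(socket.AF_INET, socket.SOCK_STREAM)
        self.set_reuse_addr()
        self.bind(("", port))
        self.listen(5)

    def handle_accept(self):
        pair = self.accept()
        if pair is not None:
            StaticHandler(pair[0])

if __name__ == "__main__":
    StaticServer(8080)
    asyncore.loop()  # single process, no threads

One select loop handles every open socket, so a pile of small static requests never ties up a worker the way a blocking server does.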

Modular
The state of Python is that there is a lot of code out there, and unless you want to invent your own web server in assembly language, use and share code. WSGI is here; use it (hint, Django). There are a lot of great tools already available to make development easier.
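To show how little it takes, here is about the smallest WSGI application there is; any WSGI-compliant server or middleware can run the same callable unchanged (the greeting text and port are just placeholders):

def application(environ, start_response):
    """A minimal WSGI app: take the environ dict and a start_response
    callback, return an iterable of body chunks."""
    body = "Hello from a plain WSGI app\n"
    start_response("200 OK", [("Content-Type", "text/plain"),
                              ("Content-Length", str(len(body)))])
    return [body]

if __name__ == "__main__":
    # Serve it with the reference server that ships in the standard library.
    from wsgiref.simple_server import make_server
    make_server("", 8000, application).serve_forever()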

Robust
Whenever I develop a web application it is perfect code; I never have memory leaks or push up bug fixes that then create unstable situations. NOT! One of my favorite things about Apache is that its processes can be configured to die after a certain number of requests. This is great: memory usage doesn't just grow forever, it gets returned to the OS and reused later by another process. This is key to why I think Apache is rock solid.
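A sketch of the idea in plain Python (the limit, the queue, and handle_request are all made up for illustration): a worker that quits after a fixed number of requests, so the parent can replace it and the OS gets the memory back.

MAX_REQUESTS = 1000  # assumed limit, in the spirit of Apache's MaxRequestsPerChild

def request_worker(request_queue, handle_request):
    """Serve a bounded number of requests, then exit.

    A supervising process is expected to notice the exit and spawn a
    fresh worker, so any leaked memory dies with this process."""
    for _ in xrange(MAX_REQUESTS):
        request = request_queue.get()  # block until a request arrives
        handle_request(request)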

Simple
This seems self-explanatory, and everyone claims to have a simple framework. But real-world applications have ugly data models, and the logic that real-world problems dictate is often complex. With all that domain knowledge in my head, the last thing I need is to also wrap my head around the five code filters or whatever else is happening behind the scenes that might be causing a bug.

With all this in my head I have been thinking about a web server experiment that is architected something like this:
HTTPDProc - The asynchronous static file and client-managing process. It is responsible for serving static content, receiving data from clients, dispatching that data to the RequestProc(s), and finally sending data back to clients on behalf of the other processes.
RequestProc(s) - A process pool that runs the actual Python code and can be configured to grow based on the size of the request queue. The processes can also die after handling a configured number of requests, or maybe even when they hit some memory limit or amount of CPU time, who knows? This takes advantage of the continued growth of multicore machines, and it means the WSGI/developer code is not able to bring down the rest of the web server. (A rough sketch of this pool follows the list.)
WorkerProc(s) - At the moment, any extra work either delays the response to a request or eats up the request's CPU if it runs after the response has been flushed to the client. The other solution is generally: use cron! Cron is a wonderful tool and should be used, but I think there needs to be a third option that does not delay the response or compete for request CPU, and that does not have to be added to the already 500-line crontab. Cron tasks tend not to scale very well, and many times they are a large waste of resources, either because they burn a lot of CPU/IO just to figure out they have nothing to do, or because they never run at the frequency that would be ideal. Lots of frameworks already provide some aspects of this concept, like TurboGears, which runs an email thread. Email is a good example of what these workers could do, but they could also handle logging, statistical analysis, UI notification, indexing, and the list goes on. These worker processes could be configured to grow and could also have limits on the number of jobs or the memory used, possibly even a Parallel Python type setup with a cluster of machines handling the jobs.
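Here is the rough sketch of the RequestProc pool promised above, written against the multiprocessing API (the pyprocessing package on older Pythons). All of the numbers, the run_wsgi_app stand-in, and the manage_pool hook that HTTPDProc would call are assumptions for illustration, not a finished design:

from multiprocessing import Process, Queue  # the pyprocessing package before Python 2.6

MAX_REQUESTS_PER_PROC = 1000   # recycle a RequestProc after this many requests
MAX_PROCS = 8                  # hard ceiling on the pool size
GROW_WHEN_QUEUED = 10          # spawn another worker past this backlog

def run_wsgi_app(environ):
    # Stand-in for invoking the real WSGI application.
    return "200 OK", [("Content-Type", "text/plain")], ["hello\n"]

def request_proc(in_queue, out_queue):
    """A RequestProc: handle queued requests, then exit so memory is reclaimed."""
    for _ in xrange(MAX_REQUESTS_PER_PROC):
        request_id, environ = in_queue.get()
        out_queue.put((request_id, run_wsgi_app(environ)))

def manage_pool(in_queue, out_queue, procs):
    """Called periodically by HTTPDProc: reap dead workers, grow with the backlog."""
    procs[:] = [p for p in procs if p.is_alive()]
    if len(procs) < MAX_PROCS and (not procs or in_queue.qsize() > GROW_WHEN_QUEUED):
        p = Process(target=request_proc, args=(in_queue, out_queue))
        p.daemon = True
        p.start()
        procs.append(p)

if __name__ == "__main__":
    requests, responses, pool = Queue(), Queue(), []
    requests.put((1, {"PATH_INFO": "/"}))   # pretend HTTPDProc queued a request
    manage_pool(requests, responses, pool)
    print responses.get()                    # (1, ("200 OK", [...], [...]))

Because the requests and responses travel over queues, the same layout should extend to WorkerProc(s): HTTPDProc (or a RequestProc) drops a job on a separate queue and nothing ever blocks a client response waiting for it.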

I hope to have a working prototype next week, leveraging the pyprocessing stuff and an asynchronous web server that I found an example of. The Frisky code base will be converted to use this web server and to support WSGI. Any thoughts are welcome!