Back from ASPIC'2000

François-René Rideau
France Telecom R&D - DTL - ASR
38-40 rue du Général-Leclerc
92794 Issy-les-Moulineaux Cedex 9 FRANCE

Here is my report about ASPIC'2000 that took place on Monday 2000-04-03.

This report is based on the notes I took and my recollections, but it is full of gaps. Although other attendees proof-read it, this account inevitably reflects my own point of view and interests, with some discussions being stressed while others were elided or forgotten.

I made no attempt at preserving the chronological order of the talks, and instead tried to group things together by theme. On the other hand, I also tried to expand and explain a few things that were said but not otherwise discussed, since the participants seemed to already be on the same wavelength. As a result, this report contains both more and less than was said during the symposium.

Intro

Pliant is a computing system based on a dynamic compilation infrastructure driven by a reflective language. It took its original author, Hubert Tonneau, 15 years of effort to go from the initial concept to the first usable implementation as it exists today.

The people

ASPIC began around 10 o'clock. There were 12 participants at this first edition. Since all were French speakers, French was used as the language of the symposium. Since there were so few people, all with converging interests, the atmosphere was very relaxed.

From Heliosam, there were Hubert Tonneau (HT), Loïc Dumas (LD) and Jean-Michel Tonneau. From the CAMS, there were Patrice Ossona de Mendez (POM), Pierre Rosensthiel and Hubert de Fraysseix. From Croissancenet, there were Jérôme Jubelin (JJ) and Gilles Grandon. Also present were Michel de Mendez (MM), Yvan Gaudin (YG) and Michel Deza; and, of course, from FTR&D DTL ASR, there was yours truly.

Heliosam is a company that does "heliogravure" (rotogravure printing), but it has plans to provide documentation, and maybe also training, around Pliant for individuals or companies that plan to start contributing. HT explained that for Heliosam, Pliant was the way to achieve cutting-edge computerization of its services at an affordable price. LD explained how he became a webmaster and programmer at Heliosam using Pliant despite having little previous experience, and how the Pliant .page format allows him to do on the server side what CSS promised to bring to HTML browsers: he creates style files from HTML produced by a WYSIWYG tool (such as WebExpert, which produces code 5 times smaller than FrontPage).

POM explained that the CAMS is planning to use Pliant as the language for reimplementing their graph manipulation tools. A true object-oriented implementation of these tools in C++ would lead to a huge mess that grows exponentially in complexity with the number of independent graph properties studied; hence, the actual C++ version is only loosely object-oriented and has to check properties at run time, whereas reflection would allow an efficient, truly object-oriented implementation in Pliant (of complexity linear in the number of independent properties).

JJ explained that Croissancenet is a startup specialized in collaborative workflow applications targeted at commercial departments: consulting, and implementation of secured extranets and intranets. JJ has 10 years of experience in the targeted business and wants to use it to adapt today's tools accordingly. Their main current tools are Lotus Domino and HTML. They intend to invest in Pliant as a way to federate and integrate existing services. Its reflective architecture is expected to simplify the work of developers as well as that of service deployers and end-users: the control it gives should make it possible to tailor system parametrization so that even lightly qualified staff can deploy systems.

FTR&D DTL ASR is interested in reflective techniques to build flexible distributed systems, and I am investigating them in the context of a PhD on reflective systems.

Interfacing Pliant

Pliant is a platform whose dynamic interface is the web, and is going to stay that way for quite some time, unless someone steps up to code a full-fledged interface.

Netscape, whether on Windows, MacOS, or Linux, is the browser of choice for interfacing Pliant, and the approach currently investigated to enhance the Pliant interfacing experience is to develop a Netscape plugin (someone proposed writing the plugin portably in Java).

On the security front, HT has ported to Pliant a few well-known algorithms such as RSA and Blowfish, and the overall functionality is currently reduced to that of a proxy for communication between a Web browser and a Pliant server. However, it seems that the way to go for secure browsing is SSL, so this feature might be temporary, until Pliant groks SSL. Another problem with encryption is that it may not be legal to export it from France, so unless we choose to link to some standard external library (openssl), a full Pliant reimplementation of SSL would have to be written and published from Germany, Canada, or another such free country.

The real plans for the plugin, however, are to enhance browser interaction with something along the lines of VNC, with rectangles of Pliant interaction being inserted alongside the passive HTML (rather than doing fullscreen/fullwindow VNC). HT talked about his experience configuring and fixing remote computers with either X or VNC over 64kbps lines, with VNC being much better. VNC looked to him like the right protocol (or kind of protocol) to use or copy.

Another proposed solution to enhance interaction with Pliant through web pages was to insert a collection of Javascript hacks, but it seemed that this would be a lot of ugly and complex machinery with limited power.

The main advantage of web-browser based solutions is that they don't require people to upgrade their client hosts to Linux, when they might need to access services that only have clients under Windows. [Maybe that'll change if WINE or Plex86 is successful enough]

In the long run, it remains to be seen what capabilities will be included in Gecko/Mozilla, and what capabilities must be added to it with a plugin, so as to adapt Pliant interface development to what is expected to become the common browser. 2001 is when this question will be raised. In any case, HTML won't keep getting indefinitely more complicated, if only because of the interoperability problems raised by increasing complexity; the server, not the browser, is where the intelligence is to be expected.

In the longer run, a real solution might be to build a real interface for Pliant, something like Emacs with Pliant inside. The concerns were that depending on Xlib and/or communicating with the X server would be hell and would introduce instability; a solution would be to split them into a separate process, and use a VNC-like solution as with the intended Pliant plugin. However, there seems to be no universal compromise in splitting tasks between client and server, and if tasks must be split, then the browser+plugin model is as good as any other. Suggested systems on top of which to build an X interface, or to draw inspiration from, were Efuns, DrScheme, and Oberon (inspiration only).

Miscellaneous: LD explained how dynamic documentation could be integrated with the user interface so that beginners can easily learn as they do and do as they learn. HT suggested that one simple trick for learning programming was a command line that showed the equivalent command to type for every click; even on the web, some Javascript might do it, just as JS can handle drag-and-drop with VNC subwindows in HTML. POM said that 60% of his mistakes in Pliant are due to his mismanaging indentation when code is longer than one or two screenfuls, which raised the question of the usefulness and feasibility of an Emacs mode or web tool to manage indentation properly.

Persistence with Pliant: a Tree-based Distributed Data Model

HT explained that he has quite a bit of (bad) experience with the previous infosystem architecture at Heliosam, which used relational databases. He shared his vision of how he intends to use Pliant in the new generation of this infosystem.

To him, the relational database model works well with a centralized server, but doesn't scale to a distributed architecture. Relational tables are not meant to be distributed. Yet only distribution allows high availability for world-wide services.

The model proposed by HT is to organize persistent data into a structural tree, a privileged one among those that cover the graph structure of the data. This tree can be distributed much more easily than tables; a straightforward way to implement it is as a distributed hierarchical file system of XML files. The infosystem architect selects the granularity at which the system is split into files; typically, each customer, order, etc., would correspond to a file that describes the attributes of said customer, order, etc. Division into files is made so as to facilitate distribution of service; at each moment, each file is somewhere on the distributed network of servers; since customers are localized, having each of them match a file makes sense (in a typical infosystem). More generally, the datastructure tree is designed so as to facilitate the expected dataflow of the intended services. Such a model allows for localized synchronization between servers with weak and affordable coherence constraints (e.g. file modifications need not be propagated immediately), unlike the relational model, which typically requires expensive strong coherence constraints (think of mirroring huge modified global tables instead of small modified local files).

XML files can also be served as HTML files with hidden tags, so that the very same pages can be served both for human visualization and for consumption by automated scripts. One advantage of the tree data model is that many services need not understand all parts of the tree, as long as they propagate them, whereas in the relational model, services typically need to be aware of all the tables' fields, because records are a fixed set of fields.

However, Pliant's reflection is most useful here, in allowing automated marshalling and translation between internal data graphs or classical structures and an external XML (or whatever) representation. And since this is dynamic compile-time reflection, this translation is both seamless and efficient, unlike what you get with other languages/systems.
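
To make the idea concrete, here is a minimal sketch, in Python rather than Pliant, of reflection-driven marshalling to one XML file per record, in the spirit of the data model described above; the Customer class, its fields and the file name are invented for the example. The important difference is that Pliant performs the equivalent field walk at dynamic compile time, so the generated marshalling code pays no reflection overhead at run time.

    # Illustrative sketch only (not Pliant): a hypothetical Customer record
    # is marshalled to its own XML file by walking its fields via reflection.
    import dataclasses
    import xml.etree.ElementTree as ET

    @dataclasses.dataclass
    class Customer:
        ident: str
        name: str
        city: str

    def marshal(record, path):
        # One XML element per field, discovered by reflection over the record.
        root = ET.Element(type(record).__name__.lower())
        for field in dataclasses.fields(record):
            ET.SubElement(root, field.name).text = str(getattr(record, field.name))
        ET.ElementTree(root).write(path, encoding="utf-8")

    def unmarshal(cls, path):
        root = ET.parse(path).getroot()
        return cls(**{f.name: root.findtext(f.name) for f in dataclasses.fields(cls)})

    # Each customer gets its own small file, easy to mirror between servers.
    marshal(Customer("c042", "Some Customer", "Paris"), "customer-c042.xml")
    print(unmarshal(Customer, "customer-c042.xml"))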

XML is by no means a perfect framework, but since it's standard and relatively simple, it looks like the way to go. HT thinks that there shouldn't be an unnecessary multiplication of languages. The Pliant programming language for code, XML for data: that seems enough. Lots of domain-specific code and data languages could be devised, but it seems better to HT to embed them inside Pliant and XML than to reinvent the wheel every time.

A first working prototype by HT is meant to be deployed within three months at Heliosam. Note: there was no claim by HT about this model being original.

Managing Distributed Systems with Pliant

HT and LD explained that they manage a fleet of heterogeneous Debian machines with Pliant; the code is currently too specific to the setup at Heliosam to be published, but they are working on documenting it enough to publish it.

Their Pliant engine maintains a database of hardware available on each machine and of software to be deployed there. The Debian system makes it easy to save the state of a machine (list of installed packages, configuration files for each package), so that in the event that anything breaks, an older state can be easily recovered. Pliant, however, cannot go directly through the Debian package installation, since many Debian packages still assume an interactive session with the system administrator; instead, it must edit some Debian installation scripts to replace the interactive questions with unconditional settings, depending on the machine database.

A bug in Perl even prevented dpkg from being called from Pliant; otherwise, some packages couldn't install their documentation (!?). According to HT, the Debian installation system is a hellish collection of monster Perl scripts.

To avoid the problems of such forced upgrades/downgrades, HT expressed the wish that the heart of Pliant stay as independent as possible (at least at runtime) from any external software, although some advanced options might depend on such software. Someday, Pliant could be written entirely in Pliant, and be able to dump proper file headers to produce complete standalone executables.

As for the Windows version, MM expressed the wish that it become a toolbox that implements services removed from the latest Windows releases (such as the FTP server). [Note from FRR: Together with a Linux kernel, it could even provide a "superuser" mode to remotely manage the installation/update/recovery of Windows systems]

There was quite some discussion about how to make Pliant easy to deploy on Windows machines with as little and as simple user interaction as possible. Ideally, the automatic installation would decompress the Pliant installation, start a Pliant web server on a predefined or autodetected port, and open a browser with a configuration page served by that server; the server would then configure itself, restart, and set itself to autostart. No one volunteered to implement this wished-for feature. Some joked that no Windows user would consider Pliant a serious program unless it asked the user to reboot the machine after installation; a possible solution was to sell an enhanced commercial edition apart from the freeware edition, and have the commercial edition ask for a reboot as a "plus" (or was it the other way around?).

HT complained about how difficult existing systems make it to upload a file from one system to the other. I asked HT how he intended running systems to detect updated files and to handle synchronization. HT said that the current Pliant HTTP server unconditionally caches pages for 15 seconds (the exact value can be changed using the 'dynamic_page_recheck_delay' option of the HTTP server), but checks for an update in the source when a hit cache entry is older than that; however, this doesn't solve the case where modules used by an untouched page have themselves been updated. It would be too expensive for Pliant to track a precise dependency graph for every object, so the current simple solution is to kill and restart the Pliant server when some metacode changes.
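
As a minimal sketch of how I understood the recheck mechanism (illustrative Python, not the Pliant server's code, and the names other than the option mentioned above are mine): a cached page is served unconditionally while it is younger than the recheck delay, and only compared against its source's modification time after that.

    # Minimal sketch of the recheck-delay caching idea (not Pliant's code).
    import os
    import time

    RECHECK_DELAY = 15.0   # counterpart of 'dynamic_page_recheck_delay'
    _cache = {}            # path -> (built_at, source_mtime, html)

    def build_page(path):
        return "<html>... generated from %s ...</html>" % path   # placeholder

    def get_page(path):
        now = time.time()
        entry = _cache.get(path)
        if entry is not None:
            built_at, source_mtime, html = entry
            if now - built_at < RECHECK_DELAY:
                return html                    # served without any check
            if os.path.getmtime(path) == source_mtime:
                _cache[path] = (now, source_mtime, html)   # still fresh, re-arm
                return html
        html = build_page(path)                # source changed or not cached
        _cache[path] = (now, os.path.getmtime(path), html)
        return html

As in the discussion above, such a check only looks at the page's own source file, not at the modules the page uses.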

The Pliant Architecture: a System built atop a Code Generator

HT said that Pliant was originally designed as a code generator, not as a reflective system. It just happened that the reflective architecture emerged naturally as the language and its code generator were developed hand in hand.

To HT, dynamic compilation is fundamental. Twisted kludges have been invented to partly compensate for the fact that compilation was purely static: DLLs, interpreters, option parsers, etc. But they are basically inflexible kludges: they only replace a tiny part of what dynamic compilation allows, they introduce a huge complexity into the system, and they induce a terrible performance penalty.

For instance, if the Linux kernel had dynamic compilation, there could be only one kernel for everyone, yet it would run in an optimized way on each computer. Instead, Linux users currently have to recompile their kernel to get one that is fully adapted to their hardware, and this is a barrier to entry into the world of Linux. Another example was the CAMS's graph manipulation library, which in C++ would grow exponentially with the number of graph properties studied (in fully object-oriented programming), as a new class had to be generated for every combination of properties, whereas with Pliant, user-accessible methods are defined and optimization tactics are enabled depending on the properties satisfied by the considered graph.
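
A small sketch of the combinatorial point, with hypothetical graph properties and routines: with one static class per combination of properties, n independent properties would require 2^n classes, whereas choosing a tactic from the property set at a single dispatch point keeps the code linear in n. As described above, Pliant can make this kind of choice at dynamic compile time, so the specialized code is generated once per property set rather than re-tested at every call.

    # Sketch only; the graph representation and property names are hypothetical.
    # One static class per property combination means 2**n classes
    # (Graph, AcyclicGraph, UnitWeightGraph, AcyclicUnitWeightGraph, ...).
    # Dispatching on the set of properties keeps the code linear in n.

    def shortest_path(graph, properties):
        # Pick the best tactic that the graph's known properties allow.
        if "acyclic" in properties:
            return dag_shortest_path(graph)          # topological-order pass
        if "unit_weights" in properties:
            return breadth_first_shortest_path(graph)
        return dijkstra(graph)                        # general fallback

    def dag_shortest_path(graph): ...                 # placeholder specialisations
    def breadth_first_shortest_path(graph): ...
    def dijkstra(graph): ...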

Dynamic compilation is useful both for managing applications and within applications. To properly interact with their environment, programs need some knowledge of themselves (reflection). Code isn't only meant to be executed; sometimes, it is inspected by other code (for instance, code that analyzes other code to implement marshalling/persistence/etc. of its data).

There is undeniably a cost to embedding a compiler in a system. But the cost of not embedding the compiler in a system, although hidden, is very large, too: every programmer must then reinvent from scratch limited minicompilers, option and configuration file parsers, etc., to interface each application with its environment. When you add up all the small costs paid by every single application, you end up with a much higher cost than in a system with dynamic compilation. HT reported that in a user-space Linux kernel interface he read, the 50 lines of useful code are buried in 500 lines of parsing code that serves to interface with the external world.

I suggested that Pliant could make use of external code generators, or code generation libraries (such as ccg), and maybe share efforts with other groups using them. HT replied that the low-level part of code generation (assembly) was fairly easy, although a bit long and very boring (a few hundred lines), whereas the higher-level parts of code generation are difficult to share in general, and preferably written in Pliant anyway, so that little could be shared with other groups. However, he discussed the tradeoffs concerning the use of external code generators, taking them as black boxes instead of as open-coded collaboration projects. It is already possible to have Pliant dump C code that is piped into GCC and dynamically linked back into the Pliant runtime; this way, Pliant can take advantage of GCC's low-level optimizer; however, as far as low-level optimization goes, even C is limited by its clumsy calling conventions. [Note from FRR: GCC accepts alternate parameter-passing conventions with the function attribute regparm(n)] [Note from HT: this is what Pliant uses, but that's still not optimal, because you need to carry extra information that cannot be deduced from the mere function prototype]
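
The pipeline described here is easy to demonstrate outside of Pliant; the following Python stand-in dumps a trivial C function, has GCC compile it into a shared object, and links it back into the running process. Pliant of course does this natively; the sketch only shows the mechanism, and the function and file names are invented.

    # Stand-in (Python, not Pliant) for the pipeline described above:
    # dump C source, have GCC compile it, and dynamically link it back in.
    import ctypes
    import os
    import subprocess
    import tempfile

    c_source = """
    int add(int a, int b) { return a + b; }
    """

    tmpdir = tempfile.mkdtemp()
    src = os.path.join(tmpdir, "gen.c")
    lib = os.path.join(tmpdir, "gen.so")
    with open(src, "w") as f:
        f.write(c_source)

    # Let GCC do the low-level optimization of the generated code.
    subprocess.run(["gcc", "-O2", "-shared", "-fPIC", src, "-o", lib], check=True)

    generated = ctypes.CDLL(lib)               # link back into this process
    generated.add.argtypes = (ctypes.c_int, ctypes.c_int)
    generated.add.restype = ctypes.c_int
    print(generated.add(2, 3))                 # -> 5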

Anyway, in traditional languages, compilers only optimize low-level aspects; that's also all an external code generator could bring. To HT, where Pliant shines is in its high-level code optimizations, which are also the most interesting ones. Examples of simple high-level optimizations that are impossible in static languages include precompilation of dynamic pages into HTML generators (hence no more runtime HTML generation, no more file copying, but direct memory mapping of HTML buffers), and preallocation of correctly-sized string buffers (hence no more dynamic allocation of a temporary string for each string concatenation operation). These can be done by walking the program at the right moment.
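
As an illustration of these two optimizations (the chunking scheme and all names are mine, not Pliant's): the page is split once into constant chunks and value slots, and the output buffer is allocated in one piece at exactly the right size, so serving the page involves neither per-request template parsing nor repeated string concatenation.

    # The constant chunks are computed once, "at compile time"; None marks a
    # value slot to be filled at request time.
    CHUNKS = ["<html><body><p>Hello, ", None, "! You have ", None,
              " new messages.</p></body></html>"]

    def render(name, count):
        values = iter([name, str(count)])
        pieces = [(chunk if chunk is not None else next(values)).encode("utf-8")
                  for chunk in CHUNKS]
        out = bytearray(sum(len(p) for p in pieces))   # single, right-sized buffer
        pos = 0
        for p in pieces:
            out[pos:pos + len(p)] = p
            pos += len(p)
        return bytes(out)

    print(render("POM", 3))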

I remarked that the whole interest of a reflective system is to take static invariants into account dynamically. Many people have shown how static compilation makes it possible to take advantage of static invariants to enable optimizations; dynamic compilation makes it possible to take advantage of static invariants that only appear dynamically.

By opening modules, etc., the programmer chooses which optimization tactics to use or not to use, as opposed to what happens in static systems, where a set of optimization tactics comes bundled with the compiler, mostly without any choice.

It might be a good idea to look for optimization algorithms in the academic literature and implement them in Pliant. One source of inspiration for high-level optimization is Stalin.

Pliant has no garbage collector (it uses reference counting), but the standard environment is currently able to free, by the time it finishes, every byte that was allocated.

HT said that when a function is declared as dynamically modifiable, it might still be inlined and cross-optimized instead of called through a proxy, but that the system will remember which other functions use it, so as to update them when the shared function is modified.
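
A sketch of that bookkeeping as I understood it (all names are mine, and the "compiled" artefacts are just strings here): when a modifiable function gets inlined into another, the dependency is recorded, so that redefining it triggers the rebuilding of its users.

    users_of = {}     # function name -> set of functions compiled against it
    compiled = {}     # function name -> the "compiled" artefact (a string here)

    def compile_function(name, uses=()):
        for dep in uses:
            users_of.setdefault(dep, set()).add(name)
        compiled[name] = "code of %s, inlining %s" % (name, list(uses))

    def redefine(name):
        compiled[name] = "new code of %s" % name
        # Everything that cross-optimized against the old definition is stale.
        for user in sorted(users_of.get(name, ())):
            print("recompiling", user, "because", name, "changed")
            compiled[user] = "code of %s, rebuilt" % user

    compile_function("area")
    compile_function("report", uses=("area",))
    redefine("area")    # -> recompiling report because area changed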

The Main Difficulty: Robustness

According to HT, there are essentially two difficult tasks in building a computer system: making the operating system robust even when heavily loaded, and making the highly optimizing code generator robust even when stressed with unusual input. The problem with code generator robustness is the correct handling of registers and spilling, with arbitrary numbers of variables, and more generally propagating properties in a way that is correct in all cases, even rare ones. Then, there is the algorithmic difficulty of achieving efficient results, while still preserving correctness, of course. The current Pliant code generator is seemingly and hopefully robust, but algorithmically very crude, and doesn't optimize much.

HT said that in a kernel like Linux, a local bug can stop the whole system. In Pliant, the core of the system is the language. HT is committed to maintaining strong coherence within the language, whereas applications need not undergo the same scrutiny. I said that in Linux, a change in one optimization technique implies global changes all around the kernel, whereas in Pliant, metaprogramming allows a better partitioning that ensures the locality of changes in optimization tactics. HT replied that there was nonetheless an intrinsic complexity in maintaining the core system; but I insisted that metaprogramming precisely allows complexity to be reduced to its intrinsic size, without the gratuitous complexity that static compilation adds on top of it.

HT was queried about the security model for Pliant, and replied that Pliant inherits the security model of the operating system on top of which it runs (Windows or Linux). Because you cannot fully trust Pliant applications at the moment (although every application can and will do its best to implement whatever access policy fits it), it is recommended to run one distinct Pliant server as a corresponding user for each distinct set of access rights that Pliant must respect. I said that this was no better or worse than other infosystems that run atop Linux (including the way web servers manage CGI).

I raised the problem of failures in Pliant processes. HT said that failure was the easy problem, since he has a wrapper, running forever, that automatically relaunches a server when it dies; a more difficult problem is deadlocks. At Heliosam, he has Pliant clients that attempt to detect deadlocks by periodically connecting to servers; if a server fails to respond, it is killed and relaunched by the wrapper. His experience at Heliosam taught HT to think a lot about reliability; since Pliant currently cannot achieve the high reliability of the Linux kernel by the same process of massive scrutiny, HT systematically uses conservative fault recovery techniques to keep Pliant servers running.
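
A minimal sketch of such a probe (the host, port, timings and the way the server's pid is obtained are placeholders; the actual Heliosam clients surely differ): connect periodically, and if the server does not answer within a timeout, kill it and let the wrapper bring it back.

    import os
    import signal
    import socket
    import time

    HOST, PORT = "localhost", 8080   # placeholder target
    PROBE_PERIOD = 60                # seconds between probes
    PROBE_TIMEOUT = 10               # how long a healthy server may take to answer

    def server_answers():
        try:
            with socket.create_connection((HOST, PORT), timeout=PROBE_TIMEOUT) as s:
                s.sendall(b"HEAD / HTTP/1.0\r\n\r\n")
                return bool(s.recv(1))       # any byte back counts as alive
        except OSError:
            return False

    def probe_forever(server_pid):
        while True:
            time.sleep(PROBE_PERIOD)
            if not server_answers():
                # Assume a deadlock: kill the server, the wrapper restarts it.
                os.kill(server_pid, signal.SIGKILL)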

The Evolution of Pliant: Guiding Principles

HT said that the main hypothesis behind Pliant was that he wouldn't try to make a good language, because however hard he tried, the language would suck. Rather, HT would try to make a good, robust code generator that would allow effective code generation for the bad language. Now, the code generator would itself have to do computations; it would require a language of its own, and a way to build modules (a classical environment would use make and Makefiles, but HT dislikes them); of course, that would be the same language, so as not to make things more complex. Modules would need a way to interface with their environment; instead of building an argument parser into every module, make the system reflective and share metacode between the core and the modules. Hence, the reflective architecture came into being naturally, by following the original hypothesis rather than by deliberate choice.

The key to the reflective system is the symbolic rewriting of expressions as semantic graphs (as in LISP) rather than as character strings. But not just rewriting within one abstraction framework: rewriting from the source language down to actual executable code. Pliant metacode is responsible for emitting instructions; recursively calling the Pliant compiler with a rewritten high-level graph is an option (compile_as), and yields the equivalent of LISP macros, but it's not the only option: you can have arbitrarily many intermediate graphs and languages within the same framework. The lower levels of the machine are explicitly accessible.
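
As a very rough illustration of multi-level rewriting (the node shapes, the 'while' macro and the instruction-like leaves are invented; this is not how Pliant's compiler is organized internally), here is a sketch in which a macro rewrites a high-level node of the graph into lower-level nodes, and the compiler is called back recursively until only leaves remain.

    # Toy multi-level rewriting: tuples stand in for semantic graph nodes.
    def expand_while(node):
        _, cond, body = node
        # Label names would need to be made unique in a real compiler.
        return (("label", "top"),
                ("branch_if_not", cond, "end"),
                body,
                ("jump", "top"),
                ("label", "end"))

    MACROS = {"while": expand_while}

    def compile_node(node, out):
        if isinstance(node, tuple) and node and node[0] in MACROS:
            for lowered in MACROS[node[0]](node):
                compile_node(lowered, out)     # recurse on the rewritten graph
        elif isinstance(node, tuple) and node and node[0] == "seq":
            for child in node[1:]:
                compile_node(child, out)
        else:
            out.append(node)                   # instruction-like leaf

    code = []
    compile_node(("while", ("lt", "i", 10), ("seq", ("incr", "i"))), code)
    print(code)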

POM said that ideally, one could view and manipulate expressions as graphical trees. HT replied that the difficulty resides in the fact that rewriting operates on graphs, not trees, and that there is difficulty in representing shared nodes. POM insisted that the CAMS was precisely developing PIGALE, its graph manipulation library, currently in C++ and to be ported to Pliant, with possibly a loss in low-level efficiency, but certainly a huge gain in high-level flexibility. To him, the runtime datastructures must stay as trivial as possible, while the compile-time metastructure must provide the user with sophistication and high-level optimizations. HT said that this was a good idea, but that there would nonetheless be problems in representing graphs as either HTML or bitmap graphics. I said that at worst, you could label shared nodes as LISP does when *print-circle* is t.
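
The labelling trick is easy to illustrate (a toy Python sketch, with lists standing in for graph nodes): only nodes reached more than once get a number, printed as "#n=" on first encounter and as a "#n#" back-reference afterwards, roughly what LISP does when *print-circle* is t.

    def show(node):
        # First pass: count how many times each inner node is reached.
        counts = {}
        def count(n):
            if isinstance(n, list):
                counts[id(n)] = counts.get(id(n), 0) + 1
                if counts[id(n)] == 1:
                    for child in n:
                        count(child)
        count(node)

        labels = {}      # shared nodes get a number, in order of discovery
        printed = set()
        def emit(n):
            if not isinstance(n, list):
                return str(n)                       # a leaf
            if id(n) in printed:
                return "#%d#" % labels[id(n)]       # back-reference
            printed.add(id(n))
            if counts[id(n)] > 1:
                labels[id(n)] = len(labels) + 1
            body = " ".join(emit(child) for child in n)
            if id(n) in labels:
                return "#%d=(%s)" % (labels[id(n)], body)
            return "(%s)" % body
        return emit(node)

    shared = ["x", "y"]
    print(show([shared, shared, "z"]))    # -> (#1=(x y) #1# z)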

HT said that applications are written in a fundamentally different style in Pliant than in other languages. The rewrite system is used to generate a lot of things that were previously done manually. The Pliant language can be specialized into business-specific personalities [domain-specific languages, I'd say]. With such a programming style, users (who are technically proficient in their specific domain, but not proficient at programming) have even come to prefer programming directly in customized Pliant with a text editor to using a mouse-driven GUI. MM remarked that the fact that regular users prefer this programming style doesn't preclude the need for a GUI for beginners or occasional users.

Interfacing existing big applications is good, but it doesn't take advantage of Pliant's strength in developing simple and compact code. Instead of trying to integrate too much, HT wishes to demonstrate how to build small and efficient software with Pliant, and attract developers this way.

HT concluded that the main goal for Pliant's evolution was not to become a better language, but to develop documentation and applications. He wants Pliant to escape the proprietary software dilemma between unadapted and unadaptable integrated software on one side and superexpensive custom solutions on the other, yet he also wants to overcome the barrier to entry of most free software. POM remarked that the success of Pascal was due to Turbo Pascal (and before it, UCSD Pascal, I added), a beginner-friendly environment. HT said that you can't transform a web interface into an interactive editor overnight. Maybe in the future Javascript can, or who knows what Mozilla will bring in 2001. I proposed integration with Emacs, but HT was scared by the bloat. HT said that since Pliant does not aim at being a good language, it strives to stay a small language.

POM talked about his vision of modularity with Pliant. He opposed it to the crude #include mechanism of C. In C, module interfacing through source inclusion means that even a simple printf statement implies thousands of lines of C code being recursively included in the source. This implies complexity and slowness in the compilation process; a half-solution for speed is to precompile headers, but it doesn't remove the complexity. In Pliant, the module system allows for separate one-time compilation, with the system directly using a module's external part (interface) when compiling other modules that use it, while the internal part (implementation) stays nicely isolated. Up to now, that's classic modularity. What POM envisions is that Pliant would allow adding new code to already opened modules (for instance, defining a macro within a spreadsheet), and having the modification be persistently shared by the applications using the module. The persistence constraint means that we must be able to reconstruct the code in a new session, i.e. to rebuild it from source, even though that means tracking enough of a session's input to obtain a valid source for said code (since the meaning of code depends on metacode that has sequential dependencies with other metacode). If such a system can be achieved, the result would be "living applications" that dynamically adapt to the user's customizations, without ever having to reboot/restart the system/application to get new behavior, yet with the new behavior being persistent.

The Pliant Development Model: Free Software

HT said that he has contacts with potential Pliant users who wish to use Pliant to make and distribute binary-only proprietary applications whose source is unavailable. However, Pliant is intrinsically dynamic, and all the current standard services (including the HTTP and SMTP servers, etc.) do make use of dynamic code generation from source. Although theoretically possible, such binary-only software would be costly to make, and would result in very un-Pliant-like things that do not take advantage of Pliant.

I remarked that I had analyzed the phenomenon in a paper last year (before Pliant was even published), and that a reflective system necessarily makes source easy to retrieve, for its very adaptive dynamic behavior depends on the availability of source. Reflective software makes no sense outside of free software, and this is why it did not happen before. [Note from HT: That's what I failed to express clearly on the Pliant forum. I'll try to remember your sentence, and the pointer :-) ]

Another big problem with proprietary software is that it enforces an eager partitioning of tasks between software developers. Because each hoarding entity cannot let others read and modify its code, they are forced to define the contours of their activity beforehand, in a non-negotiable way. Proprietary modules cannot interact based on mutual inspection. Free software allows the software to be dynamically partitioned into modules according to grown architectural principles as well as to negotiated developer responsibilities.

At dinner, HT recollected the initial difficulty he had understanding free software in general and the GNU GPL in particular. [NB: before ASPIC'2000, HT still didn't clearly distinguish between free software and the public domain. He has no excuse for it anymore <evil grin>] He told how he had had a hard time grokking the paradox by which it is the very right to unlimitedly copy, modify and redistribute code that prevents the splitting of the community into numerous rival code bases! Just because people have the right to do it with free software, they won't. Just because people do not have the right to do it with proprietary software, they will build mutually incompatible code bases from scratch (or from the nearest BSD-licensed free software).

HT also explained that one of the reasons that convinced him to go the free software way was that many of his e-mails to university researchers had gone unanswered, whereas Richard Stallman, who is a busy guy, replied to his mail immediately and had a fruitful e-discussion with him (although concluding that he preferred LISP): the spirit of free software creates the community of collaboration that he had initially expected to find (but didn't) in the scientific community.

JJ remarked that basing services on GNU GPL'ed software allows for a quicker turnaround on bug fixing than with proprietary software controlled by unresponsive providers.

YG has little experience with Pliant, but a lot in software services; he remarked that the industrial success of Pliant will not reside (directly) in the intrinsic features of the language, but rather in the ability of its applications to embrace and extend existing services. HT replied that the Pliant team is aware of this issue, but that there is no easy solution. Rather, interfaces to existing protocols will have to be developed depending on demand.

HT added that the choice of protocols to implement is not to be taken lightly, either; for instance, while implementing MS ODBC might allow interfacing with many legacy databases, it is very complex, poorly performing, and quickly reaches its limitations (although known ones). Thus, the choice of such a buggy protocol would yield bad systems, and as a result, the reputation of Pliant would suffer rather than shine. In the case of interfacing with existing databases, a native interface to MySQL looks like a better idea. Generic interfaces to SQL databases aren't likely to be efficient, since it is important to have unique identifiers for records, which the relational model doesn't guarantee, and which often are not implemented in a robustly correct way when they do exist.

YG insisted that even if a protocol is so bad it shouldn't be used, providing an interface for it might be a requirement. HT replied that there was no specific technical obstacle to doing it with Pliant, and that anyone could do it if needed. I added that with free software, a lacking feature is not the doom of customers, as it is with proprietary software, but an opportunity for companies to provide development services.

HT said that the main difficulty with developing foreign function interfaces is dealing with the way different compilers pad structures in incompatible ways. Even if technologies like SWIG or CORBA are used, such interfaces are costly to maintain, and each introduces some complexity into the system; they will mostly have to be done by contributors, not by the core team, anyway. I added that, again, free software means this opens the opportunity of a market for companies selling adaptation, distribution, integration of services, etc. For instance, the core team will implement the MySQL interface, and since the source will be available, someone else can adapt it to other databases. [Pliant is able to reuse the C interfaces of other applications, by delegating the reading of C headers to GCC.]

HT recalled that the free software model isn't about one-size-fits-all cash-and-carry software, but about software that can be easily and quickly tailored to the needs of the customer. He recalled how with proprietary software, not only is it impossible to fix bugs, but the support service will often pretend that they aren't bugs and refuse to help, because the marketing service says that there aren't any bugs.

YG asked what to do if a piece of free software couldn't provide a useful feature in time. HT replied that the danger of loss of revenue when deploying software wasn't in the features you knew weren't there and didn't rely upon; to HT, the danger resided in unexpected bugs, and the ability to quickly identify and fix unexpected bugs was precisely where free software wins big against proprietary software.

MM said that the project needed a director to coordinate developments; HT replied that although his vocation is to start projects, it isn't to develop and maintain them all, and that the role of the core team is to synchronize code and manage releases, while the development and maintenance of modules can be done by contributors, as can the integration and marketing of software. I said that, once again, free software allows the distribution of human responsibilities to adapt dynamically to the evolution of the project, instead of being cast into a static eager partitioning.

The Challenge of Pliant: Extending the Community

Pliant has been freely available online for a bit more than one year. The last month (since Pliant was featured in the Brave GNU World e-magazine) has seen as many downloads of Pliant as the whole preceding year.

Documentation: One goal is to provide users with a gradual learning curve into the system, such that they are able to make small modifications to big programs without having to grok the whole program. This means that the system must be split into meaningful modules, and that each module must be properly documented. Every brick of the system must be properly documented. POM suggested clearly splitting the documentation into several parts, corresponding to the programming language, the beginner frontend, and application-specific documentation. POM warned that with the current documentation, many people leave the Pliant site with wrong ideas about what Pliant is or isn't (like "some kind of Python with a native code compiler"). The documentation should make it clear to programmers what the technical contributions of the language are, and clear to managers what the main ideas behind Pliant are. HT said the Pliant site will be split into parts, each of which will display an individual status. MM proposed naming a responsible developer for each part, but HT said that he can't and won't force anyone into developing Pliant; MM then proposed coordinators instead, but I said only correspondents could be named, for a coordinator is meant to be active, while a correspondent is only meant to be present. [If this were a commercially funded project, all that would be different]

MM said that when the French national research center, the CNRS, met with industry, the latter complained that despite the technical quality of the programs developed by the former, they lacked marketable versions, and that French companies are laughed at when they sell software abroad that doesn't even have a decent French version. MM said that the internationalization of software was a great concern. HT replied that programmers could bootstrap such documentation, but couldn't maintain it, and that there was a need for contributions by non-programmers; I said that a correct documentation in one language was more urgent than many obsolete documentations in multiple languages. MM insisted that internationalization should be displayed as a definite long-term goal. HT wondered how to raise the interest of non-programmers in a computer project. I said that there already exists a project of writers helping with free software documentation.

HT regretted that there wasn't more academic interest in Pliant, since its dynamic compilation architecture offers so many opportunities for experimentation in academic research, thanks to Pliant's control over generated code, which allows for the automation of experiments, statistical measurements that depend on the optimization techniques used, dynamic adaptation of code, etc. MM added that academic and industrial people doing physical modelling (e.g. thermal equilibrium when injecting thermoplastic material into a mould) had a use for a tool that would allow them to easily update their models without recompiling everything, yet would choose the applicable optimization tactics depending on the properties of the model. More generally, a potential market for Pliant is dynamic compilation in computer algebra software: Pliant technology can be used to dynamically compile code that takes advantage of the local properties of the computations at hand, instead of using a slow generic interpreter for all computations.

I raised the bootstrap problem concerning academic collaboration, as with any collaboration: as long as Pliant is not well-known, no one wants to experiment with it, and as long as no one experiments with it, it won't get well-known. POM concluded that this was a marketing problem, parallel to the technical problems of advancing the technology.

One way to market Pliant will be to organize the next conference in a prestigious place (see next section). Another will be to push the acceptance of Pliant within the industry.

MM suggested introducing Pliant in engineering schools such as the École Supérieure des Matériaux du Mans, or the Institut des Sciences de l'Ingénieur en T* et en Matériaux.

HT said that whatever services are chosen, there will always be some arbitrary choices and there will always be discontent; some will find the system too centralized, others not centralized enough. After 10 years of experience at Heliosam, HT thinks that part of the life cycle of any software is to grow so complex that the best solution is to stop developing it and start again from scratch, whereas some people will want to keep the old software running and thus split the community. I said that this wasn't specific to Pliant.

Announcements

Pliant 34 was announced (and has been released since).

The Pliant forum will move from the EHESS to Heliosam as the latter gets a permanent connection to the Internet. This will make it possible to move the forum to a standard mailing list accessible with standard tools, whereas the firewall at the EHESS previously prevented having a Pliant-managed mailing list.

ASPIC'2001 will hopefully take place on Monday 2001-04-02 at the Carré des Sciences in Paris, within the walls of the former Ecole Polytechnique. We hope that by then, interest in Pliant will have grown, and that we will be able to raise funds to invite foreign developers.

We wish to thank the CAMS for hosting ASPIC'2000.