Digital Equipment User Society presents this recording from the 1990 Fall Symposium, December 10th through 14th, in Las Vegas, Nevada. The following is for informational purposes only and is subject to change without notice. Neither DECUS nor its authors assume any responsibility for the material, its use, or its applications. By way of introducing what we do and why we feel we're qualified to speak here: for the last three and a half years, Software Translation has been involved in moving VMS software off of the VAX and VMS onto Unix. Now, we've decided to approach this by writing effectively all the LIB$, SYS$, SMG$ and CLI$ calls and system services that you're so used to using under VMS and implementing them under the Unix operating system. Come on, there we go. Right, now the original slideshow was really talking about Unix and why it's great and all that type of thing, so I've decided to change it a little bit. We're going to quickly run through the obvious parts of Unix and then really get down to how we emulate the SYS$ and LIB$ calls under Unix, what the in-depth internals of doing such a thing are, and what the problems are. For instance, how do you get RMS code running under Unix without having to recode all your applications? As most of you know, for VAX software written to use RMS, the structure of it is really determined by the RMS system calls. And as Unix doesn't give this to you, a straight translation into using something like C-ISAM, which is an industry standard package, really doesn't work. We'll go into the details of this a bit later on. I think this slide's pretty obvious. VMS is a very rich operating system. It does a lot for you. It's a great operating system, and everyone I've spoken to doesn't really want to move off of it. So the question really is: why the hell does anyone want to go to Unix? And it's a good question. Really, from our point of view, it all comes down to one of cost.
In other words, you can pick up a 386 box running at 33 megahertz with a 200 meg disk, and with our software you could probably put about 10 to 11 users on that, for around four or five, maybe $6,000. Now, I ask you what the equivalent price of a VMS machine is. So ultimately, and I think everyone would agree, it comes down to price. Unix isn't a great operating system; it just happens to be handy and cheap. So the best discussion really is going to be: how do we move it across? What are our obstacles? And how do we get over them? I'm sorry if I keep reading these things, but I'm really seeing them for the first time as well. This really just sums up what I said before. VMS: great operating system, plenty of utilities, all that type of stuff. And Unix — this is all the typical Unix sales stuff you hear so often. The only really interesting points there, and I've been working with Unix for about 15 years, are, one, the technology independence: the fact that if you do have software running under Unix, you can pretty much guarantee it's going to be future-proof. Whatever they do with the chip architectures — you've often seen the 286s go to 386s to 486s, and they're talking about the 586 now — the processor architecture is becoming incredibly powerful, but the nice thing about Unix is that the operating system stuff stays the same. And really, if you look at this, it's designed for ISVs in a way, people that sell their software under VMS, but theoretically this could also be applied to people that do want to move large amounts of software to Unix. And we're looking at two things here: one, why would we translate and emulate, and two, why would we effectively rewrite our software? First, the four points for the method that we propose, i.e. you effectively just purchase a bunch of SYS$ and LIB$ subroutines, link them in and run.
One is obviously you have one code base, and this seems to be of primary importance to most people. The way we propose, you only maintain on the VAX, and effectively when you want a Unix offering, you simply move it across overnight, run it through a black box, and at the end of it out pops your software running under Unix with no changes whatsoever. Your code is still configured to run with VMS; it just happens now to run under Unix. Your software doesn't know the difference. Quick to market, that's obviously very important. The whole process can be done in a matter of months. The flat learning curve — well, I'm not quite sure I totally agree with that. You do have to get into Unix somewhere along the line. And cost-effective — I'm sure some of the people in the audience who know us may not agree with that, but we believe it's very cost-effective. Against: well, yeah, there's the overhead. And there's no doubt that in doing this, whenever you move software off the VAX, you have to emulate the RMS libraries, the SYS$ libraries, the LIB$s, SOR$, SMG$, CLI$, the whole bunch of them. And in most versions of Unix, all except Unix 5.4, this has to be included as part of the executable. In other words, you don't have the ability like you do on the VAX to have a runtime linkable library. Unix 5.4 does, but currently that and AIX are the only two that do. Runtime dependencies — well, of course, you would be very dependent on the runtime supplied to you, and then, of course, there's the obvious question of royalties and licenses. And conversely, what are we looking at with rewriting? There's the performance aspect. And again, I wouldn't disagree with anyone that says if you took your code, whether it be written in Basic, C, or Fortran, and rewrote around all the VMS operating system libraries, it wouldn't perform better. I suspect that our way of doing things probably degrades the system by about 40%.
So in other words, where you could probably run around six users on a typical system our way, if you've rewritten it from scratch, you could probably run around 10. But of course, the time involved in that is quite significant. The languages that are available under Unix: C, obviously. And if anyone's thinking of moving to Unix, and if you are considering rewriting, C is the only one you should really be thinking about. Very portable, very powerful. Basic: there are no VAX Basics under Unix. I don't know how many of you here are programming in VAX Basic — I know at least one person is. Our approach to moving VAX Basic across was we wrote what we call a transpiler. Effectively, we have another black box: you put in VAX Basic and out squirts C. The transpilation rate, or the translation rate, is pretty much 100%. What goes in comes out in C and runs exactly the same way under Unix as it would do under VMS. I don't want to get too much into that now, as we are doing other sessions on moving VAX Basic to C, both on VMS and under Unix. There are a number of different COBOLs available that will enable you to move across. Pascal, Fortran. Interestingly enough, there's even a DIBOL to C translator, which I've heard is extremely good, that will enable you to get your DIBOL code across. And 4GLs generally you don't have too many problems with; 4GLs are pretty standard. The other one we've marked down in the corner there is DCL, which can cause a lot of problems. There are DCL emulators available for Unix. Some of them are very good and some of them are quite average. But you do have a lot of choice there. You can certainly shop around and find one that suits you. Now we're looking at really the whole thing.
If we're taking a Basic program — and this slide is really up there because it fits into some of our other talks — if we look at a typical VMS application, and we've made it more difficult by assuming it's Basic, the one language that will not translate across: under the VAX you have the Basic compiler running with the Basic runtime system. Of course that's sitting on top of VMS, which is supplying the STR$ routines, find-file, and all manner of other things. And of course underneath it all you have the VAX. The way we do things in this particular application is that we would take the VAX Basic and turn it into C. We would sit that on top of our VMS emulator for Unix, and of course that then sits on the Unix operating system. The slide you saw first, if you did actually manage to read it, said Unix is great at doing absolutely nothing for you. And that really does sum it up. The good thing about Unix is it gives you very little, but what it does give you is at such a low level that you can implement pretty much anything you want on top of it. There's pretty much nothing on the VAX that cannot be implemented under Unix — we'll come to the exceptions a bit later on. The only things that prove difficult, or at least inefficient, are the asynchronous I/O operations: QIOs, mailboxes, that type of thing. If you're not expecting to do asynchronous I/O, then you will not have a problem. And again, this really just sums up the two routes. You have the application running under VMS at the top there, and you have two choices. You either rewrite your application and run it under native Unix, or, as you can see in the other route, which we call the emulated route, the application sits on top of a VMS emulator running on top of Unix. So assume that we've got over the problem of what to do with our languages — and again I skipped over that pretty quickly, because this session really is more about the operating system features.
And we have managed to get our code into object code format — or, as we call them under Unix, .o files. What do we do with it? As you can see from the slides, the way we've arranged things is we link in the SYS$ routines there, and you can see a good example of what we've implemented: mailboxes, QIO, et cetera, et cetera. I'm not saying we've implemented them all fully. We've really done the 80-20 of it, and we've implemented most of what people need. And on the other hand, we've implemented the LIB$s, LBR$s, et cetera, et cetera. And this really is a schematic of our system, the way it works. It's no real different from VMS, except we have a Unix kernel. We've kind of incorporated the STR$ routines, RMS, and the SYS$ routines in the inner level — logically, that's where we thought they should be placed — and then our other routines, such as SMG$, the MTH$ and OTS$ stuff, on the outside. Really, now we're going to get more detailed. This is where it's really going to start going to free format. Nothing's really prepared. I'll be getting down there next to that overhead, and we'll go through some of the more difficult functions. In other words, what do you do with your RMS should you not want to go our route? How would you get your RMS running under Unix? I'll explain how we did it and show you some of the problems involved and how to get over them. At the end of this session, you should have a good idea how to take any of your SYS$ and LIB$ calls and get them running yourselves under Unix — which I'm not quite sure is a good idea for us, but that's the way we'll proceed. Okay, I'll just go down there now. Feel free to move if you don't have a good view of the screen from where you're sitting. Can everyone hear me? No, you have to hold that up. Can you hear me now? Okay. We'll start with RMS, as this really seems to be the really difficult one out of all of them.
When we looked at it, we wanted to find a way of making RMS work under Unix and yet still give you the capability of linking 4GLs, databases, and that type of thing into your applications. In other words, we had to implement it on top of what we hoped would be an industry standard package. Now, currently in the Unix world, there is only one industry standard ISAM package, and that is C-ISAM, from a company called Relational Database Systems. It used to be the kernel for Informix, which some of you may have heard of. So we decided to use that to start with, but it became apparent that C-ISAM in its native form simply would not let you do what you want to do with RMS. The record locking is totally incompatible. A lot of the file modes are totally incompatible. Get by RFA really doesn't work, or isn't really feasible to implement. So we took another package that's freely available on the market called D-ISAM. Now, this was available from a company in Canada called Byte Designs, and they actually sold the source code of this package for about $600, which made a lot of sense to us. So we got that, and then we had to look at how to implement RMS on top of D-ISAM. Now, we had a number of problems. One of them, of course, was the record formats. Under VMS, you're very used to opening up a file and letting VMS fill in the details for you. Is it fixed length, variable length, stream LF, stream CR, et cetera, et cetera? Unix doesn't let you do this. Unix gives you a flat file system. All you have really is a byte offset into a file and the ability to read a number of bytes. And that effectively goes for locking as well: you have a byte offset, and you can lock a number of bytes. So the problem was, how do we implement even the standard RMS features — sequential files, fixed length files — on top of Unix? Well, we actually decided to create a file header.
Now, this file header, as it happens, pretty much emulates the format of the XABFHC that you're also used to under RMS. We effectively write that to the first block. We pad it out to 512 bytes so that Unix can do at least reasonable disk I/O with it, rather than having each record cross block boundaries. Of course, in the XABFHC, we have the type, the record length, all that type of thing. So we started with this, and we then had to look at the different record formats and how to go about implementing them. Probably the easier ones are the variable length formats, where we just implemented — and it's something that's pretty obvious, really — the size of the record followed by the actual data. For fixed length records, of course, one can get away with simply writing the record: you know the byte offset, so you can read it quite easily. Relative files pose more of a problem because, of course, records can be deleted, filled, or empty. So, again, the simplest way around that, and the way we used, was to write a header byte at the beginning of each relative record — I'm still talking about a flat file system at this point, not index files yet — where FE, for instance, meant filled, FF meant deleted, and zero meant empty, and the data followed. That gave us pretty much all you need from RMS when it comes to flat files. We could certainly open a file in any format, read it in the correct format, and get all the information that we needed out of it. The real tricky bit came with indexed and keyed files. Unfortunately, most of the packages under Unix are going to assume a fixed record length for index sequential files. So, if you want variable length, the only way to do it is to assume a maximum record size and write the count at the beginning of the record.
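The count-prefixed variable-length record scheme just described can be sketched in a few lines of C. This is purely illustrative — the two-byte little-endian length prefix and the function names are our assumptions, not the actual on-disk format the product uses:

```c
/* Sketch of a count-prefixed variable-length record format: each record
   is stored as a 2-byte length followed by the data. Names and the
   little-endian prefix are illustrative assumptions. */
#include <string.h>

/* Write one record into a flat buffer at offset `off`; returns the
   offset just past the record (where the next one would start). */
long put_var_record(unsigned char *buf, long off,
                    const unsigned char *data, unsigned short len)
{
    buf[off]     = (unsigned char)(len & 0xff);        /* low byte  */
    buf[off + 1] = (unsigned char)((len >> 8) & 0xff); /* high byte */
    memcpy(buf + off + 2, data, len);
    return off + 2 + len;
}

/* Read a record back: recover the length from the prefix, copy the
   data out, and return the length. */
unsigned short get_var_record(const unsigned char *buf, long off,
                              unsigned char *out)
{
    unsigned short len = (unsigned short)(buf[off] | (buf[off + 1] << 8));
    memcpy(out, buf + off + 2, len);
    return len;
}
```

Under an ISAM package that insists on fixed-length records, the same prefix goes at the front of a maximum-sized slot, which is exactly the space trade-off mentioned below.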
This may not be the most efficient way, but I'm afraid it's really the only way if you're going to remain consistent with other ISAM packages. That's pretty much what we did. We implemented two types of index files, variable and fixed. For the variable length, we simply wrote a header on the front of the record saying how many bytes were in it, and read it back. Now, one of the big problems that we then came across was, funnily enough, RFAs. This actually crept back into the whole philosophy of DEC RMS and, effectively, why they did things the way they did. I'd always wondered why DEC made you reorganize a file. It didn't seem, in today's world, that there would really be any need to. But when you start getting into RFA processing, you quickly realize that if they used standard node accessing in their index databases, then you do have to keep a primary key hanging around so you don't lose current position. This causes a lot of problems when it comes to Unix. The first thing we had to get around, really, was: what do we do with RFAs? Under the C-ISAM system, or D-ISAM, it wasn't a problem — we could get back the record number. But, and this is the big but, you suddenly lose current position. Many of you are used to doing a get by RFA and then a get next, or even a get by key. Doing it this way, the actual ISAM package lost its key information when you did a get by record number. So we had to do some modification internally to enable us to save the current key information in a context block, effectively. Once we did that, things got a little bit easier — and it took about a year to get this far. Then we had other minor little problems. I really just want to go into some of the trickier things; take it as read that most of the things you want to do are simply implemented. Now, I'm not going to assume record locking at this time. We'll go into that later.
That's a whole bag of worms which takes a long time to sort out, and it really is not compatible. You're going to have to put a lot of thought into record locking. Other minor problems: things like get next and delete. Again, because C-ISAM or D-ISAM is using a balanced B+ tree to do all its indexing, the get next is no problem, but the delete deletes the current node. It then reshuffles and leaves you without the current record position. The next get next effectively leaves you in the middle of nowhere. That has to be sorted out. You have to rewrite your code, or do as we did and get into the ISAM package and effectively do exactly what DEC do: not really delete the record, but mark it as deleted and not return the space to the free list. All of a sudden we found ourselves getting pretty much like DEC. Where we were swearing at them before and saying, why are they so crazy? Why do they have to do reorganizing? Suddenly we find ourselves in that exact same position. It's unfortunate, but it's a way of life. We want to do 100% emulation of RMS, which is vital if you consider that we've translated around 2 million lines of Basic, some 20 major applications, and I guess around half a million lines of C code, something like that. You can't really afford to be different from RMS. Whatever RMS does, we have to do as well. Otherwise we're going to end up recoding every program, which is not something we could feasibly do. So we now found ourselves in a position where you have to reorganize if you do want this particular feature. We left it as an option: if you don't do a get next after delete, if you don't need to reorganize or use features that cause us to reorganize, then you can use the balanced node handling of the actual ISAM package. I'm sorry I'm not going to write these down, but I wouldn't be able to finish in time if I wrote them as well. I'll be happy to go into more detail at the end of this.
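The "mark it deleted, don't free it" approach can be sketched using the same header-byte convention as the relative-file format described earlier (FE filled, FF deleted, zero empty). The function names and the flat array of fixed-size slots are our illustration; the real package does this inside its B+ tree nodes:

```c
/* Sketch of tombstone deletion over fixed-size record slots: delete
   marks the slot rather than freeing it, so a get-next from the deleted
   record still finds the right successor. Illustrative names only. */
#define REC_FILLED  0xFE
#define REC_DELETED 0xFF
#define REC_EMPTY   0x00

/* Delete record `n` without disturbing its neighbours' positions. */
void mark_deleted(unsigned char *slots, int slot_size, int n)
{
    slots[n * slot_size] = REC_DELETED;  /* tombstone; data left in place */
}

/* Get-next skips tombstones and empties; returns slot index or -1. */
int get_next(const unsigned char *slots, int slot_size, int nslots, int cur)
{
    int n;
    for (n = cur + 1; n < nslots; n++)
        if (slots[n * slot_size] == REC_FILLED)
            return n;
    return -1;
}
```

The cost, exactly as with RMS, is that deleted space is never reclaimed until the file is reorganized.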
If anyone does want any of these notes, we'll be happy to send them. So we have a trade-off: either RMS-style behaviour, where you have to reorganize, or something like C-ISAM or D-ISAM, an industry standard — use it as it is and you get no current position, which has ramifications for a lot of code. Or take the other option, use D-ISAM our way, and you have to reorganize, which means, of course, you've got the same painful exercise you do on the DEC. That was RMS, skipped over really quickly. We're now going to move on to some other points that make it difficult and really quite painful. This one really is ridiculous, but it can cause the most problems: file names. Most of you know that DEC have brought out the RISC-based machines based on the MIPS processors. They're based on BSD 4.3, which gives you 255-character file names. That really isn't a problem. On the other hand, DEC have now brought out the DEC 333 and the 486 machines. They're all based on SCO Unix, and SCO Unix, 5.3 as it currently is, only enables you to have 14-character file names. Add on to this that when you're using one of these standard C-ISAM packages, it's going to throw a .IDX onto the end as well. That brings you down to 11. Unfortunately — and we experimented with a number of ways of getting around this, file translation tables and so on — it really didn't work. The only real way to do it is to rewrite your code: get rid of the large file names, or at least turn them into logical names, where they can then be mapped to shorter file names on the SCO systems. I don't know when SCO are going to bring out 5.4 Unix. I hope it's soon. I'll go to the transparency now. If we look at typical file name mapping, if we take a typical VAX file name, one of the first problems we come across is the disk. Again, this all may sound pretty trivial, but it does cause real problems when you're trying to get your software working under Unix with a minimum of hassle.
The first thing you have to do, effectively, is create yourself a device mapping table. Originally, we started off having this in a file that we read in at the beginning of each process. That just turned out to be too inefficient. We traded off and eventually put it into shared memory. Unix does give you shared memory capabilities. It's very, very primitive: effectively, you give it a numeric key and it gives you back an address. You give it a length and it gives you a segment of the size you request, and then you start using it. We created a table like this. This table is reasonably important because it's designed also to handle SYS$ASSIGN and SYS$DASSGN, which, of course, a lot of you are using. The first thing in the table, obviously, would be the VAX device name, then the name it's going to map to under Unix — in this case, we can call it /disk1. Other information that you'll want in the table: things like the device type, the block size, other things you may want to emulate, things like the process ID. And, more importantly, we added in the number of links, so that you can do as many assigns as you wish, and as many deassigns, and eventually totally deassign any device. And we're not saying this table here is specifically for disks; it can be for mailboxes — effectively any device on the VAX. We added in as well some of the FAB$L_DEV stuff, which made it easy for us to pass back the device types to the programs. Once you have this table set up, mapping becomes quite easy: DUA0 simply maps to /disk1. And as most of you can see, the directory path name under VMS is pretty similar to Unix and doesn't present too much of a problem, and of course the file name. That's one way, of course. Now don't forget you're going to have to go back the other way as well. When you're using things like LIB$FIND_FILE, it's no good passing in a VAX specification and getting back a Unix path name.
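The shared-memory device table just described might look like the sketch below. System V shared memory really is that primitive: `shmget()` takes a numeric key and a size, `shmat()` gives you back an address. The key value, the entry layout, and the function names here are our illustration, not the product's actual structures:

```c
/* Sketch of a device mapping table kept in System V shared memory,
   as described above. DEV_TABLE_KEY and struct dev_entry are
   illustrative assumptions. */
#include <sys/types.h>
#include <sys/ipc.h>
#include <sys/shm.h>
#include <string.h>

#define DEV_TABLE_KEY 0x564d53   /* arbitrary numeric key */
#define MAX_DEVICES   32

struct dev_entry {
    char vms_name[16];   /* e.g. "DUA0"                         */
    char unix_path[64];  /* e.g. "/disk1"                       */
    int  dev_type;       /* disk, mailbox, terminal, ...        */
    int  block_size;
    int  ref_count;      /* outstanding $ASSIGNs on this device */
};

/* Attach (creating if necessary) the shared device table; 0 on error. */
struct dev_entry *attach_dev_table(void)
{
    int id = shmget(DEV_TABLE_KEY,
                    sizeof(struct dev_entry) * MAX_DEVICES,
                    IPC_CREAT | 0666);
    if (id < 0)
        return 0;
    void *addr = shmat(id, 0, 0);
    return addr == (void *)-1 ? 0 : (struct dev_entry *)addr;
}

/* Translate a VMS device name via the table; Unix path or 0. */
const char *map_device(const struct dev_entry *tab, int n, const char *vms)
{
    int i;
    for (i = 0; i < n; i++)
        if (strcmp(tab[i].vms_name, vms) == 0)
            return tab[i].unix_path;
    return 0;
}
```

With the `ref_count` field, $ASSIGN increments and $DASSGN decrements, and the entry is only retired when the count reaches zero — the "number of links" idea above.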
Most of you look for a semicolon and then try to chop off the version number so you know you've got to the end of the file name. You're not going to get that back under Unix. Also don't forget there are no version numbers under Unix. So don't forget the converse side of this: when you call LIB$FIND_FILE, you must also make the system translate back to a VAX file name and effectively stuff a version number on the end, just to make things easy. We're going to move on again now to the assign and deassign I mentioned before. Obviously, when you have a table like this built up in shared memory, it becomes rather easy to implement SYS$ASSIGN and SYS$DASSGN. And what I'm trying to get at here is the core of the system, if you are thinking of moving across to Unix with the minimum of fuss: design your data structures correctly. This is the most important part. You hear this time and time again in programming, but it really is essential in this exercise. And also make them shared, and don't forget locking of them. Create yourself all the tables you'll need in shared memory, and then you can use the actual native Unix locking to lock the bytes in the table, to enable you to access the table without getting two processes accessing the same thing at the same time. That gave us assign and deassign — and I'm really going through some of the trickier ones here. The next one was $ENQ and $DEQ locking — resource locking — which is used by a lot of people. Unix, for some reason, only lets you do things with numbers. It has no concept of alphanumeric locking, alphanumeric resource sharing, anything like that. So again, we're back to data structures and shared memory. Effectively, if you're going to do $ENQ and $DEQ locking, you have to create yourself a bit of shared memory that holds the resource name. Obviously, if you're talking about shared memory, you don't want to start getting into variable length data structures.
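The "native Unix locking to lock the bytes in the table" idea maps onto `fcntl()` byte-range locks over a file that backs the table. A minimal sketch, assuming fixed-length entries as recommended above (the entry size and function names are our invention):

```c
/* Sketch of serialising access to one fixed-length entry of a shared
   table using POSIX fcntl() byte-range locks on the file backing it.
   ENTRY_SIZE and the function names are illustrative assumptions. */
#include <fcntl.h>
#include <unistd.h>
#include <string.h>

#define ENTRY_SIZE 96   /* fixed-length entries, as recommended above */

/* Lock entry `n` of the table file (blocks until granted); 1 = ok. */
int lock_entry(int fd, int n)
{
    struct flock fl;
    memset(&fl, 0, sizeof fl);
    fl.l_type   = F_WRLCK;
    fl.l_whence = SEEK_SET;
    fl.l_start  = (off_t)n * ENTRY_SIZE;  /* byte offset of the entry */
    fl.l_len    = ENTRY_SIZE;             /* lock just this entry     */
    return fcntl(fd, F_SETLKW, &fl) == 0;
}

int unlock_entry(int fd, int n)
{
    struct flock fl;
    memset(&fl, 0, sizeof fl);
    fl.l_type   = F_UNLCK;
    fl.l_whence = SEEK_SET;
    fl.l_start  = (off_t)n * ENTRY_SIZE;
    fl.l_len    = ENTRY_SIZE;
    return fcntl(fd, F_SETLK, &fl) == 0;
}
```

Because each entry occupies its own byte range, two processes updating different entries never contend, which matters when every $ASSIGN and $ENQ goes through these tables.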
So make it fixed length, and of course make it long enough. And then add to that, obviously, the process ID of the one who locked it, and any other information you want about it. You can see I'm getting back to what I said before: make sure your data structures are correct. Okay, now we come to some real fun: ASTs. Yeah. Now these were interesting. We'll start off by saying that ASTs are very difficult to do under Unix. Extremely difficult. And again, it goes back to your data structures. You're not going to do it easily. If you want one process to interrupt another process, and that process to then leap off to an address somewhere, obviously you've got to store the address you want to leap off to, and have some way of signalling between the processes. Now again, we implemented this using shared memory, enabling one process to see whether another process was waiting on an AST; you send the signal, and that process knows where to jump to. And it kind of worked reasonably well, until we realised our first big mistake: we hadn't put any critical region stuff in the RMS code. So there we were, processing nicely through a sequential file. We got an AST. The AST routine then went on and started reading through that same sequential file, and totally axed every single bit of information that the ISAM package had accumulated about where it was at the time. So one thing that's important to remember with this stuff under Unix: don't forget your critical regions. Lock yourself, make sure you have information there that says, I'm in a critical region, I cannot be interrupted. So when you come out, you can then get on and handle it. It's a lot easier to say than it actually is to do, because Unix does not give you any easy way to say, I'm now out of a critical region, allow any interrupts to come in. You cannot queue clock events, you cannot queue any type of interrupts — or signals, as they're called under Unix. Now under 5.4, again, you can do this.
But in what we've done, we've effectively tried to program for the very minimum, which is a standard called XPG3. That really says: this is the kernel part, this is what Unix really is. We're going to ignore the Berkeley enhancements, we're going to ignore the System V.4 enhancements, we're going to ignore the Xenix enhancements, and we're simply going to program for the bare minimum. And the bare minimum, unfortunately, says you cannot queue signals. So again, we had to implement this with a background process, which I'm pleased we eventually got rid of in favour of a better shared memory data structure. It can be done; it's extremely difficult. The other thing not to forget is that Unix, as I said before, cannot queue up signals or interrupts. And that goes for the timer as well. You can only have one timer interrupt. So if you want to have three timer events scheduled for one minute, two minutes, and three minutes, you have to do your own timer queuing. And again, we implemented most of these things ourselves, because we had to. Unix doesn't give you any ability to do this. It's simply going to tell you, you have a clock interrupt. If the last thing you told it was "clock interrupt in three minutes", that's the one you're going to get; you're going to miss the first two. So what do you do with these type of things? Well, we eventually implemented, this time, a local shared memory structure specifically for ASTs to do with set timer and cancel timer. Obviously, one thing you need is the address you're going to jump to — sorry about my writing — and the time to interrupt, and that's important. Because what you're going to have to do internally is sort this list. When you want something happening in two minutes and you've got one happening in one minute, you've got to re-sort the list and reschedule the interrupt. So it's important to keep the time to interrupt inside the list.
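That do-it-yourself timer queue can be sketched as a list kept sorted by due time, where the single Unix `alarm()` is always armed for the earliest entry. The struct layout and function names below are our illustration:

```c
/* Sketch of a timer queue for emulating multiple timer ASTs on top of
   Unix's single alarm(): keep requests sorted by due time and re-arm
   the alarm whenever the earliest entry changes. Illustrative names. */
#define MAX_TIMERS 16

struct timer_req {
    long due;             /* time to interrupt (seconds)         */
    void (*ast)(int);     /* address to jump to                  */
    int  event;           /* event number, passed to the routine */
};

static struct timer_req queue[MAX_TIMERS];
static int nqueued = 0;

/* Insert keeping the list sorted by due time. Returns 1 if this
   request is now the earliest, i.e. the caller must re-arm alarm(). */
int timer_insert(long due, void (*ast)(int), int event)
{
    int i = nqueued++;
    while (i > 0 && queue[i - 1].due > due) {  /* shuffle later ones up */
        queue[i] = queue[i - 1];
        i--;
    }
    queue[i].due = due;
    queue[i].ast = ast;
    queue[i].event = event;
    return i == 0;
}

/* On the clock interrupt: pop and fire the earliest timer,
   (*address)(event), VMS-style. */
void timer_fire_first(void)
{
    struct timer_req r = queue[0];
    int i;
    for (i = 1; i < nqueued; i++)
        queue[i - 1] = queue[i];
    nqueued--;
    r.ast(r.event);
}
```

After `timer_fire_first()` the handler would re-arm `alarm()` for the new head of the queue, so the one-timer limitation never shows through to the application.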
And the last thing which you're going to need, of course, is the event number. That gets passed by value to the set timer call and passed on to the routine that it calls, and many people use it to distinguish which particular AST they're expecting the event to happen on. This, really, in terms of C, is (*address)(event): go indirect on that address and pass the value of the event flag. Again, that's going very quickly through ASTs, and in particular timer ASTs, which are most of what we find. I'm not going to go into event flag handling here, apart from to say that it's extremely difficult. Event handling is one of the most difficult things we had to do. And again, that's a data structure. You have to go into shared memory, and you do need incredibly complex signalling between two processes to make sure that, one, your critical regions aren't violated, and, two, if you're in the middle of a critical region, you don't lose the signal — because then you won't get the event flag happening. My next slide is actually event flags. I've got a process table down here with 50 things on it, so I think I'll ignore that one. Needless to say, some of the things you need are a bank of event flags inside the shared memory process table. Actually, one thing I do see on here which I will mention at this point, to give you an idea how difficult it is sometimes to get Unix to do anything for you: most of you are used to reading a character, or asking, do I have a character available in my input buffer? In other words: is there a character there? Shall I read it? Now, if you're looking at standard Unix, you cannot do this. You cannot say, do I have a character? All you can say is: let me do a read on a channel, and if I have a character, give it to me. If I don't have a character, don't give it to me.
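The "do I have a character?" workaround the speaker goes on to describe — do a non-blocking read, and if a character does come back, stash it in a last-character buffer that every subsequent read checks first — can be sketched like this. The names are our illustration; `O_NONBLOCK` is the POSIX spelling of the older `O_NDELAY`:

```c
/* Sketch of a character-available check on Unix: non-blocking read
   plus a one-character pushback buffer, since a successful check
   necessarily consumes the character. Illustrative names only. */
#include <fcntl.h>
#include <unistd.h>

static int saved_char = -1;    /* the "last character buffer" */

/* Returns 1 if a character is available (now buffered), 0 if not. */
int char_available(int fd)
{
    unsigned char c;
    if (saved_char >= 0)
        return 1;                              /* already buffered   */
    int flags = fcntl(fd, F_GETFL, 0);
    fcntl(fd, F_SETFL, flags | O_NONBLOCK);    /* don't hang in read */
    ssize_t n = read(fd, &c, 1);
    fcntl(fd, F_SETFL, flags);                 /* restore blocking   */
    if (n == 1) {
        saved_char = c;   /* we can't put it back, so remember it    */
        return 1;
    }
    return 0;
}

/* Read one character, honouring the pushback buffer first. */
int read_char(int fd)
{
    if (saved_char >= 0) {
        int c = saved_char;
        saved_char = -1;
        return c;
    }
    unsigned char c;
    return read(fd, &c, 1) == 1 ? (int)c : -1;
}
```

In a real emulator this buffer would live in the shared-memory process table, as the talk says, so that a spawned job can inherit the typed-ahead character.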
Now, if all you want to do is a check, you suddenly find yourself with a character sitting there and you wonder what the hell you're going to do with it. And the problem is compounded if you then want to chain off or spawn another job that effectively needs that character — if you're typing ahead, for instance. So, among the other things you have to do to get a good emulation of VMS under Unix: keep a last-character buffer in your shared memory process table, and check it before you do any reads. These all sound pretty trivial, but they're the type of things that can take a massive redesign if you don't get them right at the beginning. As we didn't. Now, the other thing we're going to come to is asynchronous I/O — in other words, QIO without the W on the end. It's difficult to do. You've got to remember that Unix has no asynchronous I/O capabilities. If you really need to do it, first of all I'd suggest recoding it. It's not something that's going to come across easily under Unix. To give an example, the way we implemented, for instance, a QIO to get a character is that we duplicate the job in memory and then organize a sequence of signals via the shared memory process table. That direct duplication is called a fork under Unix. It gives you two jobs exactly the same, both running: same code, same data, everything. The only things you don't get across are the locks, the shared memory, and that type of thing. So you have two processes that both know what they're doing, and then one can go off and decide to read the character. It then has to signal back to its parent process that it's now got the character. It gets very involved and very nasty. If you can get away without it, do — it's one area I'd say is very difficult to do properly. Don't do asynchronous I/O. Unix 5.4, in the future, is going to allow you to do asynchronous I/O.
If you depend on System V.4, you're going to cut yourself off from BSD, which the Ultrix machines are running, and you're going to cut yourself off from the SCO boxes as well; they're still running V.3.

Now, the last thing I'll briefly touch on is SMG. A lot of people are using SMG, and we had to implement it ourselves as well: there is no direct equivalent for SMG under Unix. Again, what we did was go to those guys up in Canada, Byte Designs; they're amazing. They also have a product called W, which is a windowing package. We bought the source code for that. It was another whole $600, really very cheap, and it gives you a good base to implement something like SMG on. We put about another nine months of effort into the project to get SMG fully implemented. Again, Unix gives you this base, but it doesn't give you the periphery around it that you're so used to under VMS. It is possible, though; there's no doubt. FMS as well is something we're looking at. There is nothing under Unix currently, although a lot of people have chosen to go a slightly different route and modify their VAX FMS code to run with one of the many forms packages that are available both under VMS and under Unix. That seems to me a particularly good idea.

I think I've now come to the end of the top-level technical stuff, so I'll be happy to take any questions at this point. It went a little faster than I thought it would, and I apologise for the erratic jumping around, but it was prepared in the breakfast hall half an hour ago.

Vic Lindsay, VLSystems: In your translation efforts and so forth, what headaches have you encountered with networks?

Well, luckily, so far none. We haven't had anyone who's actually wanted to do it. I'm sure we will. Unix does give you good networking capabilities, in particular NFS. I'm sure you mean something like DECnet interfacing with RMS?
Either interfaces like that, or, to go one layer further, file serving onto other machines, a common file architecture, so that you can share files and file locks between the two.

That gets into a very big mess, I realise. NFS should theoretically give you most of it. NFS now does enable you to do record locking across multiple Unix machines, which is always the first step. As I only touched on briefly, the record locking that Unix gives you isn't sufficient for RMS emulation, so it's going to get nasty. I think if we have to do it, we'll probably implement it using STREAMS, and dedicate a stream server to the record locking and a stream server to the networking. STREAMS is a nice, effectively device-independent way under Unix of doing network management: you don't need to know where things are, you just send a message down and it gives you one back. I imagine if we do it, that's the way we'll do it, but at the moment we haven't had to cross that bridge. Thank you for the question.

Charles Capps, Temple University: A couple of questions. One is FDL files. Do you deal with them?

At the moment, no. We haven't really done anything like that. Well, kind of, slightly: we have something called Create, which can take an FDL, but it only really works for what customers have needed so far. We can't say we've done a generalised FDL.

I guess Create is the main one I'm interested in.

Yeah. I wouldn't say it's exhaustive. We tend to implement things as they're needed: if they don't work, we get them working and put them in.

The other one is the so-called VAX extensions to Fortran, most of which, like DO WHILE and such, are in all the other Fortrans, but one of which nobody's touched, and which is different in Fortran 90, is records. Have you dealt with that at all?

Not under Fortran. We did under Basic.
Again, we haven't applied too much effort to things like Fortran at the moment because there are good products out there, but I think your best bet would be to recode the records out, after you've carefully thought it through, of course.

Yeah, that's where it does a lot of nice stuff. Well, I guess if you recoded it to C...

There was a thing called Fortrix on the market, a Fortran-to-C translator by somebody over in the Boston area. They would probably handle that; I know a lot of their stuff is designed for DEC. Thank you.

Rama Kuduru from Bara: Is there something comparable to VMS mailboxes in Unix, and if not, how did you implement mailboxes?

That's a good question; I missed those totally. Yes, we did mailboxes, and no, there is nothing comparable. All you have are FIFOs: you can create a node under Unix, which is effectively the ability to throw characters into a queue so that another process can take them out. That's all you have at base level under Unix. If you want to implement the full mailbox I/O of VMS around that, you have to put a lot of code into it. And of course you come back to the asynchronous side of things, which we had to do a lot of work on: "let me know when a mailbox has some information in it", which just about everybody uses, or seems to use. So the answer is: no, Unix does not give you anything as powerful as mailboxes, but it gives you a very low-level way of throwing things into a device and reading them out again. You've got to remember, Unix is really character based: you throw characters in, you read characters out, and what you do around that is up to you. We implemented the full mailbox spectrum of system calls. Again, it took a long time, but it is certainly possible.

No one else? Okay, well, thank you for your time, ladies and gentlemen. That concludes this program.