
Practical RPG: Processing Stream Files, Part 2

by Victoria Mack December 07, 2016 0 Comments

In part 1, we processed a directory. In part 2, we process one file in that directory.

Stream files are not database files.

While that statement is obvious to programmers, it's not always clear to the greater community. The end users, the folks whose jobs we are supposed to be supporting, use various forms of stream files to store their data, and they don't understand why we can't, for example, just "use this spreadsheet" as part of our application. And while that's an interesting philosophical discussion, as programmers we sometimes have to simply get things done, and that in turn means taking whatever data the user sent us. I've spent a lot of time over the years importing data, primarily from Excel and more specifically from comma-delimited files. Two techniques exist: CPYFRMIMPF and parsing the data in RPG. CPYFRMIMPF is a completely different animal that perhaps can be covered another day. Today, I just want to talk about parsing a stream file.

Why DIY?

Why do it yourself? The primary benefit is that you have complete control over the processing. Though convenient, CPYFRMIMPF limits you. As an example, you can't use CPYFRMIMPF if your stream file has varying formats (such as header and detail lines). Or let's say you want to process the file differently based on something in one of the records; with CPYFRMIMPF, you would have to copy all the records to a temporary file and then process that file. That's not necessarily a bad thing; I do exactly that in many of the import subsystems I've written over the years. But let's say you want to use only a few records in a large import file. In that case, you'd have to extract all the records into the temporary file even though you're only using a few.

 

So today's example is going to focus on reading the stream file yourself. One note: Since this is defined to be a comma-delimited file, I can treat it as a standard human-readable text file. Under other circumstances, you might receive binary data, which requires a completely different access technique. There are actually routines that allow you to directly access an Excel file, as an example. They're in Java and well outside the scope of this article, but I thought you should know.

 

Processing a Stream File

Stream file contents are accessed using APIs similar to the ones I introduced in the previous article on processing directories. Let's take a look at the prototypes for the APIs I'll be calling in this program.

 

      dcl-pr IFS_fopen pointer extproc('_C_IFS_fopen');
        file pointer value options(*string);
        mode pointer value options(*string);
      end-pr;

      dcl-pr IFS_fclose extproc('_C_IFS_fclose');
        fd pointer value;
      end-pr;

      dcl-pr IFS_fgets pointer extproc('_C_IFS_fgets');
        pbuf pointer value;
        size uns(10) value;
        fd pointer value;
      end-pr;

They're quite straightforward: open, close, and get. The open and close are very simple. IFS_fopen takes a file name and an access mode (we'll be using read mode) and returns a file handle. Pass that handle to IFS_fclose and the file is closed. Of course, opening and closing a file doesn't do you much good without at least one intervening read; that's the job of the get routine, IFS_fgets. I also define a few variables.

 

      dcl-s fp pointer;
      dcl-s sp pointer;
      dcl-s buf varchar(2000);

I use two pointers and a buffer. Note that the buffer is type varchar. I do that so that I don't have to constantly keep track of the length of the data that I've read in, but it does make for a rather strange bit of code later on. But let's start walking through the logic.


      // Open stream file
      fp = IFS_fopen( wFile: 'r');
      if fp = *null;
        SetError('01':%str(strerror(errno)));
        return;
      endif;

This is very simple code. The variable wFile contains a fully qualified file name, and IFS_fopen attempts to open it. The mode is "r", which indicates a read-only open. If the file doesn't exist, you'll get an error. The SetError routine is just a placeholder to show you how you might signal an error to the caller; how you handle this sort of fatal error depends very much upon the application.

You may also notice the use of %str. When you're using the C APIs, you tend to use that a lot. It converts between traditional RPG character variables and the null-terminated strings that C uses. In this case, the routines strerror and errno combine to return a human-readable error message, but that message is null-terminated; we need %str to use it in our RPG program. Assuming, though, that the open is successful, the variable fp (which has been defined as type pointer) will contain a handle that you can then use for any of the other file-based APIs. And we do that immediately by trying to get the first line.

 

      // Read in the first line
      sp = IFS_fgets(%addr(buf:*data): %size(buf): fp);
      if sp = *null;
        SetError('02':%str(strerror(errno)));
        return;
      endif;
      buf = %str(sp);

As I said, the IFS_fgets function isn't exactly intuitive, especially to us RPG programmers who haven't traditionally done a lot of pointer-based programming. I have to pass IFS_fgets the address of the buffer and the size of the buffer, along with the file handle we got from the IFS_fopen API in the previous section of code. If the routine is successful, it returns the same pointer I passed in. If it's not successful, it returns null (that's the test you see immediately following the API call).

But the really confusing part is the next line, where I convert the null-terminated value returned from IFS_fgets into a regular RPG string, which I then place in the variable buf. You might be asking yourself, why do that? Isn't the data already in buf? Well, yes, it is, but buf is type varchar, and IFS_fgets didn't set the length. Now, you might think I could just set the length of the variable, perhaps by scanning the string for the terminating null:

      %len(buf) = %scan(x'00':buf) - 1;

That looks good, but I found something out through some painful trial and error. It turns out that the compiler really, really wants to be helpful, and if you set the length of a varchar variable, it will fill the variable with blanks. So all that nice data you just read into the buffer is overwritten with blanks, which certainly defeats the purpose. So I don't do that; instead, I use the previous line, in which I essentially overwrite the buffer with itself. It's counterintuitive, but it works perfectly.

      // Parse the buffer and process it
      Parse(buf);

      // Close the file
      IFS_fclose(fp);
      return;

And that's all it takes. Now obviously this particular code only processes the first line of the file. That might be enough, especially in an inbound router. The router may read the file, use some logic to identify the transaction type, and then send it on to another program. But this is exactly the scenario where you might need your own parser rather than a generic CPYFRMIMPF.

 

Where to Go from Here

So clearly this is a very simple application. All of the work will be done in the Parse routine, which I don't even show here. If I get a call for it, I can probably do another article on parsing, but for now this should give you enough of a start on IFS processing to give you some ideas of where it could be helpful in your environment. I've had great success using this to allow users to upload their own data to the system (after running it through validation, of course). Use this basic design to have some fun with the IFS and see what you can come up with.

 

About the Author: Joe Pluta   




