This is the mail archive of the cygwin-developers@cygwin.com mailing list for the Cygwin project.



Re: /cygdrive in CVS? Time for 1.5.6 soon.


Corinna wrote:

On Mon, Dec 01, 2003 at 06:21:51PM +0100, Corinna Vinschen wrote:

On Mon, Dec 01, 2003 at 11:57:48AM -0500, Christopher Faylor wrote:

I'd like to release 1.5.6 soon and, if this is a real
problem, it is not one that we can ignore.

Not within the next hour, I hope.  I'm currently testing the new fcntl64 stuff to support 64-bit file locking.  I'd like to see this in 1.5.6, too.


Ok, I've checked this in.  I found an interesting thing in the testsuite
while testing this.  The fcntl function is called like this:

  flocks.l_type = F_RDLCK | F_WRLCK;
  fcntl(fd, F_SETLK, &flocks);

According to SUSv3 and the Linux man pages, l_type must be one of
F_RDLCK (shared lock), F_WRLCK (exclusive lock), or F_UNLCK (unlock).

None of these documents describes these values as OR'able.  Nevertheless
the testsuite tests fcntl09 and fcntl10 do it as above, which only works
if one of F_RDLCK or F_WRLCK is 0.  This is a non-portable assumption.
F_RDLCK is 1 and F_WRLCK is 2 on Cygwin; together that's 3, which is the
value of F_UNLCK.  Too bad.
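
For comparison, a conforming F_SETLK call assigns exactly one lock type.
A minimal sketch (just an illustration, not taken from the testsuite; on
Cygwin, OR'ing would give 1 | 2 == 3 == F_UNLCK):

  #include <fcntl.h>
  #include <stdio.h>     /* SEEK_SET */
  #include <string.h>

  /* Request an exclusive lock on the whole file.  Exactly one of
     F_RDLCK, F_WRLCK or F_UNLCK goes into l_type -- never an OR. */
  static int lock_whole_file (int fd)
  {
    struct flock fl;

    memset (&fl, 0, sizeof fl);
    fl.l_type = F_WRLCK;      /* exclusive lock */
    fl.l_whence = SEEK_SET;
    fl.l_start = 0;
    fl.l_len = 0;             /* 0 == to end of file */

    return fcntl (fd, F_SETLK, &fl);
  }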

Actually, you might want to check this against the original LTP source.  If you look at the original, I think it makes more sense (at least to me), since this logic was setting l_type based on the value of a loop counter.  Briefly, this is a portion of the difference as compared to our version of fcntl09:


<<<diff -u fcntl09.c.ltp fcntl09.c.cygwin>>>
   for (lc = 0; TEST_LOOPING
-  int type;
-  for (type = 0; type < 2; type++) {

    Tst_count = 0;

-   flocks.l_type = type ? F_RDLCK : F_WRLCK;
+   flocks.l_type = F_RDLCK | F_WRLCK;
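
So, reconstructing from the minus lines above, the original LTP loop
looked roughly like this (TEST_LOOPING, Tst_count and the flocks
variable come from the LTP test harness and are only assumed here):

    int lc, type;

    for (lc = 0; TEST_LOOPING (lc); lc++) {     /* standard LTP test loop */
      for (type = 0; type < 2; type++) {
        Tst_count = 0;

        /* exercise both lock types in turn -- one at a time, never OR'ed */
        flocks.l_type = type ? F_RDLCK : F_WRLCK;

        /* ... F_SETLK call and result checks as in the rest of the test ... */
      }
    }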

Cheers,
Nicholas

