This is the mail archive of the xconq7@sources.redhat.com mailing list for the Xconq project.



Re: Changing the Standard Game


> I still have no idea when the 7.5 release is going to be. Up until
> recently, I have restrained myself from taking on any big projects
> (such as UI building) because I didn't want to have 7.5 announced
> while I was in the middle of one, and I certainly did not want to hold
> up a release on account of my work.

Experience in other projects seems to be that the whole business of
branching/freezing/etc doesn't work particularly well, and you are
better off making sure that the mainline is always in good shape.

For example, see http://lwn.net/Articles/95312/

We could certainly use better ways of keeping the mainline sane than
we have now.  My pet solution is automated tests, but since I'm not
writing much xconq code (or tests) these days, that doesn't count for
as much as what the more active developers do.  Another way is to
publish and encourage public testing of big, or risky, changes before
checking them in.  We're low on testers in general, which is the big
risk factor with that one.
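
To make that concrete, even a test as dumb as the sketch below would
catch outright crashes and hangs in an AI-only game.  I am guessing at
the command-line options (-g for the game, -e for the number of
machine players), and it assumes the game ends on its own, say via a
turn limit, so substitute whatever the tree actually supports:

    #!/usr/bin/env python
    # Smoke test: run an AI-only standard game under a time limit and
    # fail if xconq crashes or never comes back.  The options passed to
    # xconq are assumptions, not a documented interface, and a healthy
    # run is expected to end by itself (e.g. via a turn limit).
    import subprocess, sys

    def run_ai_game(timeout_secs=300):
        try:
            result = subprocess.run(
                ["xconq", "-g", "standard", "-e", "2"],
                stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL,
                timeout=timeout_secs)
        except subprocess.TimeoutExpired:
            print("FAIL: game still running after %d seconds"
                  % timeout_secs)
            return 1
        if result.returncode != 0:
            print("FAIL: xconq exited with status %d" % result.returncode)
            return 1
        print("PASS")
        return 0

    if __name__ == "__main__":
        sys.exit(run_ai_game())

Something along those lines, run nightly from cron or a "make check"
target, is what I mean by keeping the mainline sane automatically.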

As for wanting to keep working on SDL, if you are talking about code
which is specific to SDL, I can't think of any reason to be at all
reluctant to touch it on account of a supposedly-imminent 7.5 release.
If it is non-SDL code, well, maybe then there's more of an issue.  But
the idea of stabilizing the SDL code by not hacking on it doesn't
really make sense, given how much is missing in the current SDL code.

> I am aware that there are some network bugs out there that need to
> be fixed.

Yeah, and I wonder if there is a better way to find these than "find
another human who wants to play a network game and get a few turns in
and get a loss of sync".  AI vs AI network games are the traditional
method, and perhaps the current problem is just that no one has tried
one lately.  Writing some kind of monkey tester (one which generates
random events, hopefully hitting corner cases the AI happens not to
hit) might be another way.  Finding a way to run these tests
(whatever they are) more quickly/automatically would be a big step up.
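
On the quickly/automatically front, even the AI-vs-AI network game
could be wrapped in a script shaped roughly like the one below.
Everything interface-specific in it, the -host/-join style options and
the exact wording of the desync warning, is a guess on my part, so
treat it as a shape rather than working code:

    #!/usr/bin/env python
    # Start a hosting copy and a joining copy of xconq, each with a
    # machine player, wait for the game to finish, and scan both logs
    # for a loss-of-sync message.  Option names and the warning text
    # are assumptions.
    import subprocess, sys, tempfile, time

    DESYNC_MARKER = "out of sync"   # assumed wording of the warning

    def run_network_game(timeout_secs=600):
        host_log = tempfile.TemporaryFile(mode="w+")
        client_log = tempfile.TemporaryFile(mode="w+")
        host = subprocess.Popen(
            ["xconq", "-g", "standard", "-e", "1", "-host"],
            stdout=host_log, stderr=subprocess.STDOUT)
        time.sleep(5)               # crude: let the host start listening
        client = subprocess.Popen(
            ["xconq", "-e", "1", "-join", "localhost"],
            stdout=client_log, stderr=subprocess.STDOUT)
        try:
            host.wait(timeout=timeout_secs)
            client.wait(timeout=timeout_secs)
        except subprocess.TimeoutExpired:
            host.kill()
            client.kill()
            print("FAIL: network game did not finish in time")
            return 1
        for log in (host_log, client_log):
            log.seek(0)
            if DESYNC_MARKER in log.read():
                print("FAIL: loss of sync reported")
                return 1
        print("PASS")
        return 0

    if __name__ == "__main__":
        sys.exit(run_network_game())

A monkey tester could slot into the same harness later: seed a random
number generator, have it issue random unit commands on top of (or
instead of) the AI's, and log the seed so that any desync it turns up
can be replayed.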

