This is the mail archive of the xconq7@sources.redhat.com mailing list for the Xconq project.



Re: Two more AI bugs


> Hey!  You're criticizing my code! :-)

The price of quality is continual improvement ;-).

> You could do some pruning, although you'd have to be careful not
> to miss the oddballs.  test-acts.sh for instance will find cases
> where obscure GDL doesn't play nice with particular actions.

Well, yes, but I'd rather test for the oddball cases explicitly than
throw a bunch of GDL at them and figure the oddball cases "must" be in
there somewhere.  Partly because explicit tests run faster, and partly
because it is easier to read the tests and see what is being tested.
And the coverage usually ends up better to boot.

> I've always found the interactions to be the hardest to test - the
> task-level machinery usually works OK in isolation, and it's the AI's
> randomly-chosen bad handling of a task completion that causes
> problems.

But if you were writing unit tests for the AI, you would (I suppose)
feed it *all* possible task completions, so that you can test the AI
without relying so much on the task machinery.  (Not that I've planned
out in any detail how all this would apply to xconq, but my personal
experience on other projects is that unit testing makes development go
more smoothly.)

Now, there is still a need for end-to-end tests (e.g. must cross the
strait in less than 20 turns), but testing each unit reduces the
amount of work the end-to-end tests have to do.

It's all a bit of a moot point since I probably won't be spending
enough time on xconq development to do much of this.  But if I did,
these are the kinds of things I'd think about working on....

