[Scribus] bug severity
Marc Sabatella
marc
Sat Nov 10 18:37:28 CET 2007
I've reported a couple of bugs lately, and am wondering if there are
standards regarding the "severity" field. I know everyone always thinks the
bugs that affect *them* are more serious than others might think, and I
don't take it *personally* that one of my reports was downgraded in
severity. But this does suggest that if there are standards for determining
severity, it would be great if they were public and perhaps open for
critique. And if there are not, I think there should be.
The report that prompted my question regards registration marks in 1.3.4.
Yes, I know, it's not a stable release, and bugs won't be fixed. I've
already worked around the bug by creating my own marks using the polygon and
line drawing facilities and defining a registration black color manually.
The problem I reported is that the registration marks inserted on either
side of a document by 1.3.4 using the PDF pre-press option are too high by
about 4.5mm (presumably, this is related to the amount of space added to the
document to accommodate the marks). The result is that I had a document that
looked perfect on screen, and the PDF looked fine if you didn't get out a
ruler and measure to discover the registration marks were off center. But
when my printer sent me my proofs, everything had slid downward by 4.5mm on
the page, meaning a design that should have been centered on the page
wasn't, objects toward the bottom of the page were pushed into the bleed
area, and the bleed would have failed at the top of the document. The print
job would have been ruined. Again, the problem was caught and fixed in
time, and I do accept that this is development-quality code, not intended
for production use, so I certainly am not writing to vent about the problem
itself, only to try to understand how the severity of this sort of problem
is supposed to be rated.
When I was in the software business, we had a system of ranking bugs in
which we had reasonably objective criteria that could be applied to allow
most people to agree to a large extent on the severity of any bug. In the
case at hand, the most relevant of the criteria that would have been used is
that the bug causes the program to "silently produce incorrect results".
That is, no error messages or crashes to let the user know unequivocably
that a problem had occurred. Also, the nature of the incorrect results were
such that it is not "obvious" on inspection that the results were not in
fact correct. And yet, an attempt to use the output would - potentially
only after incurring substantial cost - have ultimately proved entirely
unacceptable.
In our rating scheme, this would have been one of the worst categories
of error, just below most errors that "silently destroy user data". It
would have rated worse than many program crashes, because the latter, while
they do destroy data created since the last save, are not silent: the user
knows he needs to address the problem, and, assuming that the problem is
sporadic or that a workaround exists, he is still able to achieve the
desired result. The extent to which a "silently produces incorrect results"
error might be worse than a "non-silently destroys user data" bug would
depend, of course, on just how incorrect the results produced would be in
the former case, how much data would potentially be lost in the latter, and
also the likelihood of either occurring. A bug that silently causes an
extra 0.5mm of space to be inserted after every occurrence of the word
"Fred" is obviously not as severe as one that causes an entire page to be
printed off-center by an amount larger than the bleed area. A crash that
occurs only when using some very specific sequence of operations and whose
only effect is to cause you to lose the work you did since your last save, is not as
severe as a crash that occurs every time you use a feature that modifies a
user's input file, if the crash leads to the file being rendered corrupt and
unusable. So of course, there *is* still plenty of room for subjectivity.
But having standards in place really helped. Not that we relied on users to
assess the severity of their own bugs accurately. They would still think
the bugs that affected *them* were worse than they might actually have been.
But it allowed engineers to reclassify bugs in a way that gave them a
meaningful priority.
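For what it's worth, that kind of rubric is simple enough to write down. Here is a
rough sketch in Python of how the criteria I described might be encoded so that two
people triaging the same report usually land on the same answer. The names, categories,
and weightings are purely hypothetical illustrations of my old scheme, not anything
Scribus or its bug tracker actually uses:

    # Hypothetical severity rubric, loosely following the criteria above.
    # None of these names or rankings come from Scribus or its tracker.
    from dataclasses import dataclass

    @dataclass
    class BugReport:
        silent: bool               # no crash or error message alerts the user
        destroys_data: bool        # user data is lost or corrupted
        wrong_output: bool         # program produces incorrect results
        output_error_severe: bool  # results wrong enough to ruin the work
        likely: bool               # hit in normal use, not an obscure sequence

    def severity(bug: BugReport) -> str:
        """Map a report onto a coarse severity scale, worst cases first."""
        if bug.silent and bug.destroys_data:
            return "critical"      # silently destroys user data
        if bug.silent and bug.wrong_output and bug.output_error_severe:
            # silently produces incorrect results
            return "critical" if bug.likely else "major"
        if bug.destroys_data:
            return "major"         # data is lost, but the user at least knows
        if bug.wrong_output:
            return "major" if bug.likely else "minor"
        return "tweak"

    # The registration-mark case: no crash, output silently off by 4.5mm,
    # enough to ruin a print job, and hit by anyone using the pre-press option.
    print(severity(BugReport(silent=True, destroys_data=False,
                             wrong_output=True, output_error_severe=True,
                             likely=True)))   # -> "critical"

The point of a sketch like this is not the particular labels, but that the questions
it asks (silent or not, data lost or not, how likely, how bad the output) are ones
different people can answer the same way.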
So, is there anything like this for Scribus?
--------------
Marc Sabatella
marc at outsideshore.com