Perl Unidecode modules - which to use (if not Text::Unidecode)?

Joel Rees joel.rees@gmail.com
Mon Apr 5 10:49:39 GMT 2021


On Mon, Apr 5, 2021 at 6:26 PM L A Walsh <cygwin@tlinx.org> wrote:
>
> On 2021/04/04 14:26, Joel Rees via Cygwin wrote:
> >
> >> 1. What perl Unicode modules should I consider, if not Text::Unidecode?
> >> The present need
> >> is to be able to convert those few "foreign" characters (like
> >> ÇĆĈĊçĉċĜĞĠĢĝģğġËÌÍÎÏÒÓÔÕ)
> >> that are basically ASCII with accent marks to their closest ASCII
> >> equivalents, but I'd
> >> like to do more with Unicode in the future, without going down any
> >> dead-ends as far as
> >> being able to run under cygwin is concerned.
> >>
> >>
> >
> > "Stripping those few foreign accent characters" is probably not really what
> > you want to do.
> >
> ----
>     Why not?  You don't know his use case and you are misinterpreting his
> example as random garbage.

Actually, I was specifically _not_ interpreting them as random garbage. If they
were random garbage, it wouldn't matter what he does with them.

> Those aren't a random foreign encoding -- those are C's G's then E, I O
> with accent variations that he may want to collapse for purposes of storing
> in a text storage and retrieval (search) application.

In this world many things are possible. Those may actually be intentional
strings of characters with assorted diacriticals, some sort of example of
diacriticals, and he may have some reason to force the characters to their
base form instead of regenerating the text. Or maybe I'm misinterpreting
his intent. Maybe he doesn't want to strip the diacriticals so much as
convert the combinations to something like Punycode.

> They are all well
> formed/well-coded UTF-8 characters -- they are not some 8-bit encoding
> that was remangled during a no-recoding display of them in a UTF-8
> context.

I've seen lots of strings like that which were the result of e-mail
software mangling. In Japan, we call it 文字化け (mojibake). And, yes, the
e-mail software "helpfully" converts the misinterpreted bytes to
well-formed but entirely irrelevant UTF-8 in many cases.

I will acknowledge that I don't see it as often as I used to, but it
still happens.

> I didn't know about Text::Unidecode -- but it's specifically to create
> Latinized alternatives to foreign characters.  That was another hint
> that it wasn't a random mistake.  The manpage for it says:
>
>        It often happens that you have non-Roman text data in Unicode, but
>        you can't display it -- usually because you're trying to show it
>        to a user via an application that doesn't support Unicode, or
>        because the fonts you need aren't accessible.  You could represent
>        the Unicode characters as "???????" or "\15BA\15A0\1610...", but
>        that's nearly useless to the user who actually wants to read what
>        the text says.
>
> An example was like:
>
> tperl
> use utf8;
> use Text::Unidecode;
> my $name="\x{5317}\x{4EB0}";
>
> printf "name, %s == %s\n", $name, unidecode($name);
> '
> name, 北亰 == Bei Jing

I would not call that "stripping" accent marks. It's a process of
recognizing the characters, looking them up in a dictionary, and finding
a reasonable Latinized equivalent, which is a fairly involved process
requiring a bit of heuristics, since there is often a many-to-many
mapping involved.

> It's not just about removing accents but getting an English
> like translation based on the foreign text.

And that's actually what I was trying to point him to?

Okay, maybe my suggestions were too elliptical. Maybe I should have told
myself I was too busy and ignored his question like everybody else.

[snip]

-- 
Joel Rees

http://reiisi.blogspot.jp/p/novels-i-am-writing.html
