Perl Unidecode modules - which to use (if not Text::Unidecode)?
Mark Aitchison
mark.aitchison@cyberxpress.co.nz
Mon Apr 5 21:50:14 GMT 2021
A little more detail... I realise that stripping accents off is often not a good thing to do, but at the moment that is essentially what I'm after. To be more specific: I want to know whether a character is a consonant or a vowel, because this odd application basically throws away vowels and punctuation. Later I will want to do all sorts of things with input text that might be UTF-8, UTF-16, or some other encoding that (hopefully) I can guess, translate to a single standard, and ultimately spit out on a web page.
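To make that concrete, this is roughly the sort of thing I have in mind: a rough, untested sketch that uses only Unicode::Normalize (which I believe ships with core perl). The sample string, the vowel set, and treating 'y' as a consonant are just placeholder choices on my part:

    #!/usr/bin/perl
    # Rough sketch: strip accents with the core Unicode::Normalize module,
    # then classify each base letter as vowel or consonant.
    use strict;
    use warnings;
    use utf8;
    use Unicode::Normalize qw(NFD);

    binmode STDOUT, ':encoding(UTF-8)';

    my $text = "ÇĆĈĊçĉċĜĞĠĢÌÍÎÏ";        # placeholder sample

    for my $ch (split //, $text) {
        # Decompose to base letter + combining marks, then drop the marks.
        (my $base = NFD($ch)) =~ s/\p{Mn}//g;
        my $kind = $base =~ /^[aeiou]$/i ? 'vowel'
                 : $base =~ /^[a-z]$/i   ? 'consonant'
                 :                         'other';
        printf "%s -> %s (%s)\n", $ch, $base, $kind;
    }

For a plain ASCII letter NFD() leaves the character unchanged, so the same test should work either way.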
There seem to be many Perl modules that do similar things. I want to be able to distribute my code without requiring people to download things from CPAN, so I'd like to stick with modules that are as stock-standard as standard can be, i.e. are in a standard Cygwin distribution and are normally found in other Perl environments. In a sense, searching CPAN gives me too many options, because that includes modules that might require a customer to do more than I should ask of them, when it could have been avoided by my choosing a more standard way of achieving the goal in the first place.
What I probably should have asked is...
1. What Perl module that comes with Cygwin is good for telling whether a letter is a consonant? (Essentially the sketch above.)
2. Later on I will also need something that makes a reasonable guess at what encoding some text uses (text that might not have a helpful header telling me the answer), with a view to converting it to whatever encoding I want (see the sketch below). I can find software to do this, but I would like to restrict the options to just those a Cygwin user can install with the setup program... if I'm not being too unrealistic about that requirement.
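For question 2, the kind of thing I imagine trying first is Encode::Guess, since Encode is part of core perl. This is another untested sketch; the cp1252 "suspect" is just my guess at the sort of legacy input I might see (the module refuses to guess when several suspects match, so the list has to stay short):

    #!/usr/bin/perl
    # Rough sketch: guess the encoding of raw input and re-emit it as UTF-8.
    use strict;
    use warnings;
    use Encode::Guess;                  # exports guess_encoding(); core module

    binmode STDIN;                      # read raw bytes
    my $raw = do { local $/; <STDIN> }; # slurp everything

    # ASCII, UTF-8 and BOM-marked UTF-16/32 are always considered; any
    # single-byte legacy encoding has to be named explicitly as a suspect.
    my $enc = guess_encoding($raw, 'cp1252');
    ref $enc or die "Could not guess the encoding: $enc\n";

    warn "Looks like ", $enc->name, "\n";
    my $text = $enc->decode($raw);      # now a perl character string

    binmode STDOUT, ':encoding(UTF-8)';
    print $text;                        # spit it out as UTF-8

If that is the wrong tool for the job, or there is something more robust that still ships with the Cygwin perl packages, I'd be glad to hear it.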
Thanks, Mark
On 5 Apr 2021 at 22:50, Joel Rees via Cygwin <cygwin@cygwin.com> wrote:
>On Mon, Apr 5, 2021 at 6:26 PM L A Walsh <cygwin@tlinx.org> wrote:
>>
>> On 2021/04/04 14:26, Joel Rees via Cygwin wrote:
>> >
>> >> 1. What perl Unicode modules should I consider, if not Text::Unidecode?
>> >> The present need is to be able to convert those few "foreign" characters
>> >> (like ÇĆĈĊçĉċĜĞĠĢĝģğġËÌÍÎÏÒÓÔÕ) that are basically ASCII with accent
>> >> marks to their closest ASCII equivalents, but I'd like to do more with
>> >> Unicode in the future, without going down any dead-ends as far as being
>> >> able to run under cygwin is concerned.
>> >>
>> >
>> > "Stripping those few foreign accent characters" is probably not
>really what
>> > you want to do.
>> >
>> ----
>> Why not? You don't know his use case and you are misinterpreting his
>> example as random garbage.
>
>Actually, I was specifically _not_ interpreting them as random garbage.
>If they were random garbage, it wouldn't matter what he does with them.
>
>> Those aren't a random foreign encoding -- those are C's G's then E, I, O
>> with accent variations that he may want to collapse for purposes of storing
>> in a text storage and retrieval (search) application.
>
>In this world many things are possible, and those may actually be intentional
>strings of characters with assorted diacriticals, some sort of example of
>diacriticals, and he may have some reason to force the characters to their
>base form instead of regenerating the text. Or maybe I'm misinterpreting
>his intent. Maybe he doesn't want to strip the diacriticals so much as
>convert the combinations to something like punycode.
>
>> They are all well formed/well-coded UTF-8 characters -- they are not some
>> 8-bit encoding that was remangled during a no-recoding display of them in
>> a UTF-8 context.
>
>I've seen lots of strings like that that are the result of e-mail software
>mangling. In Japan, we call it 文字化け (mojibake). And, yes, the e-mail
>software "helpfully" converts the misinterpreted bytes to well-formed
>but entirely irrelevant UTF-8 in many cases.
>
>I will acknowledge that I don't see it as often as I used to, but it
>still happens.
>
>> I didn't know about Text::Unidecode -- but it exists specifically to create
>> Latinized alternatives to foreign characters. That was another hint
>> that it wasn't a random mistake. The manpage for it says:
>>
>>     It often happens that you have non-Roman text data in Unicode, but
>>     you can't display it -- usually because you're trying to show it to
>>     a user via an application that doesn't support Unicode, or because
>>     the fonts you need aren't accessible. You could represent the Unicode
>>     characters as "???????" or "\15BA\15A0\1610...", but that's nearly
>>     useless to the user who actually wants to read what the text says.
>>
>> An example was like:
>>
>> perl -CO -Mutf8 -MText::Unidecode -e '
>>     my $name = "\x{5317}\x{4EB0}";
>>     printf "name, %s == %s\n", $name, unidecode($name);
>> '
>> name, 北亰 == Bei Jing
>
>I would not call that "stripping" accent marks. It's a process of recognizing
>the characters, looking them up in a dictionary, and finding a reasonable
>Latinized equivalent, which is a fairly involved process requiring a bit of
>heuristics, since there is often a many-to-many mapping involved.
>
>> It's not just about removing accents but getting an English-like
>> translation based on the foreign text.
>
>And that's actually what I was trying to point him to?
>
>Okay, maybe my suggestions were too elliptical. Maybe I should have told
>myself I was too busy and ignored his question like everybody else.
>
>[snip]
>
>--
>Joel Rees
>
>http://reiisi.blogspot.jp/p/novels-i-am-writing.html