Tuesday, August 28, 2012

[ Regex Tips ] Unicode Regular Expressions

Source from here 
Unicode Regular Expressions : 
Unicode is a character set that aims to define all characters and glyphs from all human languages, living and dead. With more and more software being required to support multiple languages, or even just any language, Unicode has been strongly gaining popularity in recent years. Using different character sets for different languages is simply too cumbersome for programmers and users.

Unfortunately, Unicode brings its own requirements and pitfalls when it comes to regular expressions. Of the regex flavors discussed in this tutorial, Java, XML and the .NET framework use Unicode-based regex engines. Perl supports Unicode starting with version 5.6. PCRE can optionally be compiled with Unicode support. Note that PCRE is far less flexible in what it allows for the \p tokens, despite its name "Perl-compatible". The PHP preg functions, which are based on PCRE, support Unicode when the /u option is appended to the regular expression. 

RegexBuddy's regex engine is fully Unicode-based starting with version 2.0.0. RegexBuddy 1.x.x did not support Unicode at all. PowerGREP uses the same Unicode regex engine starting with version 3.0.0. Earlier versions would convert Unicode files to ANSI prior to grepping with an 8-bit (i.e. non-Unicode) regex engine. EditPad Pro supports Unicode starting with version 6.0.0. 

Characters, Code Points and Graphemes or How Unicode Makes a Mess of Things : 
Most people would consider à a single character. Unfortunately, it need not be, depending on the meaning of the word "character". 

All Unicode regex engines discussed in this tutorial treat any single Unicode code point as a single character. When this tutorial tells you that the dot matches any single character, this translates into Unicode parlance as "the dot matches any single Unicode code point". In Unicode, à can be encoded as two code points: U+0061 (a) followed by U+0300 (grave accent). In this situation, . applied to à will match a without the accent. ^.$ will fail to match, since the string consists of two code points. ^..$ matches à. 
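A quick way to see this code-point behavior is Python's standard re module, used here purely as an illustration; any code-point-based engine behaves the same way:

```python
import re

# à encoded as two code points: U+0061 (a) + U+0300 (combining grave accent)
s = "a\u0300"

print(len(s))                               # 2 code points
print(re.fullmatch(r".", s))                # None: the dot consumes only one code point
print(re.fullmatch(r"..", s) is not None)   # True: two dots cover both code points
print(re.match(r".", s).group() == "a")     # True: the dot matched the base letter alone
```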

The Unicode code point U+0300 (grave accent) is a combining mark. Any code point that is not a combining mark can be followed by any number of combining marks. This sequence, like U+0061 U+0300 above, is displayed as a single grapheme on the screen. 

Unfortunately, à can also be encoded with the single Unicode code point U+00E0 (a with grave accent). The reason for this duality is that many historical character sets encode "a with grave accent" as a single character. Unicode's designers thought it would be useful to have a one-on-one mapping with popular legacy character sets, in addition to the Unicode way of separating marks and base letters (which makes arbitrary combinations not supported by legacy character sets possible). 
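The two encodings can be converted into each other with Unicode normalization, which Python's standard unicodedata module illustrates (NFC composes, NFD decomposes):

```python
import unicodedata

composed = "\u00E0"      # à as a single code point (the legacy-compatible form)
decomposed = "a\u0300"   # à as base letter + combining grave accent

print(composed == decomposed)                                # False: different code points
print(unicodedata.normalize("NFC", decomposed) == composed)  # True: NFC composes marks
print(unicodedata.normalize("NFD", composed) == decomposed)  # True: NFD separates them
```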

How to Match a Single Unicode Grapheme : 
Matching a single grapheme, whether it's encoded as a single code point, or as multiple code points using combining marks, is easy in Perl, RegexBuddy and PowerGREP: simply use \X. You can consider \X the Unicode version of the dot in regex engines that use plain ASCII. There is one difference, though: \X always matches line break characters, whereas the dot does not match line break characters unless you enable the dot matches newline matching mode. 

Java and .NET unfortunately do not support \X (yet). Use \P{M}\p{M}*+ or (?>\P{M}\p{M}*) as a reasonably close substitute. To match any number of graphemes, use (?>\P{M}\p{M}*)+ as a substitute for \X+.
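Python's built-in re supports neither \X nor \p{M}, but the idea behind \P{M}\p{M}* can be sketched with unicodedata general categories. This is a simplified illustration only, not a full grapheme-cluster algorithm (it ignores, for example, Hangul jamo and ZWJ sequences):

```python
import unicodedata

def graphemes(s):
    # Split s the way \P{M}\p{M}* would: each non-mark code point
    # collects the combining marks (general category M*) that follow it.
    clusters = []
    for ch in s:
        if clusters and unicodedata.category(ch).startswith("M"):
            clusters[-1] += ch   # attach the combining mark to the previous base
        else:
            clusters.append(ch)  # start a new cluster
    return clusters

print(len(graphemes("a\u0300bc")))  # 3 graphemes, although the string has 4 code points
```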

Matching a Specific Code Point : 
To match a specific Unicode code point, use \uFFFF where FFFF is the hexadecimal number of the code point you want to match. You must always specify 4 hexadecimal digits. E.g. \u00E0 matches à, but only when encoded as the single code point U+00E0.

In Java, the regex token \uFFFF only matches the specified code point, even when you turned on canonical equivalence. However, the same syntax \uFFFF is also used to insert Unicode characters into literal strings in the Java source code. Pattern.compile("\u00E0") will match both the single-code-point and double-code-point encodings of à, while Pattern.compile("\\u00E0") matches only the single-code-point version. Remember that when writing a regex as a Java string literal, backslashes must be escaped. The former Java code compiles the regex à, while the latter compiles \u00E0. Depending on what you're doing, the difference may be significant. 
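Python happens to have the same two-level escaping: both its string literals and its re engine understand \uFFFF, so it can illustrate the distinction (note that Python's re has no equivalent of Java's CANON_EQ, so neither form matches the decomposed encoding):

```python
import re

# The string-literal escape puts the actual character à into the pattern source.
literal = re.compile("\u00E0")
# The raw string hands the six characters \u00E0 to re, which parses the escape itself.
escaped = re.compile(r"\u00E0")

print(bool(literal.fullmatch("\u00E0")))   # True
print(bool(escaped.fullmatch("\u00E0")))   # True
print(bool(escaped.fullmatch("a\u0300")))  # False: no canonical equivalence in re
```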

Unicode Character Properties : 
In addition to complications, Unicode also brings new possibilities. One is that each Unicode character belongs to a certain category. You can match a single character belonging to a particular category with \p{}. You can match a single character not belonging to a particular category with \P{}.

Again, "character" really means "Unicode code point". \p{L} matches a single code point in the category "letter". If your input string is à encoded as U+0061 U+0300, it matches a without the accent. If the input is à encoded as U+00E0, it matches à with the accent. The reason is that both the code points U+0061 (a) and U+00E0 (à) are in the category "letter", while U+0300 is in the category "mark". 
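You can inspect these general categories with Python's unicodedata module, which is a handy way to check what \p{L} or \p{M} would match for a given code point:

```python
import unicodedata

print(unicodedata.category("a"))       # 'Ll': lowercase letter, matched by \p{L}
print(unicodedata.category("\u00E0"))  # 'Ll': à as one code point is also a letter
print(unicodedata.category("\u0300"))  # 'Mn': combining mark, matched by \p{M}, not \p{L}
```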

You should now understand why \P{M}\p{M}* is the equivalent of \X. \P{M} matches a code point that is not a combining mark, while \p{M}* matches zero or more code points that are combining marks. To match a letter including any diacritics, use \p{L}\p{M}*. This last regex will always match à, regardless of how it is encoded. 

In addition to the standard notation, \p{L}, Java, Perl, PCRE and the JGsoft engine allow you to use the shorthand \pL. The shorthand only works with single-letter Unicode properties. \pLl is not the equivalent of \p{Ll}. It is the equivalent of \p{L}l which matches Al or àl or any Unicode letter followed by a literal l.

Perl and the JGsoft engine also support the longhand \p{Letter}. You can find a complete list of all Unicode properties below. You may omit the underscores or use hyphens or spaces instead. 
\p{L} or \p{Letter}: any kind of letter from any language. 
* \p{Ll} or \p{Lowercase_Letter}: a lowercase letter that has an uppercase variant.
* \p{Lu} or \p{Uppercase_Letter}: an uppercase letter that has a lowercase variant.
* \p{Lt} or \p{Titlecase_Letter}: a letter that appears at the start of a word when only the first letter of the word is capitalized.
* \p{L&} or \p{Cased_Letter}: a letter that exists in lowercase and uppercase variants (combination of Ll, Lu and Lt).
* \p{Lm} or \p{Modifier_Letter}: a special character that is used like a letter.
* \p{Lo} or \p{Other_Letter}: a letter or ideograph that does not have lowercase and uppercase variants.

\p{M} or \p{Mark}: a character intended to be combined with another character (e.g. accents, umlauts, enclosing boxes, etc.). 
* \p{Mn} or \p{Non_Spacing_Mark}: a character intended to be combined with another character without taking up extra space (e.g. accents, umlauts, etc.).
* \p{Mc} or \p{Spacing_Combining_Mark}: a character intended to be combined with another character that takes up extra space (vowel signs in many Eastern languages).
* \p{Me} or \p{Enclosing_Mark}: a character that encloses the character it is combined with (circle, square, keycap, etc.).

\p{Z} or \p{Separator}: any kind of whitespace or invisible separator. 
* \p{Zs} or \p{Space_Separator}: a whitespace character that is invisible, but does take up space.
* \p{Zl} or \p{Line_Separator}: line separator character U+2028.
* \p{Zp} or \p{Paragraph_Separator}: paragraph separator character U+2029.

\p{S} or \p{Symbol}: math symbols, currency signs, dingbats, box-drawing characters, etc. 
* \p{Sm} or \p{Math_Symbol}: any mathematical symbol.
* \p{Sc} or \p{Currency_Symbol}: any currency sign.
* \p{Sk} or \p{Modifier_Symbol}: a combining character (mark) as a full character on its own.
* \p{So} or \p{Other_Symbol}: various symbols that are not math symbols, currency signs, or combining characters.

\p{N} or \p{Number}: any kind of numeric character in any script. 
* \p{Nd} or \p{Decimal_Digit_Number}: a digit zero through nine in any script except ideographic scripts.
* \p{Nl} or \p{Letter_Number}: a number that looks like a letter, such as a Roman numeral.
* \p{No} or \p{Other_Number}: a superscript or subscript digit, or a number that is not a digit 0..9 (excluding numbers from ideographic scripts).

\p{P} or \p{Punctuation}: any kind of punctuation character. <-- commonly used 
* \p{Pd} or \p{Dash_Punctuation}: any kind of hyphen or dash.
* \p{Ps} or \p{Open_Punctuation}: any kind of opening bracket.
* \p{Pe} or \p{Close_Punctuation}: any kind of closing bracket.
* \p{Pi} or \p{Initial_Punctuation}: any kind of opening quote.
* \p{Pf} or \p{Final_Punctuation}: any kind of closing quote.
* \p{Pc} or \p{Connector_Punctuation}: a punctuation character such as an underscore that connects words.
* \p{Po} or \p{Other_Punctuation}: any kind of punctuation character that is not a dash, bracket, quote or connector.

\p{C} or \p{Other}: invisible control characters and unused code points. 
* \p{Cc} or \p{Control}: an ASCII 0x00..0x1F or Latin-1 0x80..0x9F control character.
* \p{Cf} or \p{Format}: invisible formatting indicator.
* \p{Co} or \p{Private_Use}: any code point reserved for private use.
* \p{Cs} or \p{Surrogate}: one half of a surrogate pair in UTF-16 encoding.
* \p{Cn} or \p{Unassigned}: any code point to which no character has been assigned.

Unicode Scripts : 
The Unicode standard places each assigned code point (character) into one script. A script is a group of code points used by a particular human writing system. Some scripts like Thai correspond with a single human language. Other scripts like Latin (\p{Latin}) span multiple languages. 

Some languages are composed of multiple scripts. There is no Japanese Unicode script. Instead, Unicode offers the Hiragana, Katakana, Han and Latin scripts that Japanese documents are usually composed of. 

A special script is the Common script. This script contains all sorts of characters that are common to a wide range of scripts. It includes all sorts of punctuation, whitespace and miscellaneous symbols. 

All assigned Unicode code points (those matched by \P{Cn}) are part of exactly one Unicode script. All unassigned Unicode code points (those matched by \p{Cn}) are not part of any Unicode script at all. 

Very few regular expression engines support Unicode scripts today. Of all the flavors discussed in this tutorial, only the JGsoft engine, Perl and PCRE can match Unicode scripts. 

Unicode Blocks : 
The Unicode standard divides the Unicode character map into different blocks or ranges of code points. Each block is used to define characters of a particular script like "Tibetan" or belonging to a particular group like "Braille Patterns". Most blocks include unassigned code points, reserved for future expansion of the Unicode standard. 

Note that Unicode blocks do not correspond 100% with scripts. An essential difference between blocks and scripts is that a block is a single contiguous range of code points. Scripts consist of characters taken from all over the Unicode character map. Blocks may include unassigned code points (i.e. code points matched by \p{Cn}). Scripts never include unassigned code points. Generally, if you're not sure whether to use a Unicode script or Unicode block, use the script. 

E.g. the Currency block does not include the dollar and yen symbols. Those are found in the Basic_Latin and Latin-1_Supplement blocks instead, for historical reasons, even though both are currency symbols, and the yen symbol is not a Latin character. You should not blindly use any of the blocks based on their names. A tool like RegexBuddy can be very helpful with this. E.g. the Unicode property \p{Sc} or \p{Currency_Symbol} would be a better choice than the Unicode block \p{InCurrency} when trying to find all currency symbols. 
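Python's unicodedata module confirms that the dollar, yen and euro signs all carry the Sc (Currency_Symbol) category even though they sit in three different blocks:

```python
import unicodedata

# All three are Currency_Symbol (Sc), regardless of which block they live in.
print(unicodedata.category("$"))       # 'Sc' (Basic_Latin block)
print(unicodedata.category("\u00A5"))  # 'Sc' (yen sign, Latin-1_Supplement block)
print(unicodedata.category("\u20AC"))  # 'Sc' (euro sign, Currency_Symbols block)
```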

Not all Unicode regex engines use the same syntax to match Unicode blocks. Perl and Java use the \p{InBlock} syntax as listed above. .NET and XML use \p{IsBlock} instead. The JGsoft engine supports both notations. I recommend you use the "In" notation if your regex engine supports it. "In" can only be used for Unicode blocks, while "Is" can also be used for Unicode properties and scripts, depending on the regular expression flavor you're using. By using "In", it's obvious you're matching a block and not a similarly named property or script. 

Do You Need To Worry About Different Encodings? 
While you should always keep in mind the pitfalls created by the different ways in which accented characters can be encoded, you don't always have to worry about them. If you know that your input string and your regex use the same style, then you don't have to worry about it at all. If they may use different styles, convert both to the same style before matching. This process is called Unicode normalization. All programming languages with native Unicode support, such as Java, C# and VB.NET, have library routines for normalizing strings. If you normalize both the subject and regex before attempting the match, there won't be any inconsistencies.
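As a sketch of this normalize-then-match approach in Python (assuming the pattern is literal text; normalizing an arbitrary regex containing metacharacters is not guaranteed to preserve its meaning):

```python
import re
import unicodedata

def search_normalized(pattern, subject, form="NFC"):
    # Normalize both sides to the same form so composed and
    # decomposed encodings of the same grapheme compare equal.
    return re.search(unicodedata.normalize(form, pattern),
                     unicodedata.normalize(form, subject))

# à as U+00E0 now finds à as U+0061 U+0300, and vice versa.
print(bool(search_normalized("\u00E0", "a\u0300")))  # True
print(bool(search_normalized("a\u0300", "\u00E0")))  # True
```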

If you are using Java, you can pass the CANON_EQ flag as the second parameter to Pattern.compile(). This tells the Java regex engine to consider canonically equivalent characters as identical. E.g. the regex à encoded as U+00E0 will match à encoded as U+0061 U+0300, and vice versa. None of the other regex engines currently support canonical equivalence while matching.

Monday, August 27, 2012

[ MySQL Basics ] Inner/Left/Right/Full Join Explained


Reference from here
Preface :
This post briefly introduces the easily confused Inner/Left/Right/Full joins in databases. First, suppose we have two tables :
- Table test1 :


- Table test2:


The explanations below use these two tables as examples.

SQL INNER JOIN Keyword :
Use INNER JOIN table_name ON condition to return rows only when the two joined tables have at least one matching column. For example, to find the rows in 'test1' whose id column appears in the pid column of 'test2', and to list the id, name, phone and country columns, we can do the following :


The result set contains a row only when test1.id and test2.pid both exist in 'test1' and 'test2' and are equal.

SQL LEFT JOIN Keyword :
Use LEFT JOIN table_name ON condition to output every record from the left table into the result set. Taking the previous example and replacing "INNER JOIN" with "LEFT JOIN" gives :


Notice that the record with id=4 exists in 'test1' but has no matching pid in 'test2', so its country column is NULL.
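Since the original table screenshots did not survive, here is a self-contained sketch using Python's sqlite3 to contrast the INNER and LEFT behavior. The rows (ids 1, 2, 4 in test1; pids 1, 2, 5 in test2) are made-up assumptions, not the original data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE test1 (id INTEGER, name TEXT, phone TEXT);
    CREATE TABLE test2 (pid INTEGER, country TEXT);
    INSERT INTO test1 VALUES (1, 'john', '111'), (2, 'peter', '222'), (4, 'mary', '444');
    INSERT INTO test2 VALUES (1, 'TW'), (2, 'US'), (5, 'JP');
""")

inner = conn.execute(
    "SELECT t1.id, t1.name, t2.country FROM test1 t1 "
    "INNER JOIN test2 t2 ON t1.id = t2.pid ORDER BY t1.id").fetchall()
print(inner)  # only ids present in both tables survive

left = conn.execute(
    "SELECT t1.id, t1.name, t2.country FROM test1 t1 "
    "LEFT JOIN test2 t2 ON t1.id = t2.pid ORDER BY t1.id").fetchall()
print(left)   # id=4 is kept, with country = None (NULL)
```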

SQL RIGHT JOIN Keyword :
Use RIGHT JOIN table_name ON condition to output every record from the right table into the result set. Taking the INNER JOIN example, replacing "INNER JOIN" with "RIGHT JOIN" and adding the "pid" column gives :


This time every record from the right table 'test2' appears in the result set, but because the record with pid=5 has no corresponding record in 'test1' (pid=id=5), its id, name and phone are NULL.

SQL FULL JOIN Keyword :
As for FULL JOIN, it is the union of LEFT JOIN and RIGHT JOIN. However, I am using MySQL for this demonstration and it does not support FULL JOIN, so I simulate the same result with UNION instead :


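The same UNION trick can be sketched with Python's sqlite3 (which, before SQLite 3.39, also lacked FULL JOIN). The rows are again made up, and the RIGHT side is emulated by swapping the tables in a LEFT JOIN:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE test1 (id INTEGER, name TEXT);
    CREATE TABLE test2 (pid INTEGER, country TEXT);
    INSERT INTO test1 VALUES (1, 'john'), (4, 'mary');
    INSERT INTO test2 VALUES (1, 'TW'), (5, 'JP');
""")

# FULL JOIN = LEFT JOIN  UNION  (RIGHT JOIN emulated by swapping the tables).
# UNION also removes duplicate rows, so matched rows appear only once.
full = conn.execute("""
    SELECT t1.id, t1.name, t2.country
      FROM test1 t1 LEFT JOIN test2 t2 ON t1.id = t2.pid
    UNION
    SELECT t1.id, t1.name, t2.country
      FROM test2 t2 LEFT JOIN test1 t1 ON t1.id = t2.pid
""").fetchall()
print(len(full))  # 3 rows: matched, left-only, and right-only
```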
Supplement :
w3schools.com : SQL Joins
SQL joins are used to query data from two or more tables, based on a relationship between certain columns in these tables...

w3schools.com : SQL INNER JOIN Keyword
The INNER JOIN keyword returns rows when there is at least one match in both tables...

w3schools.com : SQL LEFT JOIN Keyword
The LEFT JOIN keyword returns all rows from the left table (table_name1), even if there are no matches in the right table (table_name2)...

w3schools.com : SQL FULL JOIN Keyword
The FULL JOIN keyword returns rows when there is a match in one of the tables...

Thursday, August 23, 2012

[Linux FAQ] How to fix "Device is busy" when running umount


Source from here
Preface :
When a directory is mounted and some process is using or sitting on it, the directory cannot be unmounted; umount prints the message Device is busy. The question is: how do we find out which process is holding that directory, so we can kill it? Consider the fuser command.

Example :
So how do we find out which process is holding the directory? Use "fuser - identify processes using files or sockets". Here we simulate the situation: first, mount /dev/sdf1 on /johnext :
$ sudo mount -t ntfs /dev/sdf1 /johnext # adjust the mount command to your actual situation

Then change directory into the mount point and run the Python script below to simulate a process that "refuses to die" :
- hook.py :
#!/usr/bin/python
import time

print("Hook and don't exit!")
while True:
    print("Sleep 1 sec...")
    time.sleep(1)
Then run it as follows :
$ nohup ./hook.py &
[1] 26674

Then log in to the host from another tty and try to umount /johnext :
$ sudo umount /johnext/
umount: /johnext: device is busy.
(In some cases useful info about processes that use
the device is found by lsof(8) or fuser(1))
$ fuser /johnext/ # check which processes are running on the mount point.
/johnext: 26674c
$ ps -aux | grep 26674
nlg 26674 0.0 0.0 10556 3720 pts/1 S 16:35 0:00 /usr/bin/python ./hook.py # so it is the hook.py we launched earlier

That 26674 is the PID, and the meaning of the trailing c is listed below :
c: current directory.
e: executable being run.
f: open file. f is omitted in default display mode.
F: open file for writing. F is omitted in default display mode.
r: root directory.
m: mmap'ed file or shared library.

Once you know which processes prevent the umount, and you have confirmed they are trivial processes that are safe to kill, use kill -9 to terminate them :
$ sudo kill -9 26674
$ sudo umount /johnext/ # the umount should now succeed

[Linux FAQ] Mount HPFS/NTFS/exFAT (Id=7)

Preface : 
Today I brought a 1 TB external drive and wanted to mount it on my Ubuntu box, but: 
1. first, I didn't know which disk device to mount after plugging in the external drive,
2. second, I didn't know which file system type to pass to mount. ><"

So I'm recording today's steps as a reference for later.

Locate the external drive : 
The first step, of course, is to find the mount source. Disk device file names in Linux depend on the disk's interface : 
* /dev/sd[a-p][1-15]: disk file names for SCSI, SATA, USB, flash-drive and similar interfaces;
* /dev/hd[a-d][1-63]: disk file names for the IDE interface

My external drive uses USB, so before plugging it in, let's ls what's under /dev/ : 
# ls -hl /dev/sd*
brw-rw---- 1 root disk 8, 0 2012-08-12 01:05 /dev/sda
brw-rw---- 1 root disk 8, 16 2012-08-12 01:05 /dev/sdb
brw-rw---- 1 root disk 8, 32 2012-08-12 01:05 /dev/sdc
brw-rw---- 1 root disk 8, 48 2012-08-12 01:05 /dev/sdd
brw-rw---- 1 root disk 8, 64 2012-08-12 01:05 /dev/sde
brw-rw---- 1 root disk 8, 65 2012-08-12 01:05 /dev/sde1
brw-rw---- 1 root disk 8, 66 2012-08-12 01:05 /dev/sde2
brw-rw---- 1 root disk 8, 69 2012-08-12 01:05 /dev/sde5

After plugging it in, wait about five seconds and look at what has appeared under /dev/ : 
# ls -hl /dev/sd*
...(omitted)...
brw-rw---- 1 root disk 8, 80 2012-08-23 14:56 /dev/sdf
brw-rw---- 1 root disk 8, 81 2012-08-23 15:10 /dev/sdf1

So /dev/sdf or /dev/sdf1 is the source I want to mount. 

Check the file system type : 
When using the mount command we must specify the source file system type with the -t option; most of the time this is ext3. Unfortunately, that doesn't work here : 
$ sudo mount -t ext3 /dev/sdf1 /johnext/
mount: wrong fs type, bad option, bad superblock on /dev/sdf1,
missing codepage or helper program, or other error
In some cases useful info is found in syslog - try
dmesg | tail or so

So we use the fdisk command to see which file system type it actually is : 
$ sudo fdisk -l # sudo is needed for root privileges.
Disk /dev/sdb: 2000.4 GB, 2000398934016 bytes
255 heads, 63 sectors/track, 243201 cylinders, total 3907029168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk identifier: 0x00000000
...(omitted)...
/dev/sdf1 2048 1953523119 976760536 7 HPFS/NTFS/exFAT

So /dev/sdf1 uses NTFS (Id=7); the file types corresponding to other Ids can be found here. Everything is ready now, so we create /johnext and mount the drive on it : 
$ sudo mkdir /johnext # create the mount point (root is needed to create a directory under /)
$ sudo mount -t ntfs /dev/sdf1 /johnext # mount /dev/sdf1 on /johnext.
$ ls /johnext/ # check whether the mount succeeded


Supplement : 
鳥哥私房菜 : Chapter 8, Linux Disk and File System Management 
Use df to check the file system format 
NTFS-3G - mount NTFS on Linux with read and write support 
Access NTFS disks on Linux with ntfs-3g 
Linux: how to fix "Device is busy" when running umount

[Git FAQ] error: The following untracked working tree files would be overwritten by merge

  Source From Here
Solution 1:
// x ----- remove files ignored by git as well as files git does not recognize
// d ----- remove untracked directories along with untracked files
// f ----- force the clean
#   git clean -d -fx
Solution 2: Today on the server gi...