Thursday, August 19, 2010

Software Testing

Developers rarely enjoy working on software testing because it feels boring: unlike writing code, which demands creativity, testing feels like repetitive labor. To help (or force) developers to overcome this laziness, Agile methodology's TDD (Test-Driven Development) requires you to write tests based on the requirements before you can start the actual coding.

OK, software testing is not in my job description, but because there are no testers or QA staff in our development firm, every developer has to do his own testing. Consequently we only have automated tests, and as the SCMer I need to formalize them as part of build automation, so I need to understand the subject a bit better.

Software testing is a complicated and time-consuming discipline; what I'm addressing here will not be entirely correct, it is just my understanding at the moment. I'm writing it down so I can revisit and revise it in the future.


All our testing is done by the developers themselves and fully automated, hence it is all white-box testing.

Unit testing verifies the functionality of a specific section of code at the function/class level. It is the most fundamental kind of test; we have used it since the first year of computer science study. Unfortunately it is still the only kind of test some of our projects have, and a few projects don't even have proper unit tests.

Unit testing frameworks simplify the process of unit testing: they provide assertions, exception handling and other control-flow mechanisms that make adding unit tests relatively easy. We use the Boost Test Library for C++ and NUnit for C#, and some of our cool kids have started using Google Test.


One of the tasks I have been working on is code coverage, which measures how much of your source code is exercised by your unit tests. In other words, unit tests test the code, and coverage tests measure the unit tests. We use gcov for C++ projects (with lcov as an extension to generate reports) and NCover for C# projects.

Three common metrics we get from code coverage are:
  • Function coverage - how many times each function in the program was called while exercising the unit tests.
  • Statement coverage - how many times each line was visited.
  • Branch coverage - was every edge/decision/condition executed?


Unit testing is a form of dynamic testing; there are other kinds of dynamic testing such as integration tests, system tests and acceptance tests. We don't have most of them, and since our software is in-house only, our end users kindly act as testers and do all of these tests for us.


In contrast to dynamic testing there is static testing, or as we call it, code review: primarily just walking through the code manually. In my previous company we were required to have a "buddy check" before any commit; in my current one we are not that strict, but people will still ask someone on the same team to review. This helps collaboration by letting other team members understand what you are working on, thereby improving both the overall quality of the code and the developers' skills.

ReviewBoard is the tool I deployed to support pre-commit review; it is a relatively new open-source tool, so don't expect it to be perfect. I also configured a post-commit mailer for Subversion to email the code to reviewers automatically after a check-in is made.


Static testing without human analysis is called static code analysis, and it is performed by an automated tool. Coverity is one such tool, identifying security vulnerabilities and code defects; some say it is very powerful, but it is also very expensive - it charges per line of code analysed. I wonder whether it is worth the cost, since I have only glanced at it so far; over the next few days I will take some time to deploy and test it.

There are two other static analysis tools we use for our C# projects, both from Microsoft and both free (huh?). FxCop analyses .NET programs compiled to CIL object code, and StyleCop analyses C# source code to enforce a set of style rules. Although FxCop has been running for a long time and its results appeared similar to what we got from Coverity, no one has ever cared about them. Because the tools are from Microsoft and free, most of our developers assume they must be weak? What a pity.


These are just the tip of the iceberg in the entire software-testing empire, but they cover most of what an SCMer needs to know and take care of. If your boss asks you to do anything more than the tests that can be automated, you should suggest he/she hire some more experienced testers and set up a testing team.

Remember, as the hub of the wheel we help but we don't do everything.

Wednesday, August 4, 2010

Clash of Titans: G.A.M on!

From the first round until now, most of the challengers have adopted Lord G's (Google's) Android system. On a single handset the iPhone may be invincible, but Lord G, like Emperor M (Microsoft) in the last century, is deploying human-wave tactics, making Chief Jobs's history repeat itself once more.

The Fruit company's (Apple's) iOS versus Lord G's Android is just like the MacOS-versus-Windows struggle of the last century. MacOS was a founder of personal computing, hailed as the iPhone of 1984, unmatched in both creativity and prominence. But the system shipped only on Apple's own hardware, and that hardware could run no other system - software and hardware were never decoupled - so it was overtaken by the late-coming Windows. Emperor M did software only, not hardware, forming the Wintel alliance with Intel; every hardware maker except the Fruit camp used its system, and in the end it won on sheer volume.

Like Windows, Lord G's Android claimed neither the creativity nor the first-mover advantage, yet it is now steadily seizing the market: the more handset makers adopt it, the more models there are to choose from, hence more users, then more developers, more content, until it blankets everything. The Fruit company releases one iPhone a year and wins on refinement; it can hold its own territory, but fighting one against a hundred to dominate the market will be very hard.

I wonder whether Chief Jobs will let himself walk the same old road of decades ago. Android, attacking with numbers, will gain the upper hand, but reaching the kind of monopoly Windows holds in the PC market is probably beyond it too. After all, Emperor M's Windows Mobile still holds a fair share of the phone platform; compared with the PC's OS landscape, the mobile OS world is more likely to end up a three-way split, with Symbian, RIM and other spoilers stirring the pot.

Although RIM is still the best-selling smartphone OS within the United States, it too is a kind of fruit (the BlackBerry), so its proprietary OS ships only on its own handsets; moreover its original positioning was too enterprise-oriented and lacks mass-market appeal, so its share keeps sliding, devoured by iOS and Android. Its final form will probably be like the dedicated server OSes of the PC world: holding on to its little plot in the business district.

Symbian, the earliest smartphone OS, and Nokia, the biggest handset maker, still occupy more than half of the smartphone market, yet they are considered unable to keep pace with the times - one can only sigh that every era has its own champions, and the old wave dies of complacency. As for the other Linux-based systems (Intel's MeeGo, Samsung's bada), without more innovative ideas or solid strength, finding a crack to squeeze through will be extremely difficult.


It used to be said that the Fruit company stood for hardware, Emperor M for software, and Lord G for the network; but now the three compete head-on, no longer confined to their own specialties, each gradually eroding the others' markets, and the contest over phone platforms is only one part of it. Today's war is total war - see the next installment: G.A.M: Total War.

Tuesday, August 3, 2010

Media Distortion

I am no Apple fanboy; I followed Emperor M (Microsoft) for a while, but no longer, and my use of Lord G (Google) is basically limited to its search engine, so I am relatively neutral. The Western tradition prizes balance of power and dislikes seeing any one bloc grow dominant, which presumably stems from Europe's one-nation-per-people history. China, by contrast, traditionally yearns for unification, fond of great-power thinking and big-enterprise mentality, which is why monopolies arise there so easily.

Back when Emperor M was still Emperor M, the media criticized it plenty; the smallest problem would be blown out of all proportion. Why? If its products were truly bad, could it have held over 90% of the world market on marketing strategy and sales tactics alone? But since the market dislikes monopoly, the media, as the consumer's compass, naturally had to mislead a little - how else to correct course? China's mainstream media is different: it only trumpets. Why? It is all under control.

So media distortion is not entirely bad: nudging the market's compass benefits the whole, since competition is the primary engine of creativity. Without the appearance of Firefox, IE would still be at version 6. Exaggeration and embellishment are the media's standing practice, which is understandable - without some dramatization, readers feel no emotional pull and lose interest. The trouble is that the average human has little innate capacity for self-discernment, so blind following and flame wars are part of the scenery of this gregarious society. Don't think the wumao (fifty-cent) party and the five-US-cent party exist only in China; in the West they merely exist in another form.


The Fruit company's iPhone "Antennagate" affair (literally "Teletubby-gate", a pun on "antenna") is another display of human nature. Perhaps the iPhone really does have a design flaw that Apple refuses to admit outright; perhaps the masses simply cannot accept an iPhone that falls short of its usual perfection; either way the story was blown out of proportion by the media. First, the market dislikes one side growing dominant; second, flame wars and bandwagoning are treated as entertainment. The result? The queues kept queuing, the stock kept selling out, and the free bumper cases were the biggest benefit of the whole uproar.

In fact, ever since the iPhone appeared, "iPhone killers" have kept taking the stage. I remember ChannelWeb used to run a comparison for each generation (the number of rounds won is in brackets):
  1. Round one: iPhone 3G (6) vs T-Mobile G1 (5) vs BlackBerry Storm (3)
  2. Round two: iPhone 3GS (8) vs Palm Pre (6)
  3. Round three: iPhone 4 (10) vs HTC Evo 4G (6)
  4. Round four: iPhone 4 (9) vs Motorola Droid X (6)
Even if the media wanted to be biased, these hard-scored results still show fairness: the iPhone won every round. Some rounds look close, but who now remembers Palm Pre, once the biggest "killer" of them all? Or Lord G's own Nexus One, now discontinued? Those two killers left the deepest impression on me; the best-known rivals today are the HTC Evo 4G and the Motorola Droid X, though neither is sold in Australia - here it is the HTC Desire and the Samsung Galaxy S i9000.

In the first two rounds the market needed the media to talk up the rivals a little, lest the iPhone's momentum grow too strong; but now the opponents come thicker and faster. It used to be hard to find one rival in a whole year; now several competitors appear every quarter. On a single handset the iPhone has no equal, but two fists cannot beat many hands, and at the level of the whole market the Fruit company may well repeat history. See the next installment: Clash of Titans: G.A.M on!