Repost: [C++] how to think about OO

This is quite a good article: it shows how to think when you write OO code, and it may influence how you refactor your code.

http://googletesting.blogspot.com/2009/07/how-to-think-about-oo.html

A good passage from it:
everyone always writes code that one day they will reuse it, but that day never comes, and when it does, usually the code is entangled in other ways anyway, so code reuse after the fact just does not happen. (developing a library is different since code reuse is an explicit goal.) My point is that a lot of people pay the price of “what if” but never get any benefit out of it.

Don't think too far ahead, which can lead to over-design; I think we should refactor the code regularly instead!


[Erlang] Message passing mechanism

You have to be very careful about using message passing; the overhead can be much higher than you think. Here is the situation:

Comparison version (B sends one reply per request):

A ———–> B
A <———– B

Original version (B sends two replies per request):

A ———–> B
A <———– B
A <———– B

The original version can be almost ten times slower than the comparison version!
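Here is a rough sketch of how one might compare the two shapes in Erlang. The module name, function names and message terms are illustrative, and the actual ratio depends heavily on the workload, so treat any numbers as indicative only.

%% msg_bench.erl -- compare one-reply vs. two-reply round trips
-module(msg_bench).
-export([run/1]).

server() ->
    receive
        {one_reply, From}   -> From ! done, server();
        {two_replies, From} -> From ! ack, From ! done, server();
        stop                -> ok
    end.

%% comparison version: one message back per request
call1(Pid) ->
    Pid ! {one_reply, self()},
    receive done -> ok end.

%% original version: an extra ack message back per request
call2(Pid) ->
    Pid ! {two_replies, self()},
    receive ack -> ok end,
    receive done -> ok end.

run(N) ->
    Pid = spawn(fun server/0),
    {T1, _} = timer:tc(fun() -> [call1(Pid) || _ <- lists:seq(1, N)] end),
    {T2, _} = timer:tc(fun() -> [call2(Pid) || _ <- lists:seq(1, N)] end),
    Pid ! stop,
    {T1, T2}.   %% microseconds spent on N round trips of each shape

From the Erlang shell, compile with c(msg_bench). and call msg_bench:run(100000). to get the two timings.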

[Erlang] Diffie-Hellman Key Exchange

Recently I tried to implement the Diffie-Hellman key exchange protocol, and I found that Erlang's crypto library provides the dh_generate_key & dh_compute_key APIs.


Types
byte() = 0 ... 255
ioelem() = byte() | binary() | iolist()
iolist() = [ioelem()]
Mpint() = <<ByteLen:32/integer-big, Bytes:ByteLen/binary>>
dh_generate_key(DHParams) -> {PublicKey,PrivateKey}
dh_generate_key(PrivateKey, DHParams) -> {PublicKey,PrivateKey}
  • DHParams = [P, G]
  • P, G = Mpint(), where P is the shared prime number and G is the shared generator.
  • PublicKey, PrivateKey = Mpint()

Generates a Diffie-Hellman PublicKey and PrivateKey (if not given).

dh_compute_key(OthersPublicKey, MyPrivateKey, DHParams) -> SharedSecret
  • DHParams = [P, G]
  • P, G = Mpint(), where P is the shared prime number and G is the shared generator.
  • OthersPublicKey, MyPrivateKey = Mpint()
  • SharedSecret = binary()

Computes the shared secret from the private key and the other party’s public key.

So this is how to use the API:

crypto:start(),

% shared parameters [P, G]; you can use dh_generate_parameters(512, 2) to generate them
DHParams = [<<LenP:32, BinP/binary>>, <<LenG:32, BinG/binary>>],

% generate our own key pair
{<<LenPub:32, PubKey/binary>>, PrivKey} = crypto:dh_generate_key(DHParams),

% <<LenA:32, BinA/binary>> is the other party's public key (an mpint())
SessionKey = crypto:dh_compute_key(<<LenA:32, BinA/binary>>, PrivKey, DHParams),
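To see the whole exchange end to end, here is a minimal sketch in which a single Erlang shell plays both parties, using the same old crypto API described above (parameter generation can take a moment, and 512 bits is for illustration only):

crypto:start(),
% shared parameters [P, G], both encoded as mpint()
DHParams = crypto:dh_generate_parameters(512, 2),
% each party generates its own key pair from the shared parameters
{AlicePub, AlicePriv} = crypto:dh_generate_key(DHParams),
{BobPub,   BobPriv}   = crypto:dh_generate_key(DHParams),
% each party combines its private key with the other's public key
SecretA = crypto:dh_compute_key(BobPub, AlicePriv, DHParams),
SecretB = crypto:dh_compute_key(AlicePub, BobPriv, DHParams),
SecretA =:= SecretB.   % true: both sides derive the same shared secret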

For more information, you can check /usr/local/lib/erlang/lib/crypto-1.6.3/src/crypto.erl

GCC extension

The following example uses typeof to build a generic min() macro. The line (void) (&_min1 == &_min2); is there only to make the compiler emit a warning when the two argument types do not match.

#define min(x, y) ({				\
	typeof(x) _min1 = (x);			\
	typeof(y) _min2 = (y);			\
	(void) (&_min1 == &_min2);		\
	_min1 < _min2 ? _min1 : _min2; })
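A small usage sketch (self-contained, so the macro is repeated here): with matching argument types the pointer comparison is silent, while something like min(a, 2.5) triggers GCC's "comparison of distinct pointer types" warning.

#include <stdio.h>

#define min(x, y) ({				\
	typeof(x) _min1 = (x);			\
	typeof(y) _min2 = (y);			\
	(void) (&_min1 == &_min2);		\
	_min1 < _min2 ? _min1 : _min2; })

int main(void)
{
	int a = 3, b = 7;
	double u = 1.5, v = 0.5;
	printf("%d %.1f\n", min(a, b), min(u, v));	/* works for any matching type */
	return 0;
}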

Branch prediction hints

GCC's __builtin_expect() lets you tell the compiler which way a branch is expected to go, so it can lay out the hot path as the fall-through case; the Linux kernel wraps it in the likely()/unlikely() macros.
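A sketch of the kernel-style macros built on __builtin_expect(), with an illustrative function showing a typical use:

#define likely(x)   __builtin_expect(!!(x), 1)
#define unlikely(x) __builtin_expect(!!(x), 0)

int first_byte(const char *p)
{
	if (unlikely(p == NULL))	/* error path: tell GCC it is rarely taken */
		return -1;
	return p[0];			/* hot path stays on the fall-through side */
}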

There are many more optimization extensions described in the referenced article!

Reference:

GCC hacks in the Linux kernel

memcached info

Introduction:
http://blog.xdite.net/?p=1029

http://ihower.idv.tw/blog/archives/1768

 

Internal implementation:
http://tech.idv2.com/2008/07/10/memcached-001/

 

Updating data in the DB:
If we just delete the entry in memcached when we update the data in the DB, it can cause a "cache stampede": many clients may discover at the same time that the entry is missing from memcached, and all of them will query the DB, which causes a performance problem. So we create a lock in memcached; on a miss, every client checks whether the lock exists. If it does not, that client takes the lock, queries the DB, repopulates the cache and removes the lock when finished; the other clients wait a short period for the cache to be refilled.
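A rough sketch of that lock dance, assuming the python-memcached client; the key names, TTLs, retry loop and the load_from_db helper are illustrative, not from the original post.

# stampede protection with an "add" lock
import time
import memcache

mc = memcache.Client(["127.0.0.1:11211"])

def get_with_lock(key, load_from_db, ttl=300, lock_ttl=10):
    value = mc.get(key)
    if value is not None:
        return value
    # cache miss: only the client that wins the add() rebuilds the entry
    if mc.add(key + ":lock", "1", lock_ttl):
        try:
            value = load_from_db()          # single DB query
            mc.set(key, value, ttl)
        finally:
            mc.delete(key + ":lock")
        return value
    # somebody else is rebuilding: wait a bit and re-check the cache
    for _ in range(10):
        time.sleep(0.1)
        value = mc.get(key)
        if value is not None:
            return value
    return load_from_db()                   # give up waiting, fall back to the DB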

 

Data fragments vs. individual items:
We need to experiment when choosing what data to cache!

Now for this simple application, coding something like this is simply overkill, so let’s look at a more practical example: a photo sharing site that contains lists of photographs with links to detail pages.
Many such sites display paged lists, so 100 pictures in a list might be displayed as thumbnails with basic information, ten per page. Based on all of the coding we’ve done, it might seem reasonable to go get a list of one page of data from the database, cache the resulting list and then get the second page and cache that list and so on. But there’s another way to handle that.

Start by getting the list of all 100 items that make up the list and cache that. Build the paged lists from that information, rather than going back to the database for every page. Now the same list serves someone who only wants to see 5 items per page as well as someone who wants to see 20, which is better than having to query the database 20 times for one person and another 5 times for the other, when all we're doing is showing the same data paged differently.

The most important thing to remember is that this is not an all-or-nothing approach. You may have data that lends itself to storing nothing except lists of pointers, and then store the individual data items separately. But you might just as easily have data where it makes more sense to store lists that contain data. This is one of those things you need to experiment with and see which makes more sense for your application.

From Using Memcached
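A rough sketch of that "cache the whole list, page it in memory" idea, again assuming the python-memcached client; the key name, TTL and the load_ids_from_db helper are illustrative.

import memcache

mc = memcache.Client(["127.0.0.1:11211"])

def get_photo_page(page, per_page, load_ids_from_db, ttl=300):
    ids = mc.get("photo_list")
    if ids is None:
        ids = load_ids_from_db()        # one query for the full list (e.g. all 100 ids)
        mc.set("photo_list", ids, ttl)
    start = (page - 1) * per_page
    return ids[start:start + per_page]  # any page size is served from the same cached entry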

Advanced:
http://blog.gslin.org/archives/2008/12/13/1884/

http://highscalability.com/blog/2009/10/26/facebooks-memcached-multiget-hole-more-machines-more-capacit.html

 

Multiget hole:
Adding a new memcached server doesn't help when the servers are CPU bound. What can happen when you add more servers is that the number of requests is not reduced; only the number of keys in each request is reduced.
Example: suppose you send 50 requests to 2 memcached servers, and each request uses multi_get to fetch 100 keys, so each server sees 50 requests of about 50 keys each. After you add one more memcached server, each server still sees the 50 requests; each request just carries about 33 keys. The requests are not reduced, so we've done absolutely nothing to reduce the usage of our scarce resource, which is CPU!
http://highscalability.com/blog/2009/10/26/facebooks-memcached-multiget-hole-more-machines-more-capacit.html

[Git] Setting up the environment

Installing Git on Ubuntu

$sudo apt-get install git-core git-buildpackage

Environment setup

First, pick a remote server A and create a git user on that server (e.g. git):

$sudo adduser git

Then:

$ssh git@REMOTE_SERVER (server A)

# once logged in
$mkdir example.git
$cd example.git
$git --bare init

Then go back to your own local client:

$mkdir example
$git init
$touch xx
$git add xx
$git commit -m 'first commit'
$git remote add origin git@REMOTE_SERVER:example.git
$git push origin master

After that, it is the same on the local side: git add, git commit, git push origin master.

If you want to check out an existing project:

$git clone git@192.168.1.67:xxx.git

That's it!

Reference: http://progit.org/book/ch4-4.html

[Unix] Debug method

No matter what kind of language you use, this method provides a way to debug. First, run the command "ulimit -c unlimited". Then just run your program; once it hits a segmentation fault it will generate a core-dump file, and you can then run gdb on it to start debugging!

gdb [options] [executable-file [core-file or process-id]]
ex:
gdb python core
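As a quick way to try this out, here is a tiny, deliberately broken C program; the file name crash.c is just for illustration, and on some distributions the core file may be named differently or be handled by a crash reporter.

/* crash.c -- dereferences NULL so it dies with SIGSEGV.
 *   $ ulimit -c unlimited
 *   $ gcc -g crash.c -o crash
 *   $ ./crash              # "Segmentation fault (core dumped)"
 *   $ gdb crash core       # then "bt" shows the faulting line
 */
#include <stdio.h>

int main(void)
{
	int *p = NULL;
	*p = 42;		/* NULL dereference -> segmentation fault */
	printf("never reached: %d\n", *p);
	return 0;
}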