Gray codes were invented to prevent spurious output from electromagnetic switches. Consider the number 3 in ordinary binary encoding: when it is incremented from 011 to 100, it is unlikely that all of the switches will change state at exactly the same time, so there is a chance of reading the variable mid-transition (i.e., when some of the bits have flipped to their new values but others have not).
To solve this problem, Frank Gray introduced a new number system in which successive numbers differ by only a single bit.
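The usual construction of such a system is the binary-reflected Gray code, in which a number is XOR-ed with itself shifted right by one bit. The problem statement does not name a construction, so the sketch below assumes the standard binary-reflected one (the helper name `to_gray` is my own):

```python
def to_gray(b):
    # Binary-reflected Gray encoding: XOR the number with
    # itself shifted right by one bit.
    return b ^ (b >> 1)

# Successive codes differ in exactly one bit:
for n in range(8):
    print(n, format(to_gray(n), "03b"))
```

Running the loop for 0 through 7 shows each code differing from its predecessor in a single bit position.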
You are working on a legacy system that uses Gray codes, but are having difficulty fixing a bug related to multiplication of numbers. For the purpose of debugging, you wish to convert a number's Gray encoding into the corresponding decimal representation.
The first line of input contains a single integer t, the number of test cases. This is followed by t lines, each containing a single integer: the value obtained by reading the Gray encoding directly as an ordinary binary number. (For example, the Gray encoding 100, which corresponds to 7, will appear in the input as 4.)
For each test case, output a single integer that is the decimal equivalent of the Gray encoding.
Sample Input:
8
0
1
2
3
4
5
6
7

Sample Output:
0
1
3
2
7
6
4
5
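Decoding reverses the encoding: each binary bit is the XOR of the Gray bit in the same position and all Gray bits above it, which can be accumulated by XOR-ing in successively right-shifted copies of the value. A minimal sketch of a full solution, assuming the standard binary-reflected code (the function names are my own):

```python
import sys

def gray_to_decimal(g):
    # Each binary bit is the XOR of the Gray bits at and above
    # its position, so XOR-ing in right-shifted copies of g
    # recovers the original number.
    b = 0
    while g:
        b ^= g
        g >>= 1
    return b

def main():
    data = sys.stdin.read().split()
    t = int(data[0])
    print("\n".join(str(gray_to_decimal(int(x))) for x in data[1:1 + t]))

if __name__ == "__main__":
    main()
```

On the sample input above, this prints 0 1 3 2 7 6 4 5, one value per line.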
Time Limit: 0.1 sec
Source Limit: 50000 Bytes
Languages: C, JAVA, PYTH, PYTH 3.6, CS2, PAS fpc, PAS gpc, RUBY, PHP, GO, HASK, SCALA, D, PERL, FORT, WSPC, ADA, CAML, ICK, BF, ASM, CLPS, PRLG, ICON, SCM qobi, PIKE, ST, NICE, LUA, BASH, NEM, LISP sbcl, LISP clisp, SCM guile, JS, ERL, TCL, PERL6, TEXT, CLOJ, FS